WorldWideScience

Sample records for selected time variable

  1. THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others

    2015-06-20

    We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7500 deg² selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel Density Estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4000 of which will be RR Lyrae stars which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.

  2. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  3. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series with the aim to suggest an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of random forests is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect to achieve higher predictive accuracy.
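
    The following is a minimal sketch of the setup described above: one-step-ahead forecasting of a univariate series with a random forest trained on a small number of recent lagged values. The synthetic AR(1) series, lag count, and forest settings are illustrative assumptions, not the paper's experimental configuration.

```python
# Minimal sketch (illustrative data): one-step-ahead forecasting of a
# univariate series with a random forest trained on a few recent lags.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic AR(1) series as a stand-in for the paper's simulated ARFIMA data.
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)

def make_lagged(series, n_lags):
    """Design matrix whose columns are the n_lags most recent values."""
    X = np.column_stack([series[lag:len(series) - n_lags + lag]
                         for lag in range(n_lags)])
    return X, series[n_lags:]

X, target = make_lagged(y, n_lags=3)      # a low number of recent lags
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:-1], target[:-1])               # hold out the final step
print("one-step forecast:", rf.predict(X[-1:])[0], "actual:", target[-1])
```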

  4. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on missing-value estimation followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated, in time order, into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study applies five imputation methods to handle the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed models in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model with variable selection has better forecasting performance than the listed models using the full set of variables. In addition, the experiments show that the proposed variable selection method can help the five forecasting methods used here to improve their forecasting capability.
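
    A hypothetical sketch of the three-stage pipeline described in the abstract (impute, select variables, forecast with a Random Forest), using scikit-learn stand-ins: a single mean imputer in place of the five imputation methods, univariate selection in place of the factor-analysis step, and invented column names with synthetic data rather than the Shimen Reservoir records.

```python
# Hypothetical sketch of the three-stage pipeline: impute missing values,
# select key variables, then forecast with a random forest. Column names and
# data are invented for illustration; they are not the Shimen Reservoir data.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 5.0, n),
    "inflow": rng.gamma(2.0, 8.0, n),
    "temperature": rng.normal(20, 5, n),
    "humidity": rng.uniform(40, 95, n),
})
df.loc[rng.choice(n, 40, replace=False), "rainfall"] = np.nan   # missing values
water_level = (0.5 * df["inflow"] + 0.2 * df["rainfall"].fillna(0)
               + rng.normal(0, 1, n))

model = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),    # stand-in for the 5 methods
    ("select", SelectKBest(f_regression, k=2)),    # stand-in for the FA step
    ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
])
model.fit(df, water_level)
print("in-sample R^2:", round(model.score(df, water_level), 3))
```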

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on missing-value estimation followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated, in time order, into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study applies five imputation methods to handle the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed models in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model with variable selection has better forecasting performance than the listed models using the full set of variables. In addition, the experiments show that the proposed variable selection method can help the five forecasting methods used here to improve their forecasting capability.

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on missing-value estimation followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated, in time order, into an integrated research dataset. The proposed time-series forecasting m...

  7. Selective attrition and intraindividual variability in response time moderate cognitive change.

    Science.gov (United States)

    Yao, Christie; Stawski, Robert S; Hultsch, David F; MacDonald, Stuart W S

    2016-01-01

    Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for the identification of potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences of aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Three hundred and four community-dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to that of individuals who remained in the study. These findings suggest that time to attrition is a useful metric for representing cognitive change, and reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults.

  8. Using Variable Dwell Time to Accelerate Gaze-based Web Browsing with Two-step Selection

    OpenAIRE

    Chen, Zhaokang; Shi, Bertram E.

    2017-01-01

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose an algorithm for adjusting the dwell times of different objects based on the inferred probability that the user intends to select them. In particular, we introduce a probabilistic model of natural gaze behavior while sur...

  9. Quantum Stephani exact cosmological solutions and the selection of time variable

    International Nuclear Information System (INIS)

    Pedram, P; Jalalzadeh, S; Gousheh, S S

    2007-01-01

    We study a perfect fluid Stephani quantum cosmological model. In the present work, Schutz's variational formalism, which recovers the notion of time, is applied. This gives rise to a Wheeler-DeWitt equation for the scale factor. We use the eigenfunctions in order to construct wave packets for each case. We study the time-dependent behavior of the expectation value of the scale factor, using the many-worlds and de Broglie-Bohm interpretations of quantum mechanics

  10. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
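
    A minimal sketch of the classification step described above: train a support vector machine on extracted light-curve features (period, amplitude, color, autocorrelation) and cross-validate. The feature values, class sizes, and kernel settings below are synthetic placeholders, not the MACHO training set.

```python
# Minimal sketch: train a support vector machine on extracted light-curve
# features (period, amplitude, color, autocorrelation) and cross-validate.
# Feature values and class sizes are synthetic placeholders, not MACHO data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_qso, n_other = 60, 600

# Columns: period, amplitude, color, autocorrelation value.
X_qso = rng.normal([0.0, 0.3, 0.5, 0.8], 0.2, size=(n_qso, 4))
X_other = rng.normal([1.0, 0.8, 1.5, 0.2], 0.4, size=(n_other, 4))
X = np.vstack([X_qso, X_other])
y = np.array([1] * n_qso + [0] * n_other)    # 1 = QSO, 0 = everything else

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
recall = cross_val_score(clf, X, y, cv=5, scoring="recall")  # QSOs recovered
print("cross-validated QSO recall per fold:", np.round(recall, 2))
```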

  11. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
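
    A sketch of one of the benchmarked strategies, forward selection wrapped around a random forest, on synthetic data. MARS itself is not part of scikit-learn and is therefore not shown; the dataset and feature counts below are assumptions.

```python
# Sketch of one benchmarked strategy: forward selection wrapped around a
# random forest, keeping ~40% of the variables (synthetic data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_regression(n_samples=300, n_features=20, n_informative=8,
                       noise=5.0, random_state=0)

rf = RandomForestRegressor(n_estimators=50, random_state=0)
selector = SequentialFeatureSelector(rf, n_features_to_select=8,
                                     direction="forward", cv=3)
selector.fit(X, y)
print("kept variables:", np.flatnonzero(selector.get_support()))
```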

  12. A spatio-temporal nonparametric Bayesian variable selection model of fMRI data for clustering correlated time courses.

    Science.gov (United States)

    Zhang, Linlin; Guindani, Michele; Versace, Francesco; Vannucci, Marina

    2014-07-15

    In this paper we present a novel wavelet-based Bayesian nonparametric regression model for the analysis of functional magnetic resonance imaging (fMRI) data. Our goal is to provide a joint analytical framework that allows us to detect regions of the brain which exhibit neuronal activity in response to a stimulus and, simultaneously, infer the association, or clustering, of spatially remote voxels that exhibit fMRI time series with similar characteristics. We start by modeling the data with a hemodynamic response function (HRF) with a voxel-dependent shape parameter. We detect regions of the brain activated in response to a given stimulus by using mixture priors with a spike at zero on the coefficients of the regression model. We account for the complex spatial correlation structure of the brain by using a Markov random field (MRF) prior on the parameters guiding the selection of the activated voxels, therefore capturing correlation among nearby voxels. In order to infer association of the voxel time courses, we assume correlated errors, in particular long memory, and exploit the whitening properties of discrete wavelet transforms. Furthermore, we achieve clustering of the voxels by imposing a Dirichlet process (DP) prior on the parameters of the long memory process. For inference, we use Markov Chain Monte Carlo (MCMC) sampling techniques that combine Metropolis-Hastings schemes employed in Bayesian variable selection with sampling algorithms for nonparametric DP models. We explore the performance of the proposed model on simulated data, with both block- and event-related designs, and on real fMRI data. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    Partial-correlation-based variable selection was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as an alternative to regularization methods for variable selection. This paper addresses two important issues related to partial-correlation-based variable selection: (a) whether this method is sensitive to the normality assumption, and (b) whether this method is valid when the dimension of the predictors increases at an exponential rate of the sample size. To address issue (a), we systematically study this method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictors is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of the partial-correlation-based variable selection procedure, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of TPC is obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly used regularization methods for variable selection.
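
    An illustrative sketch of the screening idea behind TPC: compute each predictor's partial correlation with the response, given all other predictors, from the precision (inverse covariance) matrix, and keep those whose absolute partial correlation exceeds a threshold. The cutoff value and data are arbitrary stand-ins; the paper's theoretically justified threshold is not reproduced.

```python
# Illustrative sketch of thresholded partial correlation (TPC) screening:
# compute each predictor's partial correlation with the response, given all
# other predictors, from the precision matrix and keep those above a cutoff.
# The cutoff below is arbitrary, not the paper's theoretically derived one.
import numpy as np
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=400, n_features=15, n_informative=4,
                       noise=2.0, random_state=0)

Z = np.column_stack([y, X])                 # response in column 0
precision = np.linalg.inv(np.cov(Z, rowvar=False))

# Partial correlation of y with predictor j given the rest:
# rho_{0j} = -Omega_{0j} / sqrt(Omega_{00} * Omega_{jj})
d = np.sqrt(np.diag(precision))
partial_corr = -precision[0, 1:] / (d[0] * d[1:])

threshold = 0.1                             # hypothetical cutoff
print("selected predictors:", np.flatnonzero(np.abs(partial_corr) > threshold))
```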

  14. SELECTING QUASARS BY THEIR INTRINSIC VARIABILITY

    International Nuclear Information System (INIS)

    Schmidt, Kasper B.; Rix, Hans-Walter; Jester, Sebastian; Hennawi, Joseph F.; Marshall, Philip J.; Dobler, Gregory

    2010-01-01

    We present a new and simple technique for selecting extensive, complete, and pure quasar samples, based on their intrinsic variability. We parameterize the single-band variability by a power-law model for the light-curve structure function, with amplitude A and power-law index γ. We show that quasars can be efficiently separated from other non-variable and variable sources by the location of the individual sources in the A-γ plane. We use ∼60 epochs of imaging data, taken over ∼5 years, from the SDSS stripe 82 (S82) survey, where extensive spectroscopy provides a reference sample of quasars, to demonstrate the power of variability as a quasar classifier in multi-epoch surveys. For UV-excess selected objects, variability performs just as well as the standard SDSS color selection, identifying quasars with a completeness of 90% and a purity of 95%. In the redshift range 2.5 < z < 3, where color selection is known to be problematic, variability can select quasars with a completeness of 90% and a purity of 96%. This is a factor of 5-10 times more pure than existing color selection of quasars in this redshift range. Selecting objects from a broad griz color box without u-band information, variability selection in S82 can afford completeness and purity of 92%, despite a factor of 30 more contaminants than quasars in the color-selected feeder sample. This confirms that the fraction of quasars hidden in the 'stellar locus' of color space is small. To test variability selection in the context of Pan-STARRS 1 (PS1) we created mock PS1 data by down-sampling the S82 data to just six epochs over 3 years. Even with this much sparser time sampling, variability is an encouragingly efficient classifier. For instance, a 92% pure and 44% complete quasar candidate sample is attainable from the above griz-selected catalog. Finally, we show that the presented A-γ technique, besides selecting clean and pure samples of quasars (which are stochastically varying objects), is also
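
    A minimal sketch of estimating the two selection parameters for a single light curve: build a binned structure function from all pairwise magnitude differences and fit a power law with amplitude A and index γ. The synthetic random-walk light curve and the simple binned least-squares fit are stand-ins for the survey photometry and the authors' estimator.

```python
# Minimal sketch: estimate the structure-function amplitude A and power-law
# index gamma for one (synthetic) light curve, the two coordinates of the
# A-gamma selection plane. A simple binned least-squares fit replaces the
# authors' estimator.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 5 * 365.25, 60))        # ~60 epochs over ~5 years
mag = np.cumsum(rng.normal(0, 0.05, t.size))       # random-walk-like variability

# All pairwise magnitude differences and time lags (in years).
i, j = np.triu_indices(t.size, k=1)
dmag = mag[j] - mag[i]
dt = (t[j] - t[i]) / 365.25

# Binned RMS structure function.
bins = np.logspace(-1.5, np.log10(dt.max()), 12)
idx = np.digitize(dt, bins)
lags, sf = [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 5:
        lags.append(dt[mask].mean())
        sf.append(np.sqrt(np.mean(dmag[mask] ** 2)))

(A, gamma), _ = curve_fit(lambda x, A, g: A * x ** g,
                          np.array(lags), np.array(sf), p0=[0.1, 0.5])
print("A = %.3f mag, gamma = %.3f" % (A, gamma))
```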

  15. Penalized variable selection in competing risks regression.

    Science.gov (United States)

    Fu, Zhixuan; Parikh, Chirag R; Zhou, Bingqing

    2017-07-01

    Penalized variable selection methods have been extensively studied for standard time-to-event data. Such methods cannot be directly applied when subjects are at risk of multiple mutually exclusive events, known as competing risks. The proportional subdistribution hazard (PSH) model proposed by Fine and Gray (J Am Stat Assoc 94:496-509, 1999) has become a popular semi-parametric model for time-to-event data with competing risks. It allows for direct assessment of covariate effects on the cumulative incidence function. In this paper, we propose a general penalized variable selection strategy that simultaneously handles variable selection and parameter estimation in the PSH model. We rigorously establish the asymptotic properties of the proposed penalized estimators and modify the coordinate descent algorithm for implementation. Simulation studies are conducted to demonstrate the good performance of the proposed method. Data from deceased donor kidney transplants from the United Network of Organ Sharing illustrate the utility of the proposed method.

  16. Robust cluster analysis and variable selection

    CERN Document Server

    Ritter, Gunter

    2014-01-01

    Clustering remains a vibrant area of research in statistics. Although there are many books on this topic, there are relatively few that are well founded in the theoretical aspects. In Robust Cluster Analysis and Variable Selection, Gunter Ritter presents an overview of the theory and applications of probabilistic clustering and variable selection, synthesizing the key research results of the last 50 years. The author focuses on the robust clustering methods he found to be the most useful on simulated data and real-time applications. The book provides clear guidance for the varying needs of bot

  17. Quantification of glutathione transverse relaxation time T2 using echo time extension with variable refocusing selectivity and symmetry in the human brain at 7 Tesla

    Science.gov (United States)

    Swanberg, Kelley M.; Prinsen, Hetty; Coman, Daniel; de Graaf, Robin A.; Juchem, Christoph

    2018-05-01

    Glutathione (GSH) is an endogenous antioxidant implicated in numerous biological processes, including those associated with multiple sclerosis, aging, and cancer. Spectral editing techniques have greatly facilitated the acquisition of glutathione signal in living humans via proton magnetic resonance spectroscopy, but signal quantification at 7 Tesla is still hampered by uncertainty about the glutathione transverse decay rate T2 relative to those of commonly employed quantitative references like N-acetyl aspartate (NAA), total creatine, or water. While the T2 of uncoupled singlets can be derived in a straightforward manner from exponential signal decay as a function of echo time, similar estimation of signal decay in GSH is complicated by a spin system that involves both weak and strong J-couplings as well as resonances that overlap those of several other metabolites and macromolecules. Here, we extend a previously published method for quantifying the T2 of GABA, a weakly coupled system, to quantify T2 of the strongly coupled spin system glutathione in the human brain at 7 Tesla. Using full density matrix simulation of glutathione signal behavior, we selected an array of eight optimized echo times between 72 and 322 ms for glutathione signal acquisition by J-difference editing (JDE). We varied the selectivity and symmetry parameters of the inversion pulses used for echo time extension to further optimize the intensity, simplicity, and distinctiveness of glutathione signals at chosen echo times. Pairs of selective adiabatic inversion pulses replaced nonselective pulses at three extended echo times, and symmetry of the time intervals between the two extension pulses was adjusted at one extended echo time to compensate for J-modulation, thereby resulting in appreciable signal-to-noise ratio and quantifiable signal shapes at all measured points. Glutathione signal across all echo times fit smooth monoexponential curves over ten scans of occipital cortex voxels in nine
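
    A sketch of the final quantification step described above: fitting a monoexponential decay S(TE) = S0·exp(−TE/T2) to edited glutathione signal amplitudes across the extended echo times. The intermediate echo-time values and signal amplitudes below are synthetic assumptions; only the 72-322 ms range comes from the abstract.

```python
# Sketch of the final quantification step: fit S(TE) = S0 * exp(-TE / T2) to
# edited glutathione amplitudes at the extended echo times. Intermediate echo
# times and signal values are synthetic; only the 72-322 ms range is from the
# abstract.
import numpy as np
from scipy.optimize import curve_fit

te = np.array([72, 112, 152, 192, 232, 272, 302, 322], dtype=float)  # ms, assumed
rng = np.random.default_rng(4)
true_t2 = 145.0                                                       # ms, assumed
signal = np.exp(-te / true_t2) + rng.normal(0, 0.01, te.size)

def mono_exp(te, s0, t2):
    return s0 * np.exp(-te / t2)

(s0_hat, t2_hat), _ = curve_fit(mono_exp, te, signal, p0=[1.0, 100.0])
print("estimated T2 = %.1f ms" % t2_hat)
```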

  18. Pulse timing for cataclysmic variables

    International Nuclear Information System (INIS)

    Chester, T.J.

    1979-01-01

    It is shown that present pulse timing measurements of cataclysmic variables can be explained by models of accretion disks in these systems, and thus such measurements can constrain disk models. The model for DQ Her correctly predicts the amplitude variation of the continuum pulsation and can also perhaps explain the asymmetric amplitude of the pulsed λ4686 emission line. Several other predictions can be made from the model. In particular, if pulse timing measurements that resolve emission lines both in wavelength and in binary phase can be made, the projected orbital radius of the white dwarf could be deduced.

  19. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can do consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso performs identically to the lasso.
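
    A minimal sketch of the standard adaptive-lasso reweighting trick that the discussion above relies on: derive adaptive weights from an initial fit, rescale the predictor columns by those weights, run an ordinary lasso, and transform the coefficients back. The ridge-based initial fit and synthetic data are assumptions, not the authors' specific algorithm.

```python
# Minimal sketch of the adaptive lasso via the usual reweighting trick:
# adaptive weights from an initial ridge fit, column rescaling, an ordinary
# lasso, and a transform back. Not the authors' specific algorithm.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, LassoCV

X, y = make_regression(n_samples=200, n_features=12, n_informative=4,
                       noise=3.0, random_state=0)

beta_init = Ridge(alpha=1.0).fit(X, y).coef_
weights = 1.0 / (np.abs(beta_init) + 1e-8)      # adaptive weights w_j = 1/|beta_j|

X_scaled = X / weights                          # divide column j by w_j
lasso = LassoCV(cv=5).fit(X_scaled, y)
beta_adaptive = lasso.coef_ / weights           # back to the original scale
print("selected variables:", np.flatnonzero(beta_adaptive != 0))
```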

  20. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...

  1. Purposeful selection of variables in logistic regression

    Directory of Open Access Journals (Sweden)

    Williams David Keith

    2008-12-01

    Full Text Available Abstract. Background: The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on the clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods: In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results: We show that the advantage of this approach is when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion: If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones, they should consider this macro as an alternative tool.
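
    A hypothetical Python sketch of a single purposeful-selection cycle (the paper's implementation is a SAS macro): drop the least significant covariate if its p-value exceeds a cutoff, but flag it as a confounder when its removal shifts the remaining coefficients by more than roughly 20%. Variable names, cutoffs, and data are illustrative.

```python
# Hypothetical sketch of one purposeful-selection cycle in Python (the paper's
# implementation is a SAS macro): drop the least significant covariate if its
# p-value exceeds 0.10, but flag it as a confounder if its removal shifts the
# remaining coefficients by more than ~20%. Data and names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
X = pd.DataFrame({"age": rng.normal(60, 10, n),
                  "bmi": rng.normal(27, 4, n),
                  "smoker": rng.integers(0, 2, n).astype(float)})
prob = 1 / (1 + np.exp(-(-10 + 0.15 * X["age"] + 0.8 * X["smoker"])))
y = rng.binomial(1, prob)

full = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
pvals = full.pvalues.drop("const")
worst = pvals.idxmax()
if pvals[worst] > 0.10:                                   # not significant
    reduced = sm.Logit(y, sm.add_constant(X.drop(columns=worst))).fit(disp=0)
    shift = np.abs((reduced.params - full.params.drop(worst))
                   / full.params.drop(worst))
    confounder = bool((shift.drop("const") > 0.20).any())
    print("dropped", worst, "| retain as confounder:", confounder)
else:
    print("all covariates significant at the 0.10 level")
```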

  2. Machine learning techniques to select variable stars

    Directory of Open Access Journals (Sweden)

    García-Varela Alejandro

    2017-01-01

    Full Text Available In order to perform a supervised classification of variable stars, we propose and evaluate a set of six features extracted from the magnitude density of the light curves. They are used to train automatic classification systems using state-of-the-art classifiers implemented in the R statistical computing environment. We find that random forests is the most successful method to select variables.

  3. Survival time and effect of selected predictor variables on survival in owned pet cats seropositive for feline immunodeficiency and leukemia virus attending a referral clinic in northern Italy.

    Science.gov (United States)

    Spada, Eva; Perego, Roberta; Sgamma, Elena Assunta; Proverbio, Daniela

    2018-02-01

    Feline immunodeficiency virus (FIV) and feline leukemia virus (FeLV) are among the most important feline infectious diseases worldwide. This retrospective study investigated survival times and the effects of selected predictor factors on survival time in a population of owned pet cats in Northern Italy testing positive for the presence of FIV antibodies and FeLV antigen. One hundred and three retrovirus-seropositive cats, 53 FIV-seropositive cats, 40 FeLV-seropositive cats, and 10 FIV+FeLV-seropositive cats were included in the study. A population of 103 retrovirus-seronegative age- and sex-matched cats was selected. Survival time was calculated and compared between retrovirus-seronegative, FIV, FeLV and FIV+FeLV-seropositive cats using Kaplan-Meier survival analysis. Cox proportional-hazards regression analysis was used to study the effect of selected predictor factors (male gender, peripheral blood cytopenia as reduced red blood cell - RBC - count, leukopenia, neutropenia and lymphopenia, hypercreatininemia and reduced albumin to globulin ratio) on survival time in retrovirus-seropositive populations. Median survival times for seronegative, FIV, FeLV and FIV+FeLV-seropositive cats were 3960, 2040, 714 and 77 days, respectively. Compared to retrovirus-seronegative cats, median survival time was significantly lower (P<0.000) in FeLV and FIV+FeLV-seropositive cats. Median survival time in FeLV and FIV+FeLV-seropositive cats was also significantly lower (P<0.000) when compared to FIV-seropositive cats. The hazard ratio of death in FeLV and FIV+FeLV-seropositive cats was 3.4 and 7.4 times higher, respectively, in comparison to seronegative cats, and 2.3 and 4.8 times higher when compared to FIV-seropositive cats. A Cox proportional-hazards regression analysis showed that FIV and FeLV-seropositive cats with reduced RBC counts at time of diagnosis of seropositivity had significantly shorter survival times when compared to FIV and Fe
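
    A minimal sketch of the survival analyses described above using the lifelines package: Kaplan-Meier estimates per serostatus group and a Cox proportional-hazards model for predictor effects. The simulated data and column names are assumptions, not the referral clinic's records.

```python
# Minimal sketch of the analyses above with the lifelines package: Kaplan-Meier
# curves per serostatus group and a Cox proportional-hazards model. The data
# and columns are simulated stand-ins, not the referral clinic's records.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(6)
n = 200
df = pd.DataFrame({
    "days": rng.exponential(1500, n) + 1,
    "died": rng.integers(0, 2, n),
    "felv_positive": rng.integers(0, 2, n),
    "reduced_rbc": rng.integers(0, 2, n),
})
df.loc[df["felv_positive"] == 1, "days"] *= 0.4   # FeLV+ cats die earlier (toy data)

kmf = KaplanMeierFitter()
for status, group in df.groupby("felv_positive"):
    kmf.fit(group["days"], event_observed=group["died"], label=f"FeLV={status}")
    print(f"FeLV={status} median survival:", kmf.median_survival_time_)

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="died")
print(cph.hazard_ratios_)     # hazard ratios for FeLV status and reduced RBC
```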

  4. Time-adjusted variable resistor

    Science.gov (United States)

    Heyser, R. C.

    1972-01-01

    A timing mechanism was developed that effects a precisely adjustable, high-resistance fixed resistor. Switches shunt all or a portion of the resistor; the effective resistance is varied over a time interval by adjusting the switch closure rate.

  5. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, for instance imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.

  6. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li

    2014-09-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure, as if the correct submodel were known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulated examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.

  7. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  8. Travel time variability and rational inattention

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Jiang, Gege

    2017-01-01

    This paper sets up a rational inattention model for the choice of departure time for a traveler facing random travel time. The traveler chooses how much information to acquire about the travel time outcome before choosing departure time. This reduces the cost of travel time variability compared...

  9. ENSEMBLE VARIABILITY OF NEAR-INFRARED-SELECTED ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Kouzuma, S.; Yamaoka, H.

    2012-01-01

    We present the properties of the ensemble variability V for nearly 5000 near-infrared active galactic nuclei (AGNs) selected from the catalog of Quasars and Active Galactic Nuclei (13th Edition) and the SDSS-DR7 quasar catalog. From three near-infrared point source catalogs, namely, Two Micron All Sky Survey (2MASS), Deep Near Infrared Survey (DENIS), and UKIDSS/LAS catalogs, we extract 2MASS-DENIS and 2MASS-UKIDSS counterparts for cataloged AGNs by cross-identification between catalogs. We further select variable AGNs based on an optimal criterion for selecting the variable sources. The sample objects are divided into subsets according to whether near-infrared light originates by optical emission or by near-infrared emission in the rest frame; and we examine the correlations of the ensemble variability with the rest-frame wavelength, redshift, luminosity, and rest-frame time lag. In addition, we also examine the correlations of variability amplitude with optical variability, radio intensity, and radio-to-optical flux ratio. The rest-frame optical variability of our samples shows negative correlations with luminosity and positive correlations with rest-frame time lag (i.e., the structure function, SF), and this result is consistent with previous analyses. However, no well-known negative correlation exists between the rest-frame wavelength and optical variability. This inconsistency might be due to a biased sampling of high-redshift AGNs. Near-infrared variability in the rest frame is anticorrelated with the rest-frame wavelength, which is consistent with previous suggestions. However, correlations of near-infrared variability with luminosity and rest-frame time lag are the opposite of these correlations of the optical variability; that is, the near-infrared variability is positively correlated with luminosity but negatively correlated with the rest-frame time lag. Because these trends are qualitatively consistent with the properties of radio-loud quasars reported

  10. Variability of Travel Times on New Jersey Highways

    Science.gov (United States)

    2011-06-01

    This report presents the results of a link and path travel time study conducted on selected New Jersey (NJ) highways to produce estimates of the corresponding variability of travel time (VTT) by departure time of the day and days of the week. The tra...

  11. Additive measures of travel time variability

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2011-01-01

    This paper derives a measure of travel time variability for travellers equipped with scheduling preferences defined in terms of time-varying utility rates, and who choose departure time optimally. The corresponding value of travel time variability is a constant that depends only on preference parameters. The measure is unique in being additive with respect to independent parts of a trip. It has the variance of travel time as a special case. Extension is provided to the case of travellers who use a scheduled service with fixed headway.

  12. Can time be a discrete dynamical variable

    International Nuclear Information System (INIS)

    Lee, T.D.

    1983-01-01

    The possibility that time can be regarded as a discrete dynamical variable is examined through all phases of mechanics: from classical mechanics to nonrelativistic quantum mechanics, and to relativistic quantum field theories. (orig.)

  13. CHARACTERIZING THE OPTICAL VARIABILITY OF BRIGHT BLAZARS: VARIABILITY-BASED SELECTION OF FERMI ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Davenport, James R. A.; Ivezić, Željko; Burnett, T. H.; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-01-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ∼30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescales of variability τ, and driving amplitudes on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E ≥ 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ∼3 years in the rest frame of the jet, in contrast with the ∼320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.
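
    An illustrative sketch of the damped-random-walk (Ornstein-Uhlenbeck) model underlying the selection: simulate a light curve with timescale τ and driving amplitude σ̂, then apply a simple cut in the (τ, σ̂) plane. The likelihood fit used in the paper is not reproduced, and the sampling, parameter values, and cut thresholds are arbitrary assumptions.

```python
# Illustrative sketch: exact simulation of a damped random walk (Ornstein-
# Uhlenbeck process) light curve with timescale tau and driving amplitude
# sigma_hat, plus a toy cut in the (tau, sigma_hat) plane. The paper obtains
# tau and sigma_hat by fitting real light curves; that fit is not reproduced,
# and all numbers below are arbitrary.
import numpy as np

def simulate_drw(times, tau, sigma_hat, mean_mag=18.0, rng=None):
    """Exact OU update: X(t+dt) ~ N(X(t)*exp(-dt/tau), var*(1 - exp(-2dt/tau)))."""
    rng = rng if rng is not None else np.random.default_rng()
    stationary_var = 0.5 * sigma_hat ** 2 * tau
    x = np.empty(len(times))
    x[0] = rng.normal(0.0, np.sqrt(stationary_var))
    for k in range(1, len(times)):
        decay = np.exp(-(times[k] - times[k - 1]) / tau)
        x[k] = rng.normal(x[k - 1] * decay,
                          np.sqrt(stationary_var * (1.0 - decay ** 2)))
    return mean_mag + x

rng = np.random.default_rng(7)
times = np.sort(rng.uniform(0, 3 * 365.25, 25))   # sparse, irregular sampling
mag = simulate_drw(times, tau=200.0, sigma_hat=0.02, rng=rng)

# Hypothetical selection cut; in practice tau and sigma_hat come from a fit.
tau_fit, sigma_fit = 200.0, 0.02
print("blazar candidate:", tau_fit > 100.0 and sigma_fit > 0.01)
```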

  14. Travel time variability and airport accessibility

    NARCIS (Netherlands)

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2011-01-01

    We analyze the cost of access travel time variability for air travelers. Reliable access to airports is important since the cost of missing a flight is likely to be high. First, the determinants of the preferred arrival times at airports are analyzed. Second, the willingness to pay (WTP) for

  15. Comparison of selected variables of gaming performance in football

    OpenAIRE

    Parachin, Jiří

    2014-01-01

    Title: Comparison of selected variables of gaming performance in football. Objectives: Analysis of selected variables of gaming performance in the matches of professional Czech football teams in the Champions League and UEFA Europa League in 2013; to register the set variables during observation, then evaluate and compare the obtained results. Methods: Observational analysis and comparison of selected variables of gaming performance in competitive matches of professional football. ...

  16. Fuzzy target selection using RFM variables

    NARCIS (Netherlands)

    Kaymak, U.

    2001-01-01

    An important data mining problem from the world of direct marketing is target selection. The main task in target selection is the determination of potential customers for a product from a client database. Target selection algorithms identify the profiles of customer groups for a particular product,

  17. Public transport travel time and its variability

    OpenAIRE

    Mazloumi Shomali, Ehsan

    2017-01-01

    Executive Summary: Public transport agencies around the world are constantly trying to improve the performance of their service, and to provide passengers with a more reliable service. Two major measures to evaluate the performance of a transit system include travel time and travel time variability. Information on these two measures provides operators with a capacity to identify the problematic locations in a transport system and improve operating plans. Likewise, users can benefit through...

  18. Travel time variability and airport accessibility

    OpenAIRE

    Koster, P.R.; Kroes, E.P.; Verhoef, E.T.

    2010-01-01

    This discussion paper resulted in a publication in Transportation Research Part B: Methodological (2011). Vol. 45(10), pages 1545-1559. This paper analyses the cost of access travel time variability for air travelers. Reliable access to airports is important since it is likely that the cost of missing a flight is high. First, the determinants of the preferred arrival times at airports are analyzed, including trip purpose, type of airport, flight characteristics, travel experience, type of che...

  19. Characterizing the Optical Variability of Bright Blazars: Variability-based Selection of Fermi Active Galactic Nuclei

    Science.gov (United States)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Burnett, T. H.; Davenport, James R. A.; Ivezić, Željko; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-11-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ~30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescales of variability τ, and driving amplitudes on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E ≥ 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ~3 years in the rest frame of the jet, in contrast with the ~320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.

  20. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
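
    A minimal, independent Python sketch of GA-based variable selection for a logistic regression model (the paper itself provides R code, which is not reproduced here): individuals are binary inclusion masks, fitness is cross-validated accuracy, and truncation selection, one-point crossover, and bit-flip mutation evolve the population. All settings and data are illustrative.

```python
# Independent Python sketch of GA-based variable selection for logistic
# regression (the paper provides R code, not reproduced here). Individuals are
# binary inclusion masks; fitness is cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop_size, n_generations, mutation_rate = 20, 15, 0.05
population = rng.random((pop_size, X.shape[1])) < 0.5      # random initial masks

for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation
    children = []
    while len(children) < pop_size - len(parents):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                   # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child ^= rng.random(child.size) < mutation_rate     # bit-flip mutation
        children.append(child)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected variables:", np.flatnonzero(best))
```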

  1. Periodicity and stability for variable-time impulsive neural networks.

    Science.gov (United States)

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. It is shown, under several newly proposed assumptions, that each solution of the system intersects each discontinuous surface exactly once. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to establish the same stability properties between variable-time impulsive neural networks and the fixed-time ones. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    Science.gov (United States)

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  3. Age-related change in renal corticomedullary differentiation: evaluation with noncontrast-enhanced steady-state free precession (SSFP) MRI with spatially selective inversion pulse using variable inversion time.

    Science.gov (United States)

    Noda, Yasufumi; Kanki, Akihiko; Yamamoto, Akira; Higashi, Hiroki; Tanimoto, Daigo; Sato, Tomohiro; Higaki, Atsushi; Tamada, Tsutomu; Ito, Katsuyoshi

    2014-07-01

    To evaluate age-related change in renal corticomedullary differentiation and renal cortical thickness by means of noncontrast-enhanced steady-state free precession (SSFP) magnetic resonance imaging (MRI) with spatially selective inversion recovery (IR) pulse. The Institutional Review Board of our hospital approved this retrospective study and patient informed consent was waived. This study included 48 patients without renal diseases who underwent noncontrast-enhanced SSFP MRI with spatially selective IR pulse using variable inversion times (TIs) (700-1500 msec). The signal intensities of the renal cortex and medulla were measured to calculate the renal corticomedullary contrast ratio. Additionally, renal cortical thickness was measured. The renal corticomedullary junction was clearly depicted in all patients. The mean cortical thickness was 3.9 ± 0.83 mm. The mean corticomedullary contrast ratio was 4.7 ± 1.4. There was a negative correlation between optimal TI for the best visualization of renal corticomedullary differentiation and age (r = -0.378; P = 0.001). However, there was no significant correlation between renal corticomedullary contrast ratio and age (r = 0.187; P = 0.20). Similarly, no significant correlation was observed between renal cortical thickness and age (r = 0.054; P = 0.712). In the normal kidney, noncontrast-enhanced SSFP MRI with spatially selective IR pulse can be used to assess renal corticomedullary differentiation and cortical thickness without the influence of aging, although optimal TI values for the best visualization of renal corticomedullary junction were shortened with aging. © 2013 Wiley Periodicals, Inc.

  4. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    Science.gov (United States)

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Bayesian Group Bridge for Bi-level Variable Selection.

    Science.gov (United States)

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  6. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, the exponentially decreasing function (EDF), which embodies the simple and effective 'survival of the fittest' principle from Darwin's theory of natural evolution, is employed to determine the number of variables to keep and to continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets to construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lowest root mean square error of cross-validation (RMSECV). The frequency of each variable appearing in the best 10% of sub-models is computed: the higher the frequency, the more important the variable. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.
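
    The core of the strategy described above (BMS sampling, evaluation of a population of sub-models, and counting how often each variable appears in the best 10% of them) can be illustrated compactly. The sketch below is a hypothetical, simplified single EDF round rather than the authors' MATLAB implementation; PLS with 5-fold RMSECV is assumed as the modelling and evaluation choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def vcpa_like_frequencies(X, y, n_subsets=500, subset_frac=0.5, top_frac=0.1, seed=0):
    """Frequency of each variable in the best-performing randomly sampled sub-models."""
    rng = np.random.default_rng(seed)
    n_vars = X.shape[1]
    n_keep = max(2, int(subset_frac * n_vars))
    rmsecv = np.empty(n_subsets)
    masks = np.zeros((n_subsets, n_vars), dtype=bool)
    for i in range(n_subsets):
        # Binary matrix sampling: every variable gets the same chance of selection.
        cols = rng.choice(n_vars, size=n_keep, replace=False)
        masks[i, cols] = True
        pls = PLSRegression(n_components=min(2, n_keep))
        score = cross_val_score(pls, X[:, cols], y, cv=5,
                                scoring="neg_root_mean_squared_error")
        rmsecv[i] = -score.mean()
    # Model population analysis: keep the best 10% of sub-models and count variables.
    best = np.argsort(rmsecv)[: max(1, int(top_frac * n_subsets))]
    return masks[best].mean(axis=0)   # higher frequency = more important variable
```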

  7. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
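
    A minimal sketch of the subsampling loop behind stability selection follows, shown for an ordinary linear model with scikit-learn's Lasso for brevity; the Cox-model setting of the paper would substitute a penalized Cox fitter, and the regularization grid and threshold below are assumptions rather than the authors' recommended Λ and λmin.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, lambdas, n_subsamples=100, threshold=0.6, seed=0):
    """Select variables whose selection probability over subsamples exceeds a threshold."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    hits = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)   # subsample half the data
        chosen = np.zeros(p, dtype=bool)
        for lam in lambdas:                               # grid over the regularization region
            coef = Lasso(alpha=lam, max_iter=10_000).fit(X[idx], y[idx]).coef_
            chosen |= coef != 0                           # selected at any lambda in the grid
        hits += chosen
    sel_prob = hits / n_subsamples
    return np.flatnonzero(sel_prob >= threshold), sel_prob
```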

  8. A Variable-Selection Heuristic for K-Means Clustering.

    Science.gov (United States)

    Brusco, Michael J.; Cradit, J. Dennis

    2001-01-01

    Presents a variable selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. Subjected the heuristic to Monte Carlo testing across more than 2,200 datasets. Results indicate that the heuristic is extremely effective at eliminating masking variables. (SLD)

  9. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks at each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
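
    The weighted binary matrix sampling loop can be sketched as below. This is a condensed, hypothetical rendering of the idea (the weights drive the sampling, and the frequency of each variable in the best sub-models becomes its new weight), not the authors' Matlab code; PLS with 5-fold RMSECV and a 10% retention fraction are assumed settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def vissa_like(X, y, n_submodels=200, top_frac=0.1, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    weights = np.full(p, 0.5)                      # initial inclusion probabilities
    for _ in range(n_iter):
        # Weighted binary matrix sampling: a variable enters a sub-model with prob = weight.
        masks = rng.random((n_submodels, p)) < weights
        rmsecv = np.full(n_submodels, np.inf)
        for i, m in enumerate(masks):
            if m.sum() < 2:
                continue
            pls = PLSRegression(n_components=min(2, int(m.sum())))
            rmsecv[i] = -cross_val_score(pls, X[:, m], y, cv=5,
                                         scoring="neg_root_mean_squared_error").mean()
        best = np.argsort(rmsecv)[: max(1, int(top_frac * n_submodels))]
        weights = masks[best].mean(axis=0)         # frequency in the best sub-models
    return np.flatnonzero(weights > 0.5)           # variables retained at the end
```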

  10. Variability of gastric emptying time using standardized radiolabeled meals

    International Nuclear Information System (INIS)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

    To define the range of inter- and intra-subject variability in gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers, respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half-emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and shows great inter- and intra-individual day-to-day differences.
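
    For illustration only, the half-emptying-time computation outlined in the abstract (geometric mean of anterior and posterior counts, decay correction, then reading off the 50% retention time) might look like the following sketch; the 6.01 h Tc-99m half-life is standard, while the array inputs and the assumption of monotonically decreasing retention are simplifications.

```python
import numpy as np

TC99M_HALF_LIFE_MIN = 6.01 * 60.0   # physical half-life of Tc-99m, in minutes

def half_emptying_time(t_min, anterior, posterior, half_life=TC99M_HALF_LIFE_MIN):
    t = np.asarray(t_min, dtype=float)
    # Geometric mean of anterior and posterior counts compensates for depth attenuation.
    counts = np.sqrt(np.asarray(anterior, float) * np.asarray(posterior, float))
    # Correct counts for radioactive decay back to the time of meal ingestion (t = 0).
    counts *= np.exp(np.log(2.0) * t / half_life)
    retention = counts / counts[0]    # fraction of the meal still in the stomach
    # Interpolate the time at which retention falls to 50% (assumes monotone emptying).
    return float(np.interp(0.5, retention[::-1], t[::-1]))
```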

  11. Variability of gastric emptying time using standardized radiolabeled meals

    Energy Technology Data Exchange (ETDEWEB)

    Christian, P.E.; Brophy, C.M.; Egger, M.J.; Taylor, A.; Moore, J.G.

    1984-01-01

    To define the range of inter- and intra-subject variability in gastric emptying measurements, eight healthy male subjects (ages 19-40) received meals on four separate occasions. The meal consisted of 150 g of beef stew labeled with Tc-99m SC labeled liver (600 μCi) and 150 g of orange juice containing In-111 DTPA (100 μCi) as the solid- and liquid-phase markers, respectively. Images of the solid and liquid phases were obtained at 20 min intervals immediately after meal ingestion. The stomach region was selected from digital images and data were corrected for radionuclide interference, radioactive decay and the geometric mean of anterior and posterior counts. More absolute variability was seen with the solid than the liquid marker emptying for the group. The mean solid half-emptying time was 58 ± 17 min (range 29-92) while the mean liquid half-emptying time was 24 ± 8 min (range 12-37). A nested random effects analysis of variance showed moderate intra-subject variability for solid half-emptying times (rho = 0.4594), and high intra-subject variability was implied by a low correlation (rho = 0.2084) for liquid half-emptying. The average inter-subject differences were 58.3% of the total variance for solids (rho = 0.0017). For liquids, the inter-subject variability was 69.1% of the total variance, but was only suggestive of statistical significance (rho = 0.0666). The normal half-emptying time for gastric emptying of liquids and solids is a variable phenomenon in healthy subjects and shows great inter- and intra-individual day-to-day differences.

  12. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  13. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that, by embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

  14. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be reconstructed effectively by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by a shortage of data.
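
    The ES-K idea reduces to enumerating every K-subset of explanatory variables and scoring each least-squares fit; a hedged sketch is given below. Cross-validated error is assumed as the score, and the full enumeration is only feasible for small problems, which is exactly why the paper introduces the replica-exchange AES-K variant.

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def exhaustive_k_sparse(X, y, k):
    """Return the best K-sparse combination of columns of X by cross-validated MSE."""
    best = (np.inf, None)
    for cols in itertools.combinations(range(X.shape[1]), k):
        mse = -cross_val_score(LinearRegression(), X[:, list(cols)], y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        if mse < best[0]:
            best = (mse, cols)
    return best   # (cross-validated MSE, tuple of selected column indices)
```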

  15. Variable selection in multivariate calibration based on clustering of variable concept.

    Science.gov (United States)

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

    Recently, we proposed a new variable selection algorithm based on the clustering of variables concept (CLoVA) for classification problems. With the same idea, this concept has been applied to a regression problem, and the obtained results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are clustered into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses a combination of clusters, has been proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can separate the useful part from the redundant ones, so that a stable model can be reached based on the informative clusters. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling in clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons, Ltd.
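
    The coupling step itself is small: per-covariate score-test p-values from the separate Cox models are combined with Fisher's method, and the covariates are ranked by the combined value. The sketch below assumes the p-value matrix has already been produced by a survival package, and a simple threshold stands in for the paper's stepwise and regularized implementations.

```python
import numpy as np
from scipy.stats import combine_pvalues

def coupled_covariate_ranking(pvalue_matrix, alpha=0.05):
    """pvalue_matrix: shape (n_covariates, n_models) of score-test p-values per Cox model."""
    combined = np.array([combine_pvalues(row, method="fisher")[1]   # combined p-value
                         for row in pvalue_matrix])
    order = np.argsort(combined)                  # most promising covariates first
    selected = order[combined[order] < alpha]     # simple thresholding for illustration
    return selected, combined
```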

  17. Commuters’ valuation of travel time variability in Barcelona

    OpenAIRE

    Javier Asensio; Anna Matas

    2007-01-01

    The value given by commuters to the variability of travel times is empirically analysed using stated preference data from Barcelona (Spain). Respondents are asked to choose between alternatives that differ in terms of cost, average travel time, variability of travel times and departure time. Different specifications of a scheduling choice model are used to measure the influence of various socioeconomic characteristics. Our results show that travel time variability.

  18. Portfolio Selection Based on Distance between Fuzzy Variables

    Directory of Open Access Journals (Sweden)

    Weiyi Qian

    2014-01-01

    This paper studies the portfolio selection problem in a fuzzy environment. We introduce a new simple method in which the distance between fuzzy variables is used to measure the divergence of the fuzzy investment return from a prior one. Firstly, two new mathematical models are proposed by expressing divergence as distance, investment return as expected value, and risk as variance and semivariance, respectively. Secondly, the crisp forms of the new models are provided for different types of fuzzy variables. Finally, several numerical examples are given to illustrate the effectiveness of the proposed approach.

  19. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research focuses on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various lead times are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
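
    A rough sketch of the selection step is shown below: a random forest ranks candidate lagged inputs by importance, and the top-ranked subset is passed on to the forecasting model. The lag construction, the number of trees and the number of features kept are assumptions, and the kernel extreme learning machine used in the paper for evaluation is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_lagged_inputs(series, max_lag):
    """Build a candidate input matrix of lagged wind speeds (assumed preprocessing)."""
    X = np.column_stack([series[max_lag - k: len(series) - k] for k in range(1, max_lag + 1)])
    y = series[max_lag:]
    return X, y

def select_inputs_by_rf(X_candidates, y, n_keep=10, seed=0):
    rf = RandomForestRegressor(n_estimators=500, random_state=seed)
    rf.fit(X_candidates, y)
    ranking = np.argsort(rf.feature_importances_)[::-1]   # most important first
    return ranking[:n_keep]                                # indices of retained inputs
```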

  20. Spot Welding Characterizations With Time Variable

    International Nuclear Information System (INIS)

    Abdul Hafid; Pinitoyo, A.; History; Paidjo, Andryansyah; Sagino, Sudarmin; Tamzil, M.

    2001-01-01

    This research was carried out to obtain effective spot welding data so that the operational time of the machine can be increased. The welding parameters are material classification, electrical current, and weld time; all of these factors determine welding quality. The thicker the plate, the longer the weld time required at constant current. Other factors that determine welding quality are the surface condition of the electrode, the surface condition of the weld material, and the material classification. In this research, the welding machine, type IP32A2 VI (110 V), Rivoira trademark, is characterized.

  1. Protein construct storage: Bayesian variable selection and prediction with mixtures.

    Science.gov (United States)

    Clyde, M A; Parmigiani, G

    1998-07-01

    Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.

  2. Mahalanobis distance and variable selection to optimize dose response

    International Nuclear Information System (INIS)

    Moore, D.H. II; Bennett, D.E.; Wyrobek, A.J.; Kranzler, D.

    1979-01-01

    A battery of statistical techniques is combined to improve the detection of low-level dose response. First, Mahalanobis distances are used to classify objects as normal or abnormal. Then the proportion classified abnormal is regressed on dose. Finally, a subset of regressor variables is selected which maximizes the slope of the dose-response line. Use of the techniques is illustrated by application to mouse sperm damaged by low doses of X-rays.
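
    Under assumed inputs (a feature matrix of sperm measurements, a dose label per object, and a mask marking the control group), the three stages described above can be sketched as follows; the chi-square cutoff for calling an object abnormal is an illustrative choice, not the paper's calibration.

```python
import numpy as np
from scipy.stats import chi2

def dose_response_slope(features, doses, control_mask, alpha=0.05):
    # Stage 1: Mahalanobis distance of every object from the control-group centroid.
    ref = features[control_mask]
    mu = ref.mean(axis=0)
    vi = np.linalg.inv(np.cov(ref, rowvar=False))     # inverse covariance of controls
    diff = features - mu
    d2 = np.einsum("ij,jk,ik->i", diff, vi, diff)     # squared Mahalanobis distances
    abnormal = d2 > chi2.ppf(1.0 - alpha, df=features.shape[1])
    # Stage 2: proportion classified abnormal within each dose group.
    levels = np.unique(doses)
    props = np.array([abnormal[doses == d].mean() for d in levels])
    # Stage 3: slope of the straight-line dose-response fit (to be maximized over
    # subsets of regressor variables in the full procedure).
    slope, intercept = np.polyfit(levels, props, deg=1)
    return slope, intercept, levels, props
```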

  3. STEPWISE SELECTION OF VARIABLES IN DEA USING CONTRIBUTION LOADS

    Directory of Open Access Journals (Sweden)

    Fernando Fernandez-Palacin

    In this paper, we propose a new methodology for variable selection in Data Envelopment Analysis (DEA). The methodology is based on an internal measure which evaluates the contribution of each variable to the calculation of the efficiency scores of the DMUs. In order to apply the proposed method, an algorithm, known as "ADEA", was developed and implemented in R. Step by step, the algorithm maximizes the load of the variable (input or output) which contributes least to the calculation of the efficiency scores, redistributing the weights of the variables without altering the efficiency scores of the DMUs. Once the weights have been redistributed, if the lowest contribution does not reach a previously given critical value, the variable with minimum contribution is removed from the model and, as a result, the DEA is solved again. The algorithm stops when all variables reach a given contribution load to the DEA or when no more variables can be removed. In this way, and contrary to what is usual, the algorithm provides a clear stopping rule. In both cases, the efficiencies obtained from the DEA can be considered suitable and rightly interpreted in terms of the remaining variables and their contribution loads; moreover, the algorithm provides a sequence of alternative nested models - potential solutions - that could be evaluated according to an external criterion. To illustrate the procedure, we have applied the proposed methodology to obtain a research ranking of Spanish public universities. In this case, at each step of the algorithm, the critical value is obtained based on a simulation study.

  4. Constructing Proxy Variables to Measure Adult Learners' Time Management Strategies in LMS

    Science.gov (United States)

    Jo, Il-Hyun; Kim, Dongho; Yoon, Meehyun

    2015-01-01

    This study describes the process of constructing proxy variables from recorded log data within a Learning Management System (LMS), which represent adult learners' time management strategies in an online course. Based on previous research, three variables of total login time, login frequency, and regularity of login interval were selected as…
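
    A hypothetical pandas sketch of the three proxy variables named above is given below; the column names ('user_id', 'login', 'logout') are assumptions, and regularity of login interval is taken here as the standard deviation of the gaps between consecutive logins (lower values meaning more regular behaviour).

```python
import pandas as pd

def time_management_proxies(logs: pd.DataFrame) -> pd.DataFrame:
    """logs: one row per session with datetime columns 'login' and 'logout' plus 'user_id'."""
    logs = logs.sort_values(["user_id", "login"])
    session_sec = (logs["logout"] - logs["login"]).dt.total_seconds()
    gaps_sec = logs.groupby("user_id")["login"].diff().dt.total_seconds()
    return pd.DataFrame({
        "total_login_time": session_sec.groupby(logs["user_id"]).sum(),
        "login_frequency": logs.groupby("user_id").size(),
        "login_regularity": gaps_sec.groupby(logs["user_id"]).std(),
    })
```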

  5. Variability-based active galactic nucleus selection using image subtraction in the SDSS and LSST era

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yumi; Gibson, Robert R.; Becker, Andrew C.; Ivezić, Željko; Connolly, Andrew J.; Ruan, John J.; Anderson, Scott F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); MacLeod, Chelsea L., E-mail: ymchoi@astro.washington.edu [Physics Department, U.S. Naval Academy, 572 Holloway Road, Annapolis, MD 21402 (United States)

    2014-02-10

    With upcoming all-sky surveys such as LSST poised to generate a deep digital movie of the optical sky, variability-based active galactic nucleus (AGN) selection will enable the construction of highly complete catalogs with minimum contamination. In this study, we generate g-band difference images and construct light curves (LCs) for QSO/AGN candidates listed in Sloan Digital Sky Survey Stripe 82 public catalogs compiled from different methods, including spectroscopy, optical colors, variability, and X-ray detection. Image differencing excels at identifying variable sources embedded in complex or blended emission regions such as Type II AGNs and other low-luminosity AGNs that may be omitted from traditional photometric or spectroscopic catalogs. To separate QSOs/AGNs from other sources using our difference image LCs, we explore several LC statistics and parameterize optical variability by the characteristic damping timescale (τ) and variability amplitude. By virtue of distinguishable variability parameters of AGNs, we are able to select them with high completeness of 93.4% and efficiency (i.e., purity) of 71.3%. Based on optical variability, we also select highly variable blazar candidates, whose infrared colors are consistent with known blazars. One-third of them are also radio detected. With the X-ray selected AGN candidates, we probe the optical variability of X-ray detected optically extended sources using their difference image LCs for the first time. A combination of optical variability and X-ray detection enables us to select various types of host-dominated AGNs. Contrary to the AGN unification model prediction, two Type II AGN candidates (out of six) show detectable variability on long-term timescales like typical Type I AGNs. This study will provide a baseline for future optical variability studies of extended sources.

  6. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.
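
    A rough sketch of such a screen-then-refine procedure is shown below using scikit-learn's L1-penalised QuantileRegressor; the adaptive second step is implemented via the usual column-rescaling trick, and the penalty levels are illustrative assumptions rather than the paper's tuning rules.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def two_step_quantile_selection(X, y, tau=0.5, alpha1=0.1, alpha2=0.05):
    # Step 1: L1-penalised quantile regression screens the high-dimensional covariates.
    step1 = QuantileRegressor(quantile=tau, alpha=alpha1, solver="highs").fit(X, y)
    keep = np.flatnonzero(step1.coef_ != 0)
    if keep.size == 0:
        return keep
    # Step 2: adaptive-lasso-style refit on the reduced model; rescaling each retained
    # column by |initial coefficient| penalises small coefficients more heavily.
    w = np.abs(step1.coef_[keep])
    step2 = QuantileRegressor(quantile=tau, alpha=alpha2, solver="highs").fit(X[:, keep] * w, y)
    return keep[step2.coef_ != 0]      # indices of covariates surviving both steps
```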

  7. Is Reaction Time Variability in ADHD Mainly at Low Frequencies?

    Science.gov (United States)

    Karalunas, Sarah L.; Huang-Pollock, Cynthia L.; Nigg, Joel T.

    2013-01-01

    Background: Intraindividual variability in reaction times (RT variability) has garnered increasing interest as an indicator of cognitive and neurobiological dysfunction in children with attention deficit hyperactivity disorder (ADHD). Recent theory and research has emphasized specific low-frequency patterns of RT variability. However, whether…

  8. A Simple K-Map Based Variable Selection Scheme in the Direct ...

    African Journals Online (AJOL)

    A multiplexer with (n-1) data select inputs can directly realise a function of n variables. In this paper, a simple K-map based variable selection scheme is proposed such that an n-variable logic function can be synthesised using a multiplexer with (n-q) data input variables and q data select variables. The procedure is based on ...

  9. Time-scales of stellar rotational variability and starspot diagnostics

    Science.gov (United States)

    Arkhypov, Oleksiy V.; Khodachenko, Maxim L.; Lammer, Helmut; Güdel, Manuel; Lüftinger, Teresa; Johnstone, Colin P.

    2018-01-01

    The difference in the stability of starspot distributions on global and hemispherical scales is studied through the rotational spot variability of 1998 main-sequence stars observed by the Kepler mission. It is found that the largest patterns are much more stable than smaller ones for cool, slow rotators, whereas the difference is less pronounced for hotter stars and/or faster rotators. This distinction is interpreted in terms of two mechanisms: (1) the diffusive decay of long-living spots in activity complexes of stars with saturated magnetic dynamos, and (2) spot emergence modulated by gigantic turbulent flows in the convection zones of stars with weaker magnetism. This opens a way to investigate stellar deep convection, which is as yet inaccessible to asteroseismology. Moreover, subdiffusion in stellar photospheres was revealed from observations for the first time. A diagnostic diagram is proposed that allows differentiation and selection of stars for more detailed studies of these phenomena.

  10. Isoenzymatic variability in tropical maize populations under reciprocal recurrent selection

    Directory of Open Access Journals (Sweden)

    Pinto Luciana Rossini

    2003-01-01

    Maize (Zea mays L.) is one of the crops in which genetic variability has been extensively studied at isoenzymatic loci. The genetic variability of the maize populations BR-105 and BR-106, and of the synthetics IG-3 and IG-4, obtained after one cycle of high-intensity reciprocal recurrent selection (RRS), was investigated at seven isoenzymatic loci. A total of twenty alleles were identified, and most of the private alleles were found in the BR-106 population. One cycle of reciprocal recurrent selection (RRS) caused reductions of 12% in the number of alleles in both populations. Changes in allele frequencies were also observed between populations and synthetics, mainly for the Est 2 locus. Populations presented similar values for the number of alleles per locus, percentage of polymorphic loci, and observed and expected heterozygosities. A decrease in the genetic variation values was observed for the synthetics as a consequence of genetic drift effects and the reduction of effective population sizes. The distribution of genetic diversity within and between populations revealed that most of the diversity was maintained within them, i.e. BR-105 x BR-106 (G_ST = 3.5%) and IG-3 x IG-4 (G_ST = 4.0%). The genetic distances between populations and synthetics increased by approximately 21%. An increase in the genetic divergence between the populations occurred without limiting new selection procedures.

  11. Chaotic Dynamical State Variables Selection Procedure Based Image Encryption Scheme

    Directory of Open Access Journals (Sweden)

    Zia Bashir

    2017-12-01

    Nowadays, in the modern digital era, the use of computer technologies such as smartphones, tablets and the Internet, together with the enormous quantity of confidential information being converted into digital form, has raised serious security issues. This, in turn, has led to rapid developments in cryptography, due to the imminent need for system security. Low-dimensional chaotic systems have low complexity and key space, yet they achieve high encryption speed. An image encryption scheme is proposed that, without compromising security, uses reasonable resources. We introduce a chaotic dynamic state variables selection procedure (CDSVSP) to use all state variables of a hyper-chaotic four-dimensional dynamical system. As a result, fewer iterations of the dynamical system are required and resources are saved, thus making the algorithm fast and suitable for practical use. The simulation results of security and other miscellaneous tests demonstrate that the suggested algorithm excels at robustness, security and high-speed encryption.

  12. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  13. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets, although model performance was high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  14. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Background: Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have 'hidden' advantages if the 'common good' produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results: Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of "selfish" alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists' advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions: The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  15. Spatial and temporal variability of interhemispheric transport times

    Science.gov (United States)

    Wu, Xiaokang; Yang, Huang; Waugh, Darryn W.; Orbe, Clara; Tilmes, Simone; Lamarque, Jean-Francois

    2018-05-01

    The seasonal and interannual variability of transport times from the northern midlatitude surface into the Southern Hemisphere is examined using simulations of three idealized age tracers: an ideal age tracer that yields the mean transit time from northern midlatitudes and two tracers with uniform 50- and 5-day decay. For all tracers the largest seasonal and interannual variability occurs near the surface within the tropics and is generally closely coupled to movement of the Intertropical Convergence Zone (ITCZ). There are, however, notable differences in variability between the different tracers. The largest seasonal and interannual variability in the mean age is generally confined to latitudes spanning the ITCZ, with very weak variability in the southern extratropics. In contrast, for tracers subject to spatially uniform exponential loss the peak variability tends to be south of the ITCZ, and there is a smaller contrast between tropical and extratropical variability. These differences in variability occur because the distribution of transit times from northern midlatitudes is very broad and tracers with more rapid loss are more sensitive to changes in fast transit times than the mean age tracer. These simulations suggest that the seasonal-interannual variability in the southern extratropics of trace gases with predominantly NH midlatitude sources may differ depending on the gases' chemical lifetimes.

  16. Predictor Variables for Marathon Race Time in Recreational Female Runners

    OpenAIRE

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Purpose: We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Methods: Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat, as well as training variables such as volume and speed in running training, were related to marathon race time using bi- and multi-varia...

  17. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors were modelled by CoMFA. First of all, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that for each data set 9 CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS need only a few seconds to run. Also, applying FFD, SRD-FFD, IVE-PLS, and SRD-UVE-PLS preserves CoMFA contour map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. Effects of carprofen or meloxicam on selected haemostatic variables in miniature pigs after orthopaedic surgery

    Directory of Open Access Journals (Sweden)

    Petr Raušer

    2011-01-01

    The aim of the study was to detect and compare haemostatic variables and bleeding after 7-day administration of carprofen or meloxicam in clinically healthy miniature pigs. Twenty-one clinically healthy Göttingen miniature pigs were divided into 3 groups. Selected haemostatic variables such as platelet count, prothrombin time, activated partial thromboplastin time, thrombin time and fibrinogen, serum biochemical variables such as total protein, bilirubin, urea, creatinine, alkaline phosphatase, alanine aminotransferase and gamma-glutamyltransferase, and haemoglobin, haematocrit, red blood cells, white blood cells and buccal mucosal bleeding time were assessed before and 7 days after daily intramuscular administration of saline (1.5 ml per animal, control group), carprofen (2 mg·kg-1) or meloxicam (0.1 mg·kg-1). In pigs receiving carprofen or meloxicam, the thrombin time was significantly increased (p < 0.05) compared to the control group. Significant differences were not detected in other haemostatic or biochemical variables or in bleeding time compared to the other groups or to the pretreatment values. Intramuscular administration of carprofen or meloxicam in healthy miniature pigs for 7 days causes sporadic, but not clinically important, changes in selected haemostatic variables. Therefore, we can recommend them for perioperative use, e.g. for their analgesic effects, in orthopaedic or other surgical procedures without increased bleeding.

  19. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.

  20. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
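
    As one concrete example of the interval-based family benchmarked in the paper, a bare-bones iPLS pass can be sketched as below: the spectrum is split into contiguous wavelength windows, a PLS model is cross-validated on each window, and the best window is reported. The window count and component limit are assumptions, not the settings used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def ipls_best_interval(X, y, n_intervals=20, max_components=5):
    """Assumes X has at least n_intervals wavelength columns."""
    p = X.shape[1]
    edges = np.linspace(0, p, n_intervals + 1, dtype=int)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        pls = PLSRegression(n_components=min(max_components, hi - lo))
        rmsecv = -cross_val_score(pls, X[:, lo:hi], y, cv=5,
                                  scoring="neg_root_mean_squared_error").mean()
        results.append((rmsecv, int(lo), int(hi)))
    return min(results)   # (RMSECV, start index, end index) of the best wavelength window
```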

  1. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.

  2. Genome-wide prediction of traits with different genetic architecture through efficient variable selection.

    Science.gov (United States)

    Wimmer, Valentin; Lehermeier, Christina; Albrecht, Theresa; Auinger, Hans-Jürgen; Wang, Yu; Schön, Chris-Carolin

    2013-10-01

    In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.

  3. The cost of travel time variability: three measures with properties

    DEFF Research Database (Denmark)

    Engelson, Leonid; Fosgerau, Mogens

    2016-01-01

    This paper explores the relationships between three types of measures of the cost of travel time variability: measures based on scheduling preferences and implicit departure time choice, Bernoulli type measures based on a univariate function of travel time, and mean-dispersion measures. We...

  4. Ethnic variability in adiposity and cardiovascular risk: the variable disease selection hypothesis.

    Science.gov (United States)

    Wells, Jonathan C K

    2009-02-01

    Evidence increasingly suggests that ethnic differences in cardiovascular risk are partly mediated by adipose tissue biology, which refers to the regional distribution of adipose tissue and its differential metabolic activity. This paper proposes a novel evolutionary hypothesis for ethnic genetic variability in adipose tissue biology. Whereas medical interest focuses on the harmful effect of excess fat, the value of adipose tissue is greatest during chronic energy insufficiency. Following Neel's influential paper on the thrifty genotype, proposed to have been favoured by exposure to cycles of feast and famine, much effort has been devoted to searching for genetic markers of 'thrifty metabolism'. However, whether famine-induced starvation was the primary selective pressure on adipose tissue biology has been questioned, while the notion that fat primarily represents a buffer against starvation appears inconsistent with historical records of mortality during famines. This paper reviews evidence for the role played by adipose tissue in immune function and proposes that adipose tissue biology responds to selective pressures acting through infectious disease. Different diseases activate the immune system in different ways and induce different metabolic costs. It is hypothesized that exposure to different infectious disease burdens has favoured ethnic genetic variability in the anatomical location of, and metabolic profile of, adipose tissue depots.

  5. Discrete-time BAM neural networks with variable delays

    Science.gov (United States)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  6. Discrete-time BAM neural networks with variable delays

    International Nuclear Information System (INIS)

    Liu Xinge; Tang Meilan; Martin, Ralph; Liu Xinbi

    2007-01-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development

  7. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  8. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  9. Combining epidemiologic and biostatistical tools to enhance variable selection in HIV cohort analyses.

    Directory of Open Access Journals (Sweden)

    Christopher Rentsch

    Full Text Available BACKGROUND: Variable selection is an important step in building a multivariate regression model for which several methods and statistical packages are available. A comprehensive approach for variable selection in complex multivariate regression analyses within HIV cohorts is explored by utilizing both epidemiological and biostatistical procedures. METHODS: Three different methods for variable selection were illustrated in a study comparing survival time between subjects in the Department of Defense's Natural History Study and the Atlanta Veterans Affairs Medical Center's HIV Atlanta VA Cohort Study. The first two methods were stepwise selection procedures, based either on significance tests (Score test), or on information theory (Akaike Information Criterion), while the third method employed a Bayesian argument (Bayesian Model Averaging). RESULTS: All three methods resulted in a similar parsimonious survival model. Three of the covariates previously used in the multivariate model were not included in the final model suggested by the three approaches. When comparing the parsimonious model to the previously published model, there was evidence of less variance in the main survival estimates. CONCLUSIONS: The variable selection approaches considered in this study allowed building a model based on significance tests, on an information criterion, and on averaging models using their posterior probabilities. A parsimonious model that balanced these three approaches was found to provide a better fit than the previously reported model.

  10. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  11. Variability in reaction time performance of younger and older adults.

    Science.gov (United States)

    Hultsch, David F; MacDonald, Stuart W S; Dixon, Roger A

    2002-03-01

    Age differences in three basic types of variability were examined: variability between persons (diversity), variability within persons across tasks (dispersion), and variability within persons across time (inconsistency). Measures of variability were based on latency performance from four measures of reaction time (RT) performed by a total of 99 younger adults (ages 17--36 years) and 763 older adults (ages 54--94 years). Results indicated that all three types of variability were greater in older compared with younger participants even when group differences in speed were statistically controlled. Quantile-quantile plots showed age and task differences in the shape of the inconsistency distributions. Measures of within-person variability (dispersion and inconsistency) were positively correlated. Individual differences in RT inconsistency correlated negatively with level of performance on measures of perceptual speed, working memory, episodic memory, and crystallized abilities. Partial set correlation analyses indicated that inconsistency predicted cognitive performance independent of level of performance. The results indicate that variability of performance is an important indicator of cognitive functioning and aging.

  12. Sex-specific selection for MHC variability in Alpine chamois

    Directory of Open Access Journals (Sweden)

    Schaschl Helmut

    2012-02-01

    Full Text Available Abstract Background: In mammals, males typically have shorter lives than females. This difference is thought to be due to behavioural traits which enhance competitive abilities, and hence male reproductive success, but impair survival. Furthermore, in many species males usually show higher parasite burden than females. Consequently, the intensity of selection for genetic factors which reduce susceptibility to pathogens may differ between sexes. High variability at the major histocompatibility complex (MHC) genes is believed to be advantageous for detecting and combating the range of infectious agents present in the environment. Increased heterozygosity at these immune genes is expected to be important for individual longevity. However, whether males in natural populations benefit more from MHC heterozygosity than females has rarely been investigated. We investigated this question in a long-term study of free-living Alpine chamois (Rupicapra rupicapra), a polygynous mountain ungulate. Results: Here we show that male chamois survive significantly (P = 0.022) longer if heterozygous at the MHC class II DRB locus, whereas females do not. Improved survival of males was not a result of heterozygote advantage per se, as background heterozygosity (estimated across twelve microsatellite loci) did not change significantly with age. Furthermore, reproductively active males depleted their body fat reserves earlier than females, leading to significantly impaired survival rates in this sex. Conclusions: Increased MHC class II DRB heterozygosity with age in males suggests that MHC-heterozygous males survive longer than homozygotes. Reproductively active males appear to be less likely to survive than females, most likely because of the energetic challenge of the winter rut, accompanied by earlier depletion of their body fat stores, and a generally higher parasite burden. This scenario renders the MHC-mediated immune response more important for males than for females.

  13. Timing variability in children with early-treated congenital hypothyroidism

    NARCIS (Netherlands)

    Kooistra, L.; Snijders, T.A.B.; Schellekens, J.M.H.; Kalverboer, A.F.; Geuze, R.H.

    This study reports on central and peripheral determinants of timing variability in self-paced tapping by children with early-treated congenital hypothyroidism (CH). A theoretical model of the timing of repetitive movements developed by Wing and Kristofferson was applied to estimate the central
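
    The Wing and Kristofferson model referred to here decomposes tapping variability into a central (timekeeper) component and a peripheral (motor implementation) component, using the variance and the lag-1 autocovariance of the inter-tap intervals. A minimal illustrative sketch in Python (the data and variable names are ours, not from the study):

        import numpy as np

        def wing_kristofferson(intervals):
            """Estimate central (clock) and peripheral (motor) variance from inter-tap
            intervals, following the standard Wing-Kristofferson decomposition:
                Var(I)            = sigma_clock^2 + 2 * sigma_motor^2
                Cov(I_k, I_{k+1}) = -sigma_motor^2
            """
            I = np.asarray(intervals, dtype=float)
            var_total = I.var(ddof=1)
            acov1 = np.cov(I[:-1], I[1:], ddof=1)[0, 1]      # lag-1 autocovariance
            sigma_motor2 = max(-acov1, 0.0)                  # motor-delay variance
            sigma_clock2 = max(var_total - 2 * sigma_motor2, 0.0)
            return sigma_clock2, sigma_motor2

        # example: noisy taps around a 5 s target interval
        rng = np.random.default_rng(0)
        taps = 5.0 + rng.normal(0.0, 0.2, size=200)
        print(wing_kristofferson(taps))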

  14. Predicting travel time variability for cost-benefit analysis

    NARCIS (Netherlands)

    Peer, S.; Koopmans, C.; Verhoef, E.T.

    2010-01-01

    Unreliable travel times cause substantial costs to travelers. Nevertheless, they are not taken into account in many cost-benefit-analyses (CBA), or only in very rough ways. This paper aims at providing simple rules on how variability can be predicted, based on travel time data from Dutch highways.

  15. Birth order and selected work-related personality variables.

    Science.gov (United States)

    Phillips, A S; Bedeian, A G; Mossholder, K W; Touliatos, J

    1988-12-01

    A possible link between birth order and various individual characteristics (e. g., intelligence, potential eminence, need for achievement, sociability) has been suggested by personality theorists such as Adler for over a century. The present study examines whether birth order is associated with selected personality variables that may be related to various work outcomes. 3 of 7 hypotheses were supported and the effect sizes for these were small. Firstborns scored significantly higher than later borns on measures of dominance, good impression, and achievement via conformity. No differences between firstborns and later borns were found in managerial potential, work orientation, achievement via independence, and sociability. The study's sample consisted of 835 public, government, and industrial accountants responding to a national US survey of accounting professionals. The nature of the sample may have been partially responsible for the results obtained. Its homogeneity may have caused any birth order effects to wash out. It can be argued that successful membership in the accountancy profession requires internalization of a set of prescribed rules and standards. It may be that accountants as a group are locked in to a behavioral framework. Any differentiation would result from spurious interpersonal differences, not from predictable birth-order related characteristics. A final interpretation is that birth order effects are nonexistent or statistical artifacts. Given the present data and particularistic sample, however, the authors have insufficient information from which to draw such a conclusion.

  16. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without artificial processes. To illustrate the feasibility and effectiveness of the method, a comparison with genetic algorithm (GA) and successive projections algorithm (SPA) for different elements (copper, barium and chromium) detection in soil was implemented. The experimental results showed that all the three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (12 s approximately for 40,000 initial variables) than the others. Moreover, improved quantification models were got with variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed comparable prediction effect with GA and SPA.

  17. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

    We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time \\tau is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter \\tau. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...

  18. Space and time evolution of two nonlinearly coupled variables

    International Nuclear Information System (INIS)

    Obayashi, H.; Totsuji, H.; Wilhelmsson, H.

    1976-12-01

    The system of two coupled linear differential equations are studied assuming that the coupling terms are proportional to the product of the dependent variables, representing e.g. intensities or populations. It is furthermore assumed that these variables experience different linear dissipation or growth. The derivations account for space as well as time dependence of the variables. It is found that certain particular solutions can be obtained to this system, whereas a full solution in space and time as an initial value problem is outside the scope of the present paper. The system has a nonlinear equilibrium solution for which the nonlinear coupling terms balance the terms of linear dissipation. The case of space and time evolution of a small perturbation of the nonlinear equilibrium state, given the initial one-dimensional spatial distribution of the perturbation, is also considered in some detail. (auth.)
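
    The record does not reproduce the equations themselves. Purely as an illustrative assumption, a system of the general type described, two variables with different linear growth or dissipation rates and a coupling proportional to their product, with the variables depending on both space and time, can be written as

        \[
        \frac{\partial N_1}{\partial t} + v_1 \frac{\partial N_1}{\partial x} = \alpha_1 N_1 + \lambda_1 N_1 N_2, \qquad
        \frac{\partial N_2}{\partial t} + v_2 \frac{\partial N_2}{\partial x} = \alpha_2 N_2 + \lambda_2 N_1 N_2,
        \]

    where the α_i are the linear growth or dissipation rates, the λ_i N_1 N_2 terms are the nonlinear couplings, and the convective terms v_i ∂N_i/∂x are one possible way of carrying the spatial dependence. The nonlinear equilibrium mentioned in the abstract corresponds to the coupling terms balancing the linear terms.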

  19. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data on movement time and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) was successful, predicting more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable errors and 88.3% of aiming-variable errors. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting the end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.

  20. Adaptive and Selective Time Averaging of Auditory Scenes

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; McDermott, Josh H.

    2018-01-01

    To overcome variability, estimate scene characteristics, and compress sensory input, perceptual systems pool data into statistical summaries. Despite growing evidence for statistical representations in perception, the underlying mechanisms remain poorly understood. One example ... Sound texture perception is thought ... longer than previously reported integration times in the auditory system. Integration also showed signs of being restricted to sound elements attributed to a common source. The results suggest an integration process that depends on stimulus characteristics, integrating over longer extents when it benefits statistical estimation of variable signals and selectively integrating stimulus components likely to have a common cause in the world. Our methodology could be naturally extended to examine statistical representations of other types of sensory signals.

  1. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    ... integrated approach. Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different data sets of metabolomics data need to be related. Variable selection (or removal of irrelevant ... Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior ... The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when ...

  2. Using exogenous variables in testing for monotonic trends in hydrologic time series

    Science.gov (United States)

    Alley, William M.

    1988-01-01

    One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
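
    A minimal sketch of the two-stage procedure described above (regress the tested variable on an exogenous variable, then apply a Kendall-type test to the residuals), using scipy on synthetic data; this illustrates the two-stage approach being critiqued, not Alley's adjusted variable Kendall test:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        years = np.arange(1950, 2000)
        precip = rng.normal(1000.0, 150.0, size=years.size)     # exogenous variable
        flow = 0.5 * precip + 0.8 * (years - years[0]) + rng.normal(0.0, 40.0, size=years.size)

        # stage 1: regression of the tested variable on the exogenous variable
        slope, intercept = np.polyfit(precip, flow, 1)
        residuals = flow - (slope * precip + intercept)

        # stage 2: Kendall test for monotonic trend in the residuals over time
        tau, p_value = stats.kendalltau(years, residuals)
        print(f"tau = {tau:.3f}, p = {p_value:.4f}")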

  3. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  4. Predictor variables for marathon race time in recreational female runners.

    Science.gov (United States)

    Schmid, Wiebke; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-06-01

    We intended to determine predictor variables of anthropometry and training for marathon race time in recreational female runners in order to predict marathon race time for future novice female runners. Anthropometric characteristics such as body mass, body height, body mass index, circumferences of limbs, thicknesses of skin-folds and body fat as well as training variables such as volume and speed in running training were related to marathon race time using bi- and multi-variate analysis in 29 female runners. The marathoners completed the marathon distance within 251 (26) min, running at a speed of 10.2 (1.1) km/h. Body mass (r=0.37), body mass index (r=0.46), the circumferences of thigh (r=0.51) and calf (r=0.41), the skin-fold thicknesses of front thigh (r=0.38) and of medial calf (r=0.40), the sum of eight skin-folds (r=0.44) and body fat percentage (r=0.41) were related to marathon race time. For the variables of training, maximal distance ran per week (r=- 0.38), number of running training sessions per week (r=- 0.46) and the speed of the training sessions (r= - 0.60) were related to marathon race time. In the multi-variate analysis, the circumference of calf (P=0.02) and the speed of the training sessions (P=0.0014) were related to marathon race time. Marathon race time might be partially (r(2)=0.50) predicted by the following equation: Race time (min)=184.4 + 5.0 x (circumference calf, cm) -11.9 x (speed in running during training, km/h) for recreational female marathoners. Variables of both anthropometry and training were related to marathon race time in recreational female marathoners and cannot be reduced to one single predictor variable. For practical applications, a low circumference of calf and a high running speed in training are associated with a fast marathon race time in recreational female runners.
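
    For illustration, the reported prediction equation can be applied directly in code; the runner characteristics below are hypothetical:

        def predicted_marathon_time(calf_circumference_cm: float, training_speed_kmh: float) -> float:
            """Race time in minutes from the regression reported above (r^2 = 0.50)."""
            return 184.4 + 5.0 * calf_circumference_cm - 11.9 * training_speed_kmh

        # hypothetical recreational runner: 36 cm calf circumference, 10 km/h training speed
        print(predicted_marathon_time(36.0, 10.0))   # approximately 245 minutes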

  5. Effective time management – selected issues

    Directory of Open Access Journals (Sweden)

    Aneta Olejniczak

    2013-03-01

    Full Text Available This article addresses the basic issues related to time management. As is well known, the people who waste the most time complain most about the lack of it. We should treat our own time, and that of our co-workers and friends, as a valuable but limited resource. The principles of effective time management can be applied in any scientific and research institution, company, or corporation, and the benefits of good and effective time management are felt not only by ourselves but also by our friends and families. Detailed formulation of objectives, identification and elimination of time wasters and of postponing work until later (procrastination), use of time management methods, and systematic control all allow for efficient use of time. A good plan is the basis for optimal and meaningful use of time.

  6. Long Pulse Integrator of Variable Integral Time Constant

    International Nuclear Information System (INIS)

    Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong

    2010-01-01

    A new long pulse integrator was designed based on a variable integral time constant and on subtracting integral drift using the drift slope. The integral time constant can be changed by selecting different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a given period of time can be calculated by digital signal processing and used to subtract the drift from the original integral signal in real time, reducing the integral drift. Tests show that this long pulse integrator effectively reduces integral drift and eliminates the effects of changing the integral time constant. The integral time constant can be changed by remote control and manual adjustment of the integral drift is avoided, which greatly improves experimental efficiency; the device can be used for electromagnetic measurement in Tokamak experiments. (authors)
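
    A minimal numerical sketch of the drift-correction idea described above: estimate the drift slope over a baseline window in which no signal is expected and subtract it from the integrated signal. The waveform and window below are invented for illustration; this is not the instrument's actual processing chain:

        import numpy as np

        dt = 1e-4                                             # sample interval, s
        t = np.arange(0.0, 10.0, dt)
        raw = np.where((t > 2.0) & (t < 8.0), 1e-3, 0.0)      # pulse to be integrated
        raw = raw + 5e-6                                      # constant offset -> integral drift

        integral = np.cumsum(raw) * dt

        # estimate the drift slope from a baseline window before the pulse
        baseline = t < 2.0
        drift_slope, drift_offset = np.polyfit(t[baseline], integral[baseline], 1)

        corrected = integral - (drift_slope * t + drift_offset)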

  7. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
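
    For reference, a standard form of the continuous-time Glauber flip rate underlying such models is (in our notation, which need not match the authors')

        \[
        w_i(\boldsymbol{\sigma}) = \frac{1}{2}\left[\,1 - \sigma_i \tanh\!\Big(\sum_{j} J_{ij}\,\sigma_j + h_i\Big)\right],
        \]

    so the probability of an observed spin trajectory is a point-process likelihood in these rates; the Poisson and Pólya-Gamma augmentations described above are what make this likelihood tractable for iterative inference.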

  8. Increased timing variability in schizophrenia and bipolar disorder.

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    Full Text Available Theoretical and empirical evidence suggests that impaired time perception and the neural circuitry underlying internal timing mechanisms may contribute to severe psychiatric disorders, including psychotic and mood disorders. The degree to which alterations in temporal perceptions reflect deficits that exist across psychosis-related phenotypes and the extent to which mood symptoms contribute to these deficits is currently unknown. In addition, compared to schizophrenia, where timing deficits have been more extensively investigated, sub-second timing has been studied relatively infrequently in bipolar disorder. The present study compared sub-second duration estimates of schizophrenia (SZ), schizoaffective disorder (SA), non-psychotic bipolar disorder (BDNP), bipolar disorder with psychotic features (BDP), and healthy non-psychiatric controls (HC) on a well-established time perception task using sub-second durations. Participants included 66 SZ, 37 BDNP, 34 BDP, 31 SA, and 73 HC who participated in a temporal bisection task that required temporal judgements about auditory durations ranging from 300 to 600 milliseconds. Timing variability was significantly higher in the SZ, BDP, and BDNP groups compared to healthy controls. The bisection point did not differ across groups. These findings suggest that both psychotic and mood symptoms may be associated with disruptions in internal timing mechanisms. Yet unexpected findings emerged. Specifically, the BDNP group had significantly increased variability compared to controls, but the SA group did not. In addition, these deficits appeared to exist independent of current symptom status. The absence of between-group differences in bisection point suggests that the increased variability in the SZ and bipolar disorder groups is due to alterations in perceptual timing in the sub-second range, possibly mediated by the cerebellum, rather than to cognitive deficits.

  9. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

    Objectives: The objective of this project was to determine the variables most likely to be associated with post- .... (as defined subjectively by the research team) in global .... ed on their lack of knowledge of wealth scoring tools. ... HIV serology.

  10. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    Science.gov (United States)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.

  11. Important variables in explaining real-time peak price in the independent power market of Ontario

    International Nuclear Information System (INIS)

    Rueda, I.E.A.; Marathe, A.

    2005-01-01

    This paper uses support vector machines (SVM) based learning algorithm to select important variables that help explain the real-time peak electricity price in the Ontario market. The Ontario market was opened to competition only in May 2002. Due to the limited number of observations available, finding a set of variables that can explain the independent power market of Ontario (IMO) real-time peak price is a significant challenge for the traders and analysts. The kernel regressions of the explanatory variables on the IMO real-time average peak price show that non-linear dependencies exist between the explanatory variables and the IMO price. This non-linear relationship combined with the low variable-observation ratio rule out conventional statistical analysis. Hence, we use an alternative machine learning technique to find the important explanatory variables for the IMO real-time average peak price. SVM sensitivity analysis based results find that the IMO's predispatch average peak price, the actual import peak volume, the peak load of the Ontario market and the net available supply after accounting for load (energy excess) are some of the most important variables in explaining the real-time average peak price in the Ontario electricity market. (author)
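
    The record does not give the authors' sensitivity-analysis code. As a generic illustration of ranking explanatory variables with a support vector regressor, a permutation-importance sketch (not the method used in the paper) might look like this:

        import numpy as np
        from sklearn.inspection import permutation_importance
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        # hypothetical predictors: predispatch price, import volume, peak load, energy excess
        X = rng.normal(size=(500, 4))
        y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.5, size=500)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, y)
        result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
        print(result.importances_mean)   # larger mean score drop -> more important variable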

  12. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…

  13. Three-Factor Market-Timing Models with Fama and French's Spread Variables

    Directory of Open Access Journals (Sweden)

    Joanna Olbryś

    2010-01-01

    Full Text Available The traditional performance measurement literature has attempted to distinguish security selection, or stock-picking ability, from market-timing, or the ability to predict overall market returns. However, the literature finds that it is not easy to separate ability into such dichotomous categories. Some researchers have developed models that allow the decomposition of manager performance into market-timing and selectivity skills. The main goal of this paper is to present modified versions of classical market-timing models with Fama and French's spread variables SMB and HML, in the case of Polish equity mutual funds. (original abstract)
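
    A representative specification of such a three-factor market-timing model, combining a Treynor-Mazuy-type timing term with the Fama and French spread variables (the exact functional form estimated in the paper may differ), is

        \[
        R_{pt} - R_{ft} = \alpha_p + \beta_p (R_{mt} - R_{ft}) + \gamma_p (R_{mt} - R_{ft})^2 + s_p\,\mathrm{SMB}_t + h_p\,\mathrm{HML}_t + \varepsilon_{pt},
        \]

    where a positive γ_p indicates market-timing ability and α_p captures selectivity.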

  14. Pathogen-mediated selection for MHC variability in wild zebrafish

    Czech Academy of Sciences Publication Activity Database

    Smith, C.; Ondračková, Markéta; Spence, R.; Adams, S.; Betts, D. S.; Mallon, E.

    2011-01-01

    Vol. 13, No. 6 (2011), pp. 589-605. ISSN 1522-0613. Institutional support: RVO:68081766. Keywords: digenean * frequency-dependent selection * heterozygote advantage * major histocompatibility complex * metazoan parasite * pathogen-driven selection. Subject RIV: EG - Zoology. Impact factor: 1.029, year: 2011

  15. Variable selection in multiple linear regression: The influence of ...

    African Journals Online (AJOL)

    provide an indication of whether the fit of the selected model improves or ... and calculate M(−i); quantify the influence of case i in terms of a function, f(•), of M and ..... [21] Venter JH & Snyman JLJ, 1997, Linear model selection based on risk ...

  16. The XRF spectrometer and the selection of analysis conditions (instrumental variables)

    International Nuclear Information System (INIS)

    Willis, J.P.

    2002-01-01

    Full text: This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables contained in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), counting times for peak and background positions and their effect on counting statistics and lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables also will be discussed. This presentation will be followed by a shorter session on a subsequent day entitled - A Selection of XRF Conditions - Practical Session, where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications. Copyright (2002) Australian X-ray Analytical Association Inc

  17. Selected Macroeconomic Variables and Stock Market Movements: Empirical evidence from Thailand

    Directory of Open Access Journals (Sweden)

    Joseph Ato Forson

    2014-06-01

    Full Text Available This paper investigates and analyzes the long-run equilibrium relationship between the Thai Stock Exchange Index (SETI) and selected macroeconomic variables using monthly time series data that cover a 20-year period from January 1990 to December 2009. The following macroeconomic variables are included in our analysis: money supply (MS), the consumer price index (CPI), interest rate (IR) and the industrial production index (IP, as a proxy for GDP). Our findings prove that the SET Index and the selected macroeconomic variables are cointegrated at I(1) and have a significant equilibrium relationship over the long run. Money supply demonstrates a strong positive relationship with the SET Index over the long run, whereas the industrial production index and consumer price index show negative long-run relationships with the SET Index. Furthermore, in non-equilibrium situations, the error correction mechanism suggests that the consumer price index, industrial production index and money supply each contribute in some way to restore equilibrium. In addition, using Toda and Yamamoto's augmented Granger causality test, we identify a bi-causal relationship between industrial production and money supply and unilateral causal relationships between CPI and IR, IP and CPI, MS and CPI, and IP and SETI, indicating that all of these variables are sensitive to Thai stock market movements. The policy implications of these findings are also discussed.
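
    A minimal sketch of testing for a long-run (cointegrating) relationship between a stock index and a macroeconomic series using statsmodels; the monthly series below are simulated, and the Engle-Granger test shown is a simpler stand-in for the full cointegration and causality analysis used in the paper:

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(42)
        n = 240                                                    # 20 years of monthly data
        money_supply = np.cumsum(rng.normal(size=n))               # an I(1) series
        set_index = 0.8 * money_supply + rng.normal(scale=0.5, size=n)   # cointegrated with it

        t_stat, p_value, crit_values = coint(set_index, money_supply)
        print(p_value)   # a small p-value rejects the null of no cointegration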

  18. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. First-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al. in, Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
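
    For comparison, the single-peaked first-passage-time density for a Wiener process with drift μ and diffusion coefficient σ, reaching a level a distance a away from the starting point, is the standard inverse-Gaussian form

        \[
        f(t) = \frac{a}{\sqrt{2\pi\,\sigma^{2} t^{3}}}\,\exp\!\left[-\frac{(a-\mu t)^{2}}{2\,\sigma^{2} t}\right], \qquad t > 0,
        \]

    which is the benchmark against which the intra-day second peak produced by the two-stage variable-diffusion model is contrasted.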

  19. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
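
    A minimal sketch of the kind of Random Forest classification with 10-fold cross-validation described above, using scikit-learn on stand-in data (not the actual 2XMMi-DR2 feature set):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # stand-in for 873 variable sources with time-series, spectral and contextual features
        X = rng.normal(size=(873, 12))
        y = rng.integers(0, 7, size=873)          # 7 source classes

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        scores = cross_val_score(clf, X, y, cv=10)
        print(scores.mean())                       # 10-fold cross-validation accuracy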

  20. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  1. Rainfall trends and variability in selected areas of Ethiopian Somali ...

    African Journals Online (AJOL)

    Moreover, proper spatial distribution of meteorological stations together with early warning system are required to further support local adaptive and coping strategies that the community designed towards rainfall variability in particular and climate change/disaster and risk at large. Keywords: Ethiopian Somali Region, Gode, ...

  2. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter which is a special version of a variable time-constant frequency meter (1). The originality of this work lies in the fact that the change in the time constant is carried out automatically. Since the criterion for this change is the accuracy of the reported result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 sec to 1 millisec for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of MECL-type integrated circuits from Motorola and is thus contained in two relatively small boxes. (authors) [fr]

  3. HEART RATE VARIABILITY CLASSIFICATION USING SADE-ELM CLASSIFIER WITH BAT FEATURE SELECTION

    Directory of Open Access Journals (Sweden)

    R Kavitha

    2017-07-01

    Full Text Available The electrocardiogram (ECG) is the vital biomedical signal that measures the electrical activity of the human heart and serves as a crucial source of diagnostic information about a patient's cardiopathy. Cardiac disease is monitored and diagnosed by recording and processing ECG impulses. In recent years, much research has been devoted to developing enhanced methods for identifying risk in a patient's condition by processing and analysing the ECG signal; such analysis helps detect cardiac abnormalities, arrhythmias, and many other heart problems. The ECG signal is processed to detect variability in heart rhythm: heart rate variability (HRV), calculated from the time interval between heart beats, measures the variation in the beat-to-beat interval and is an essential aspect of characterizing the heart. Recent developments enhance this potential with the aid of non-linear metrics combined with feature selection. In this paper, the fundamental features are extracted from the ECG signal, and the Bat algorithm is employed for feature selection to identify the best features, which are then presented to the classifier for accurate classification. The popular machine learning algorithm ELM is used for classification, integrated with the evolutionary algorithm Self-Adaptive Differential Evolution Extreme Learning Machine (SADE-ELM) to improve the reliability of classification, and combined with the Effective Fuzzy Kohonen Clustering Network (EFKCN) to further increase the accuracy of HRV classification. The experiments show that the SADE-ELM method improves precision while also reducing computation time.
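
    As a small illustration of the time-domain heart rate variability quantities the abstract refers to (variability computed from beat-to-beat intervals), the standard SDNN and RMSSD measures can be computed as below; this is generic HRV code, not the paper's feature set or its Bat/SADE-ELM pipeline:

        import numpy as np

        def hrv_time_domain(rr_intervals_ms):
            """Return (SDNN, RMSSD) in milliseconds from a series of RR (beat-to-beat) intervals."""
            rr = np.asarray(rr_intervals_ms, dtype=float)
            sdnn = rr.std(ddof=1)                          # overall beat-to-beat variability
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))     # short-term variability
            return sdnn, rmssd

        rr = [812.0, 798.0, 825.0, 840.0, 803.0, 790.0, 818.0]   # hypothetical RR intervals (ms)
        print(hrv_time_domain(rr))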

  4. The new Toyota variable valve timing and lift system

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, K.; Fuwa, N.; Yoshihara, Y. [Toyota Motor Corporation (Japan); Hori, K. [Toyota Boshoku Corporation (Japan)

    2007-07-01

    A continuously variable valve timing (duration and phase) and lift system was developed. This system was applied to the valvetrain of a new 2.0L L4 engine (3ZRFAE) for the Japanese market. The system has rocker arms, which allow continuously variable timing and lift, situated between a conventional roller-rocker arm and the camshaft, an electromotor actuator to drive it and a phase mechanism for intake and exhaust camshafts (Dual VVT-i). The rocking center of the rocker arm is stationary, and the axial linear motion of a helical spline changes the initial phase of the rocker arm which varies the timing and lift. The linear motion mechanism uses an original planetary roller screw and is driven by a brushless motor with a built-in electric control unit. Since the rocking center and the linear motion helical spline center coincide, a compact cylinder head design was possible, and the cylinder head is a common design with a conventional engine. Since the ECU controls intake valve duration and timing, a fuel economy gain of maximum 10% (depending on driving condition) is obtained by reducing light to medium load pumping losses. Also intake efficiency was maximized throughout the speed range, resulting in a power gain of 10%. Further, HC emissions were reduced due to increased air speed at low valve lift. (orig.)

  5. Quadratic time dependent Hamiltonians and separation of variables

    International Nuclear Information System (INIS)

    Anzaldo-Meneses, A.

    2017-01-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green’s function is obtained and a comparison with the classical Hamilton–Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei–Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü–Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems. - Highlights: • Exact unitary transformation reducing time dependent quadratic quantum Hamiltonian to zero. • New separation of variables method and simultaneous uncoupling of modes. • Explicit examples of transformations for one to four dimensional problems. • New general evolution equation for quadratic form in the action, respectively Green’s function.
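
    For concreteness, a quadratic time dependent Hamiltonian of the type treated here can be written (in our notation, for n degrees of freedom) as

        \[
        H(t) = \tfrac{1}{2}\sum_{i,j}\Big[a_{ij}(t)\,p_i p_j + b_{ij}(t)\,(q_i p_j + p_j q_i) + c_{ij}(t)\,q_i q_j\Big] + \sum_i \big[d_i(t)\,p_i + e_i(t)\,q_i\big],
        \]

    with time dependent coefficient matrices; the canonical and unitary transformations discussed in the abstract act on the generators q_i, p_i and their quadratic products, which is where the semi-direct product of the Heisenberg and symplectic groups enters.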

  6. Joint Variable Selection and Classification with Immunohistochemical Data

    Directory of Open Access Journals (Sweden)

    Debashis Ghosh

    2009-01-01

    Full Text Available To determine if candidate cancer biomarkers have utility in a clinical setting, validation using immunohistochemical methods is typically done. Most analyses of such data have not incorporated the multivariate nature of the staining profiles. In this article, we consider modelling such data using recently developed ideas from the machine learning community. In particular, we consider the joint goals of feature selection and classification. We develop estimation procedures for the analysis of immunohistochemical profiles using the least absolute shrinkage and selection operator (lasso). These lead to novel and flexible models and algorithms for the analysis of compositional data. The techniques are illustrated using data from a cancer biomarker study.
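
    A minimal sketch of joint variable selection and classification with an L1 (lasso-type) penalty, using scikit-learn on stand-in staining-score data; this is a generic illustration, not the authors' estimation procedure for compositional immunohistochemical profiles:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 3.0, size=(200, 20))     # hypothetical marker staining scores
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 2.5).astype(int)

        model = make_pipeline(
            StandardScaler(),
            LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
        ).fit(X, y)

        coef = model.named_steps["logisticregression"].coef_.ravel()
        selected = np.flatnonzero(coef != 0)          # markers retained by the L1 penalty
        print(selected, model.score(X, y))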

  7. The quasar luminosity function from a variability-selected sample

    Science.gov (United States)

    Hawkins, M. R. S.; Veron, P.

    1993-01-01

    A sample of quasars is selected from a 10-yr sequence of 30 UK Schmidt plates. Luminosity functions are derived in several redshift intervals, which in each case show a featureless power-law rise towards low luminosities. There is no sign of the 'break' found in the recent UVX sample of Boyle et al. It is suggested that reasons for the disagreement are connected with biases in the selection of the UVX sample. The question of the nature of quasar evolution appears to be still unresolved.

  8. The Use of Variable Q1 Isolation Windows Improves Selectivity in LC-SWATH-MS Acquisition.

    Science.gov (United States)

    Zhang, Ying; Bilbao, Aivett; Bruderer, Tobias; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard; Varesio, Emmanuel

    2015-10-02

    As tryptic peptides and metabolites are not equally distributed along the mass range, the probability of cross fragment ion interference is higher in certain windows when fixed Q1 SWATH windows are applied. We evaluated the benefits of utilizing variable Q1 SWATH windows with regards to selectivity improvement. Variable windows based on equalizing the distribution of either the precursor ion population (PIP) or the total ion current (TIC) within each window were generated by an in-house software, swathTUNER. These two variable Q1 SWATH window strategies outperformed, with respect to quantification and identification, the basic approach using a fixed window width (FIX) for proteomic profiling of human monocyte-derived dendritic cells (MDDCs). Thus, 13.8 and 8.4% additional peptide precursors, which resulted in 13.1 and 10.0% more proteins, were confidently identified by SWATH using the strategy PIP and TIC, respectively, in the MDDC proteomic sample. On the basis of the spectral library purity score, some improvement warranted by variable Q1 windows was also observed, albeit to a lesser extent, in the metabolomic profiling of human urine. We show that the novel concept of "scheduled SWATH" proposed here, which incorporates (i) variable isolation windows and (ii) precursor retention time segmentation further improves both peptide and metabolite identifications.
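
    A toy sketch of the windowing idea described above: choose Q1 window boundaries so that each window carries an approximately equal share of the total ion current, rather than a fixed m/z width. The code below is a simplified stand-in, not swathTUNER, and the m/z range and window count are arbitrary:

        import numpy as np

        def variable_windows(mz, intensity, n_windows, mz_min=400.0, mz_max=1200.0):
            """Return window edges that split the cumulative ion current evenly."""
            order = np.argsort(mz)
            mz_sorted = np.asarray(mz, dtype=float)[order]
            inten_sorted = np.asarray(intensity, dtype=float)[order]
            cum = np.cumsum(inten_sorted) / np.sum(inten_sorted)
            # m/z values at equally spaced quantiles of the cumulative TIC
            quantiles = np.linspace(0.0, 1.0, n_windows + 1)[1:-1]
            inner_edges = np.interp(quantiles, cum, mz_sorted)
            return np.concatenate(([mz_min], inner_edges, [mz_max]))

        rng = np.random.default_rng(0)
        mz = rng.uniform(400.0, 1200.0, size=5000)                # hypothetical precursor m/z values
        intensity = rng.lognormal(mean=2.0, sigma=1.0, size=5000)
        print(variable_windows(mz, intensity, n_windows=32))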

  9. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
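
    As a concrete reference point for the prewhitening being critiqued, the most common variant removes lag-1 (AR(1)) persistence before trend testing. A minimal sketch, with a synthetic series standing in for a hydroclimatic record or tree-ring chronology:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        x = np.zeros(n)
        for t in range(1, n):                      # AR(1) series with lag-1 persistence 0.6
            x[t] = 0.6 * x[t - 1] + rng.normal()

        # estimate the lag-1 autocorrelation and remove the persistent component
        r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
        prewhitened = x[1:] - r1 * x[:-1]

        # low-frequency variance is reduced, which is the loss of information discussed above
        print(x.var(), prewhitened.var())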

  10. Effect of balance exercise on selected kinematic gait variables in ...

    African Journals Online (AJOL)

    The purpose of this study was to investigate the effect of balance exercise on some selected kinematic gait parameters in patients with knee joint osteoarthritis. Forty subjects (18 men and 22 women) participated in the study.They were divided into two groups: Group 1 (experimental) that was treated with balance exercises, ...

  11. The Relationship between Attitudes toward Censorship and Selected Academic Variables.

    Science.gov (United States)

    Dwyer, Edward J.; Summy, Mary K.

    1989-01-01

    To examine characteristics of subjects relative to their attitudes toward censorship, a study surveyed 98 college students selected from students in a public university in the southeastern United States. A 24-item Likert-style censorship scale was used to measure attitudes toward censorship. Strong agreement with affirmative items would suggest…

  12. Valuing travel time variability: Characteristics of the travel time distribution on an urban road

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Fukuda, Daisuke

    2012-01-01

    This paper provides a detailed empirical investigation of the distribution of travel times on an urban road for valuation of travel time variability. Our investigation is premised on the use of a theoretical model with a number of desirable properties. The definition of the value of travel time variability depends on certain properties of the distribution of random travel times that require empirical verification. Applying a range of nonparametric statistical techniques to data giving minute-by-minute travel times for a congested urban road over a period of five months, we show that the standardized travel time is roughly independent of the time of day, as required by the theory. Except for the extreme right tail, a stable distribution seems to fit the data well. The travel time distributions on consecutive links seem to share a common stability parameter, such that the travel time distribution ...

  13. Time variable cosmological constants from the age of universe

    International Nuclear Information System (INIS)

    Xu Lixin; Lu Jianbo; Li Wenbo

    2010-01-01

    In this Letter, a time variable cosmological constant, dubbed the age cosmological constant, is investigated, motivated by the fact that any cosmological length scale or time scale can introduce a cosmological constant or vacuum energy density into Einstein's theory. The age cosmological constant takes the form ρ_Λ = 3c²M_P²/t_Λ², where t_Λ is the age or conformal age of our universe. The effective equations of state (EoS) of the age cosmological constant are w_Λ^eff = -1 + (2/3)√(Ω_Λ)/c and w_Λ^eff = -1 + (2/3)(√(Ω_Λ)/c)(1+z) when the age and the conformal age of the universe, respectively, are taken as the cosmological time scale. The EoS are the same as those of the so-called agegraphic dark energy models. However, the evolution histories are different from the agegraphic ones because of their different evolution equations.

  14. Time and space variability of spectral estimates of atmospheric pressure

    Science.gov (United States)

    Canavero, Flavio G.; Einaudi, Franco

    1987-01-01

    The temporal and spatial behavior of atmospheric pressure spectra over northern Italy and the Alpine massif was analyzed using surface pressure measurements carried out at two microbarograph stations in the Po Valley, one 50 km south of the Alps and the other in the foothills of the Dolomites. The first 15 days of the study overlapped with the Alpex Intensive Observation Period. The pressure records were found to be intrinsically nonstationary and to display substantial time variability, implying that the statistical moments depend on time. The shape and energy content of the spectra depended on the time segment considered. In addition, important differences existed between the spectra obtained at the two stations, indicating a substantial effect of topography, particularly for periods of less than 40 min.

  15. Quadratic time dependent Hamiltonians and separation of variables

    Science.gov (United States)

    Anzaldo-Meneses, A.

    2017-06-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.

  16. Time perception, attention, and memory: a selective review.

    Science.gov (United States)

    Block, Richard A; Gruber, Ronald P

    2014-06-01

    This article provides a selective review of time perception research, mainly focusing on the authors' research. Aspects of psychological time include simultaneity, successiveness, temporal order, and duration judgments. In contrast to findings at interstimulus intervals or durations less than 3.0-5.0 s, there is little evidence for an "across-senses" effect of perceptual modality (visual vs. auditory) at longer intervals or durations. In addition, the flow of time (events) is a pervasive perceptual illusion, and we review evidence on that. Some temporal information is encoded relatively automatically into memory: People can judge time-related attributes such as recency, frequency, temporal order, and duration of events. Duration judgments in prospective and retrospective paradigms reveal differences between them, as well as variables that moderate the processes involved. An attentional-gate model is needed to account for prospective judgments, and a contextual-change model is needed to account for retrospective judgments. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of 240 Pu greater than 100g and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  18. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
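
    As a rough illustration of the kind of workflow described above (K-fold cross-validation to fit a Lasso, followed by bootstrap resampling to gauge the precision of the selections), the following sketch uses scikit-learn on simulated data; it is not the authors' vector bootstrapping procedure, and all settings are illustrative assumptions.

```python
# Hedged sketch: K-fold cross-validated Lasso followed by simple bootstrap
# resampling of the selected coefficients (not the authors' exact procedure).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, Lasso

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)

# 1. Choose the penalty by 10-fold cross-validation.
cv_fit = LassoCV(cv=10, random_state=0).fit(X, y)
alpha = cv_fit.alpha_

# 2. Bootstrap the data and refit at the chosen penalty to see how often
#    each variable is selected (a rough stability/precision check).
rng = np.random.default_rng(0)
n_boot, n = 200, X.shape[0]
selected = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.integers(0, n, n)               # resample rows with replacement
    coefs = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
    selected += (coefs != 0)

for j in np.argsort(-selected)[:10]:
    print(f"variable {j:2d}: selected in {selected[j] / n_boot:.0%} of bootstraps")
```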

  19. Social variables exert selective pressures in the evolution and form of primate mimetic musculature.

    Science.gov (United States)

    Burrows, Anne M; Li, Ly; Waller, Bridget M; Micheletta, Jerome

    2016-04-01

    Mammals use their faces in social interactions more so than any other vertebrates. Primates are an extreme among most mammals in their complex, direct, lifelong social interactions and their frequent use of facial displays is a means of proximate visual communication with conspecifics. The available repertoire of facial displays is primarily controlled by mimetic musculature, the muscles that move the face. The form of these muscles is, in turn, limited by and influenced by phylogenetic inertia but here we use examples, both morphological and physiological, to illustrate the influence that social variables may exert on the evolution and form of mimetic musculature among primates. Ecomorphology is concerned with the adaptive responses of morphology to various ecological variables such as diet, foliage density, predation pressures, and time of day activity. We present evidence that social variables also exert selective pressures on morphology, specifically using mimetic muscles among primates as an example. Social variables include group size, dominance 'style', and mating systems. We present two case studies to illustrate the potential influence of social behavior on adaptive morphology of mimetic musculature in primates: (1) gross morphology of the mimetic muscles around the external ear in closely related species of macaque (Macaca mulatta and Macaca nigra) characterized by varying dominance styles and (2) comparative physiology of the orbicularis oris muscle among select ape species. This muscle is used in both facial displays/expressions and in vocalizations/human speech. We present qualitative observations of myosin fiber-type distribution in this muscle of siamang (Symphalangus syndactylus), chimpanzee (Pan troglodytes), and human to demonstrate the potential influence of visual and auditory communication on muscle physiology. In sum, ecomorphologists should be aware of social selective pressures as well as ecological ones, and that observed morphology might

  20. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction

    Directory of Open Access Journals (Sweden)

    Keming Tang

    2017-11-01

    To investigate the effect of electromagnetic induction on the electrical activity of a neuron, a variable for magnetic flux is used to improve the Hindmarsh–Rose neuron model. Simultaneously, because time delays arise when signals propagate between neurons or even within one neuron, it is important to study the role of time delay in regulating the electrical activity of the neuron. To this end, a four-variable neuron model is proposed to investigate the effects of electromagnetic induction and time delay. Simulation results suggest that the proposed neuron model can show multiple modes of electrical activity, depending on the time delay and the external forcing current. This means that a suitable discharge mode can be obtained by selecting the time delay or the external forcing current, which could be helpful for further investigation of the effects of electromagnetic radiation on biological neuronal systems.

  1. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li; Wang, Suojin; Wang, Guannan

    2014-01-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships

  2. Variable slip wind generator modeling for real-time simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

    2006-07-01

    A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and the off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.

  3. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationship among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
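
    The ranking step described in the Results (ordering markers by marginal posterior probability rather than by whole-model posterior probability) can be illustrated with a minimal sketch. The inclusion-indicator draws below are simulated placeholders standing in for the output of an SSVS run; the ranking logic is the only point being made.

```python
# Hedged sketch: ranking candidate markers by their marginal posterior
# inclusion probability, given MCMC draws of 0/1 inclusion indicators
# (e.g. from a stochastic search variable selection run). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_markers = 5000, 20

# Placeholder for indicator draws gamma[t, j] = 1 if marker j is in the
# model at MCMC iteration t; here simulated so the example runs.
true_prob = np.full(n_markers, 0.05)
true_prob[[3, 7, 12]] = [0.9, 0.6, 0.4]          # a few "linked" markers
gamma = rng.random((n_draws, n_markers)) < true_prob

# Marginal inclusion probability = posterior mean of the indicator,
# which is more robust to the prior than ranking whole models.
pip = gamma.mean(axis=0)
for j in np.argsort(-pip)[:5]:
    print(f"marker {j:2d}: P(included | data) = {pip[j]:.2f}")
```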

  4. DYNAMIC RESPONSE OF THICK PLATES ON TWO PARAMETER ELASTIC FOUNDATION UNDER TIME VARIABLE LOADING

    OpenAIRE

    Ozgan, Korhan; Daloglu, Ayse T.

    2014-01-01

    In this paper, the behavior of foundation plates with transverse shear deformation under time variable loading is presented using the modified Vlasov foundation model. The finite element formulation of thick plates on an elastic foundation is derived using an 8-noded finite element based on Mindlin plate theory. A selective reduced integration technique is used to avoid the shear-locking problem which arises when smaller plate thicknesses are considered for the evaluation of the stiffness matrices. After comparis...

  5. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    Science.gov (United States)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity for grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolutions. First, one year of five-minute wind power generation data from 29 wind farms spans 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate the dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high
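
    A minimal sketch of the filtering-and-correlation analysis described above is given below: synthetic multi-site wind speeds are high-pass filtered at several cut-off periods and pairwise correlations are examined against station separation. The Butterworth filter design, cut-offs, and the crude correlation-length summary are illustrative assumptions, not the study's actual processing.

```python
# Hedged sketch: high-pass filter wind-speed series at several cut-off
# periods and relate pairwise correlation to station separation.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
n_hours, n_sites = 24 * 365, 10
dists = np.abs(np.subtract.outer(np.arange(n_sites) * 100.0,
                                 np.arange(n_sites) * 100.0))  # km apart

# Synthetic wind speeds: a shared slow signal plus site-specific noise.
shared = np.cumsum(rng.normal(size=n_hours))
speeds = shared[:, None] * np.exp(-np.arange(n_sites) / 5.0) \
         + rng.normal(size=(n_hours, n_sites))

for cutoff_h in (4, 24, 500):                   # high-pass cut-off periods (h)
    b, a = butter(4, 1.0 / cutoff_h, btype="highpass", fs=1.0)  # fs = 1/hour
    filt = filtfilt(b, a, speeds, axis=0)
    corr = np.corrcoef(filt.T)
    iu = np.triu_indices(n_sites, k=1)
    # A crude correlation-length proxy: mean separation where correlation > 1/e.
    mask = corr[iu] > np.exp(-1)
    print(f"cutoff {cutoff_h:4d} h: mean correlated separation "
          f"{dists[iu][mask].mean() if mask.any() else 0:.0f} km")
```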

  6. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  7. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

    Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.
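
    The importance-ranking idea described above can be sketched as follows with scikit-learn, using permutation importance as a stand-in for the percent-increase-in-MSE measure; the predictor names and the synthetic incidence response are placeholders, not the Mpumalanga data.

```python
# Hedged sketch: ranking environmental predictors of a malaria-like response
# with a random forest and permutation importance. Illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "altitude":     rng.uniform(200, 1800, n),
    "ndvi":         rng.uniform(0.1, 0.8, n),
    "temperature":  rng.uniform(15, 32, n),
    "lag_rainfall": rng.uniform(0, 300, n),
})
# Synthetic incidence: mostly driven by altitude and NDVI.
y = 50 - 0.02 * X["altitude"] + 30 * X["ndvi"] + rng.normal(0, 2, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:12s} importance: {score:.3f}")
```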

  8. Real-time laser cladding control with variable spot size

    Science.gov (United States)

    Arias, J. L.; Montealegre, M. A.; Vidal, F.; Rodríguez, J.; Mann, S.; Abels, P.; Motmans, F.

    2014-03-01

    Laser cladding has been used in different industries to improve surface properties or to reconstruct damaged pieces. In order to cover areas considerably larger than the diameter of the laser beam, successive partially overlapping tracks are deposited. With no control over the process variables, this leads to an increase in temperature, which could degrade the mechanical properties of the laser-cladded material. Commonly, the process is monitored and controlled by a PC using cameras, but this control suffers from a lack of speed caused by the image processing step. The aim of this work is to design and develop an FPGA-based laser cladding control system. This system is intended to modify the laser beam power according to the melt pool width, which is measured using a CMOS camera. All the control and monitoring tasks are carried out by an FPGA, taking advantage of its abundance of resources and speed of operation. The robustness of the image processing algorithm is assessed, as well as the control system performance. Laser power is decreased as substrate temperature increases, thus maintaining a constant clad width. This FPGA-based control system is integrated in an adaptive laser cladding system, which also includes an adaptive optical system that will control the laser focus distance on the fly. The whole system will constitute an efficient instrument for the repair of parts with complex geometries and for selective surface coating. This will be a significant step forward in the full industrial implementation of an automated laser cladding process.

  9. Time-to-code converter with selection of time intervals on duration

    International Nuclear Information System (INIS)

    Atanasov, I.Kh.; Rusanov, I.R.; )

    2001-01-01

    Identification of elementary particles on the basis of time-of-flight is an important approach in preliminary selection procedures. The paper describes a time-to-code converter with preliminary selection of the measured time intervals by duration. It consists of a time-to-amplitude converter, an analog-to-digital converter, a unit for selection of time intervals by duration, a unit for total reset and a CAMAC command decoder. The time-to-code converter enables measurement of time intervals with 100 ns accuracy within a 0-100 ns range. The output code capacity is 10. The selection time is 50 ns.

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
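
    A simplified sketch of the backward elimination procedure evaluated above is shown below: at each step the random forest is refitted, the least important variable is dropped, and the out-of-bag score is recorded. The data are simulated and the stopping rule is an assumption; the paper's actual settings and external cross-validation are not reproduced.

```python
# Hedged sketch: backward elimination for a random forest, tracking OOB score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=40, n_informative=6,
                           random_state=0)
features = list(range(X.shape[1]))
history = []

while len(features) > 2:
    rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                random_state=0).fit(X[:, features], y)
    history.append((len(features), rf.oob_score_))
    # Drop the variable with the smallest impurity-based importance.
    drop = features[int(np.argmin(rf.feature_importances_))]
    features.remove(drop)

for k, oob in history:
    print(f"{k:2d} variables: OOB accuracy = {oob:.3f}")
```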

  11. BAYESIAN TECHNIQUES FOR COMPARING TIME-DEPENDENT GRMHD SIMULATIONS TO VARIABLE EVENT HORIZON TELESCOPE OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junhan; Marrone, Daniel P.; Chan, Chi-Kwan; Medeiros, Lia; Özel, Feryal; Psaltis, Dimitrios, E-mail: junhankim@email.arizona.edu [Department of Astronomy and Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)

    2016-12-01

    The Event Horizon Telescope (EHT) is a millimeter-wavelength, very-long-baseline interferometry (VLBI) experiment that is capable of observing black holes with horizon-scale resolution. Early observations have revealed variable horizon-scale emission in the Galactic Center black hole, Sagittarius A* (Sgr A*). Comparing such observations to time-dependent general relativistic magnetohydrodynamic (GRMHD) simulations requires statistical tools that explicitly consider the variability in both the data and the models. We develop here a Bayesian method to compare time-resolved simulation images to variable VLBI data, in order to infer model parameters and perform model comparisons. We use mock EHT data based on GRMHD simulations to explore the robustness of this Bayesian method and contrast it to approaches that do not consider the effects of variability. We find that time-independent models lead to offset values of the inferred parameters with artificially reduced uncertainties. Moreover, neglecting the variability in the data and the models often leads to erroneous model selections. We finally apply our method to the early EHT data on Sgr A*.

  12. Perfectionistic Cognitions: Stability, Variability, and Changes Over Time.

    Science.gov (United States)

    Prestele, Elisabeth; Altstötter-Gleich, Christine

    2018-02-01

    The construct of perfectionistic cognitions is defined as a state-like construct resulting from a perfectionistic self-schema and activated by specific situational demands. Only a few studies have investigated whether and how perfectionistic cognitions change across different situations and whether they reflect stable between-person differences or also within-person variations over time. We conducted 2 studies to investigate the variability and stability of 3 dimensions of perfectionistic cognitions while situational demands changed (Study 1) and on a daily level during a highly demanding period of time (Study 2). The results of both studies revealed that stable between-person differences accounted for the largest proportion of variance in the dimensions of perfectionistic cognitions and that these differences were validly associated with between-person differences in affect. The frequency of perfectionistic cognitions increased during students' first semester at university, and these average within-person changes were different for the 3 dimensions of perfectionistic cognitions (Study 1). In addition, there were between-person differences in the within-person changes that were validly associated with concurrent changes in closely related constructs (unpleasant mood and tense arousal). Within-person variations in perfectionistic cognitions were also validly associated with variations in unpleasant mood and tense arousal from day to day (Study 2).

  13. Chaos synchronization in time-delayed systems with parameter mismatches and variable delay times

    International Nuclear Information System (INIS)

    Shahverdiev, E.M.; Nuriev, R.A.; Hashimov, R.H.; Shore, K.A.

    2004-06-01

    We investigate synchronization between two unidirectionally, linearly coupled chaotic nonidentical time-delayed systems and show that parameter mismatches are of crucial importance for achieving synchronization. We establish that, independent of the relation between the delay time in the coupled systems and the coupling delay time, only retarded synchronization with the coupling delay time is obtained. We show that, with or without parameter mismatch, neither complete nor anticipating synchronization occurs. We derive existence and stability conditions for the retarded synchronization manifold. We demonstrate our approach using examples of the Ikeda and Mackey-Glass models. Also, for the first time, we investigate chaos synchronization in time-delayed systems with variable delay time and find both existence and sufficient stability conditions for the retarded synchronization manifold with the coupling-delay lag time. (author)

  14. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  15. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    Science.gov (United States)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.

  16. Analysis of time variable gravity data over Africa

    International Nuclear Information System (INIS)

    Barletta, Valentina R.; Aoudia, Abdelkarim

    2010-01-01

    Africa, in principle, is a unique laboratory in which to address the individual contributions of the different facets of the Earth system as well as their interactions. It features a rich hydrology with complex characteristics of rivers and wide basins of different sizes, in addition to lakes, other wetlands, storage reservoirs and groundwater aquifers, together with continuous and discontinuous changes in the physical properties of the Earth's interior. Stretching and heating processes are accompanied by punctuated episodes of faulting and/or volcanism, and longer-term changes in surface elevation that disrupt river drainage and climate. The space gravity mission GRACE, flying since 2002, was expressly designed to detect the time-dependent gravity field in order to study the hydrological cycle of the Earth, but it has also revealed solid Earth phenomena such as post-glacial rebound (PGR) and the signature of a giant earthquake such as the 2004 Sumatra event. Hence the idea of analyzing time variable gravity data over Africa in order to retrieve fingerprints of geophysical phenomena. The exploitation of the GRACE data for geophysics, however, is not straightforward. Indeed, the quality of the signal is not uniform worldwide, and gravity is always the superposition of contributions from the solid Earth as well as climate-related phenomena, which cannot easily be distinguished at first glance, in either time or space. In the present study we show that mass changes cannot be classified simply as trends or periodic signals. We follow an alternative way to separate complementary components, periodic and non-periodic signals, without losing information. We show that the a priori periodic plus linear trend fitting function is not everywhere appropriate and in some cases is so poor as to result in misinterpretation of the data. Variations in long term behavior and periodicities higher than the usual annual (and semi-annual) ones indeed occur, related to geophysical
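
    The point about periodic versus non-periodic components can be illustrated with a small least-squares sketch: fit a constant, a linear trend, and an annual harmonic to a synthetic monthly series containing a step-like change, and inspect what is left in the residual. This is only a toy version of the idea, not the analysis applied to the GRACE fields.

```python
# Hedged sketch: separate a linear trend and an annual harmonic from a
# monthly mass-change series by least squares, then look at the residual
# (the non-periodic part that a trend-plus-annual fit would otherwise miss).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120) / 12.0                      # 10 years, in years
signal = 0.5 * t + 2.0 * np.sin(2 * np.pi * t) + \
         np.where(t > 5, 1.5, 0.0) + rng.normal(0, 0.3, t.size)  # a "jump"

# Design matrix: constant, trend, annual sine/cosine.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
residual = signal - A @ coef

print("fitted trend (per year):", round(coef[1], 2))
print("residual std (non-periodic part):", round(float(residual.std()), 2))
```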

  17. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  18. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...

  19. Protecting chips against hold time violations due to variability

    CERN Document Server

    Neuberger, Gustavo; Reis, Ricardo

    2013-01-01

    With the development of Very-Deep Sub-Micron technologies, process variability is becoming increasingly important and is a very significant issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value but become random variables, with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Becaus

  20. Punishment induced behavioural and neurophysiological variability reveals dopamine-dependent selection of kinematic movement parameters

    Science.gov (United States)

    Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.

    2013-01-01

    Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607

  1. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while penalty could be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods: a SKI-Cox method and a wLASSO-Cox method to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma.

  2. Variable selectivity and the role of nutritional quality in food selection by a planktonic rotifer

    International Nuclear Information System (INIS)

    Sierszen, M.E.

    1990-01-01

    To investigate the potential for selective feeding to enhance fitness, I test the hypothesis that an herbivorous zooplankter selects those food items that best support its reproduction. Under this hypothesis, growth and reproduction on selected food items should be higher than on less preferred items. The hypothesis is not supported. In situ selectivity by the rotifer Keratella taurocephala for Cryptomonas relative to Chlamydomonas goes through a seasonal cycle, in apparent response to fluctuating Cryptomonas populations. However, reproduction on a unialgal diet of Cryptomonas is consistently high and similar to that on Chlamydomonas. Oocystis, which also supports reproduction equivalent to that supported by Chlamydomonas, is sometimes rejected by K. taurocephala. In addition, K. taurocephala does not discriminate between Merismopedia and Chlamydomonas even though Merismopedia supports virtually no reproduction by the rotifer. Selection by K. taurocephala does not simply maximize the intake of food items that yield high reproduction. Selectivity is a complex, dynamic process, one function of which may be the exploitation of locally or seasonally abundant foods. (author)

  3. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    Science.gov (United States)

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  4. Feature Selection Criteria for Real Time EKF-SLAM Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Auat Cheein

    2010-02-01

    This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real-time and constant-time implementations of the SLAM. The selection procedure consists of choosing the features to which the SLAM system state covariance is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy feature selection approach, is also performed.

  5. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
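
    A stripped-down version of the simulation comparison described above might look like the following: clustered plot counts are generated for two occasions, and the change in the population total is estimated either from permanent plots (the same plots revisited) or from temporary plots (independent samples on each occasion). The distributions, sample sizes, and change model are illustrative assumptions, not the study's design.

```python
# Hedged sketch: Monte Carlo comparison of permanent versus temporary plots
# for estimating change in a clustered population total.
import numpy as np

rng = np.random.default_rng(0)
n_plots, n_sample, n_sims = 400, 40, 2000
per_plot_change = 0.05                       # mean change per plot

err_perm, err_temp = [], []
for _ in range(n_sims):
    # Clustered counts at time 1; correlated counts at time 2.
    counts1 = rng.negative_binomial(1, 0.2, n_plots).astype(float)
    counts2 = counts1 + rng.normal(per_plot_change, 0.5, n_plots)
    true_change = counts2.sum() - counts1.sum()

    perm = rng.choice(n_plots, n_sample, replace=False)    # revisit same plots
    temp1 = rng.choice(n_plots, n_sample, replace=False)   # fresh sample, time 1
    temp2 = rng.choice(n_plots, n_sample, replace=False)   # fresh sample, time 2

    err_perm.append(n_plots * (counts2[perm] - counts1[perm]).mean() - true_change)
    err_temp.append(n_plots * (counts2[temp2].mean() - counts1[temp1].mean()) - true_change)

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print("permanent plots RMSE:", round(rmse(err_perm), 2))
print("temporary plots RMSE:", round(rmse(err_temp), 2))
```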

  6. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 &lt; q &lt; 1) penalties, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the real diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  7. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Abstract Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  8. Market timing and selectivity performance of mutual funds in Ghana

    Directory of Open Access Journals (Sweden)

    Abubakar Musah

    2014-07-01

    Interest in mutual funds in Ghana has grown tremendously over the last decade, as evidenced by the continuous increases in the number of funds and total funds under management. However, no empirical work has been done on the selectivity and timing ability of the mutual fund managers. Using monthly returns data hand-collected from the reports of the mutual fund managers for the period January 2007-December 2012, this paper examines the market timing and selectivity ability of mutual fund managers in Ghana using the classic Treynor-Mazuy (1966) and Henriksson-Merton (1981) models. The results suggest that, in general, mutual fund managers in Ghana are not able to effectively select stocks and are not able to predict either the magnitude or the direction of future market returns. More specifically, all of the sample mutual fund managers attain significant negative selectivity coefficients, and most of them attain insignificant negative timing coefficients.
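
    The two regressions named above can be sketched as follows with statsmodels, using simulated monthly excess returns in place of the hand-collected Ghanaian fund data. The Henriksson-Merton specification shown here uses the max(0, market excess return) term, one of the equivalent ways the model is commonly written.

```python
# Hedged sketch of the Treynor-Mazuy and Henriksson-Merton timing regressions
# on simulated monthly excess returns (placeholders for the fund data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 72                                        # six years of monthly data
mkt = rng.normal(0.01, 0.05, n)               # market excess return
fund = 0.002 + 0.8 * mkt + rng.normal(0, 0.02, n)   # fund excess return

# Treynor-Mazuy (1966): a quadratic market term captures timing (gamma).
X_tm = sm.add_constant(np.column_stack([mkt, mkt ** 2]))
tm = sm.OLS(fund, X_tm).fit()

# Henriksson-Merton (1981): an option-like term captures timing.
X_hm = sm.add_constant(np.column_stack([mkt, np.maximum(mkt, 0.0)]))
hm = sm.OLS(fund, X_hm).fit()

print("TM: alpha (selectivity) = %.4f, gamma (timing) = %.4f" %
      (tm.params[0], tm.params[2]))
print("HM: alpha (selectivity) = %.4f, timing coefficient = %.4f" %
      (hm.params[0], hm.params[2]))
```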

  9. The impact of selected organizational variables and managerial leadership on radiation therapists' organizational commitment

    International Nuclear Information System (INIS)

    Akroyd, Duane; Legg, Jeff; Jackowski, Melissa B.; Adams, Robert D.

    2009-01-01

    The purpose of this study was to examine the impact of selected organizational factors and the leadership behavior of supervisors on radiation therapists' commitment to their organizations. The population for this study consists of all full-time clinical radiation therapists registered by the American Registry of Radiologic Technologists (ARRT) in the United States. A random sample of 800 radiation therapists was obtained from the ARRT for this study. Questionnaires were mailed to all participants and measured organizational variables, the managerial leadership variable, and three components of organizational commitment (affective, continuance and normative). It was determined that organizational support and the leadership behavior of supervisors each had a significant and positive effect on the normative and affective commitment of radiation therapists, and each of the models predicted over 40% of the variance in radiation therapists' organizational commitment. This study examined radiation therapists' commitment to their organizations and found that affective (emotional attachment to the organization) and normative (feelings of obligation to the organization) commitments were more important than continuance commitment (awareness of the costs of leaving the organization). This study can help radiation oncology administrators and physicians to understand the values their radiation therapy employees hold that are predictive of their commitment to the organization. A crucial result of the study is the importance of the perceived support of the organization and the leadership skills of managers/supervisors on radiation therapists' commitment to the organization.

  10. The impact of selected organizational variables and managerial leadership on radiation therapists' organizational commitment

    Energy Technology Data Exchange (ETDEWEB)

    Akroyd, Duane [Department of Adult and Community College Education, College of Education, Campus Box 7801, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: duane_akroyd@ncsu.edu; Legg, Jeff [Department of Radiologic Sciences, Virginia Commonwealth University, Richmond, VA 23284 (United States); Jackowski, Melissa B. [Division of Radiologic Sciences, University of North Carolina School of Medicine 27599 (United States); Adams, Robert D. [Department of Radiation Oncology, University of North Carolina School of Medicine 27599 (United States)

    2009-05-15

    The purpose of this study was to examine the impact of selected organizational factors and the leadership behavior of supervisors on radiation therapists' commitment to their organizations. The population for this study consists of all full-time clinical radiation therapists registered by the American Registry of Radiologic Technologists (ARRT) in the United States. A random sample of 800 radiation therapists was obtained from the ARRT for this study. Questionnaires were mailed to all participants and measured organizational variables, the managerial leadership variable, and three components of organizational commitment (affective, continuance and normative). It was determined that organizational support and the leadership behavior of supervisors each had a significant and positive effect on the normative and affective commitment of radiation therapists, and each of the models predicted over 40% of the variance in radiation therapists' organizational commitment. This study examined radiation therapists' commitment to their organizations and found that affective (emotional attachment to the organization) and normative (feelings of obligation to the organization) commitments were more important than continuance commitment (awareness of the costs of leaving the organization). This study can help radiation oncology administrators and physicians to understand the values their radiation therapy employees hold that are predictive of their commitment to the organization. A crucial result of the study is the importance of the perceived support of the organization and the leadership skills of managers/supervisors on radiation therapists' commitment to the organization.

  11. Taking time seriously. A theory of socioemotional selectivity.

    Science.gov (United States)

    Carstensen, L L; Isaacowitz, D M; Charles, S T

    1999-03-01

    Socioemotional selectivity theory claims that the perception of time plays a fundamental role in the selection and pursuit of social goals. According to the theory, social motives fall into 1 of 2 general categories--those related to the acquisition of knowledge and those related to the regulation of emotion. When time is perceived as open-ended, knowledge-related goals are prioritized. In contrast, when time is perceived as limited, emotional goals assume primacy. The inextricable association between time left in life and chronological age ensures age-related differences in social goals. Nonetheless, the authors show that the perception of time is malleable, and social goals change in both younger and older people when time constraints are imposed. The authors argue that time perception is integral to human motivation and suggest potential implications for multiple subdisciplines and research interests in social, developmental, cultural, cognitive, and clinical psychology.

  12. Utilizing Response Time Distributions for Item Selection in CAT

    Science.gov (United States)

    Fan, Zhewen; Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey

    2012-01-01

    Traditional methods for item selection in computerized adaptive testing only focus on item information without taking into consideration the time required to answer an item. As a result, some examinees may receive a set of items that take a very long time to finish, and information is not accrued as efficiently as possible. The authors propose two…

  13. Leisure time physical activity, screen time, social background, and environmental variables in adolescents.

    Science.gov (United States)

    Mota, Jorge; Gomes, Helena; Almeida, Mariana; Ribeiro, José Carlos; Santos, Maria Paula

    2007-08-01

    This study analyzes the relationships between leisure time physical activity (LTPA), sedentary behaviors, socioeconomic status, and perceived environmental variables. The sample comprised 815 girls and 746 boys. In girls, non-LTPA participants reported significantly more screen time. Girls with safety concerns were more likely to be in the non-LTPA group (OR = 0.60) and those who agreed with the importance of aesthetics were more likely to be in the active-LTPA group (OR = 1.59). In girls, an increase of 1 hr of TV watching was a significant predictor of non-LTPA (OR = 0.38). LTPA for girls, but not for boys, seems to be influenced by certain modifiable factors of the built environment, as well as by time watching TV.

  14. Effects of spring temperatures on the strength of selection on timing of reproduction in a long-distance migratory bird

    NARCIS (Netherlands)

    Visser, Marcel E; Gienapp, Phillip; Husby, Arild; Morrisey, Michael; de la Hera, Iván; Pulido, Francisco; Both, Christiaan

    Climate change has differentially affected the timing of seasonal events for interacting trophic levels, and this has often led to increased selection on seasonal timing. Yet, the environmental variables driving this selection have rarely been identified, limiting our ability to predict future

  15. Countermovement jump height: gender and sport-specific differences in the force-time variables.

    Science.gov (United States)

    Laffaye, Guillaume; Wagner, Phillip P; Tombleson, Tom I L

    2014-04-01

    The goal of this study was to assess (a) the eccentric rate of force development, the concentric force, and selected time variables on vertical performance during the countermovement jump, (b) the existence of gender differences in these variables, and (c) sport-specific differences. The sample was composed of 189 males and 84 females, all elite athletes involved in college and professional sports (primarily football, basketball, baseball, and volleyball). The subjects performed a series of 6 countermovement jumps on a force plate (500 Hz). Average eccentric rate of force development (ECC-RFD), total time (TIME), eccentric time (ECC-T), the ratio between eccentric and total time (ECC-T:T) and average force (CON-F) were extracted from the force-time curves, and vertical jumping performance was measured by the impulse-momentum method. Results show that CON-F is correlated with jump height (r = 0.57), that the force variables differ between the sexes, whereas the time variables do not, showing a similar temporal structure. The best way to jump high is to increase CON-F and ECC-RFD, thus minimizing ECC-T. Principal component analysis (PCA) accounted for 76.8% of the JH variance and revealed that JH is predicted by a temporal and a force component. Furthermore, the PCA comparison made among athletes revealed sport-specific signatures: a temporal-prevailing profile for volleyball players, a weak-force profile with large ECC-T:T for basketball players, and explosive and powerful profiles for football and baseball players.
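
    A rough sketch of how the named force-time variables can be extracted from a 500 Hz vertical ground-reaction-force record is given below. The force trace, the subject mass, and the phase definitions (eccentric phase while the centre of mass moves downward, concentric phase while it moves upward) are simplified assumptions, not the authors' processing pipeline.

```python
# Hedged sketch: extract ECC-T, average ECC-RFD and CON-F from a synthetic
# vertical ground-reaction-force trace sampled at 500 Hz.
import numpy as np

fs = 500.0                      # sampling rate (Hz), as in the study
body_mass, g = 75.0, 9.81       # assumed subject mass (kg)

# Placeholder force trace: quiet standing, an unweighting dip, then a
# push-off peak (two Gaussian bumps around body weight).
t = np.arange(0, 1.2, 1 / fs)
weight = body_mass * g
force = weight * (1
                  - 0.4 * np.exp(-((t - 0.35) / 0.08) ** 2)
                  + 0.9 * np.exp(-((t - 0.70) / 0.10) ** 2))

accel = (force - weight) / body_mass
velocity = np.cumsum(accel) / fs            # centre-of-mass velocity, from rest

ecc = velocity < -1e-3                      # downward (eccentric) phase
t_low = t[ecc].max()                        # end of the downward phase
con = (velocity > 1e-3) & (t > t_low)       # upward (concentric) phase

ecc_t = ecc.sum() / fs                                   # ECC-T, s
ecc_rfd = (force[ecc].max() - force[ecc].min()) / ecc_t  # average ECC-RFD, N/s
con_f = force[con].mean()                                # CON-F, N

print(f"ECC-T = {ecc_t:.3f} s, ECC-RFD = {ecc_rfd:.0f} N/s, CON-F = {con_f:.0f} N")
```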

  16. Variable selection in PLSR and extensions to a multi-block setting for metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    When applying LC-MS or NMR spectroscopy in metabolomics studies, high-dimensional data are generated, and effective tools for variable selection are needed in order to detect the important metabolites. Methods based on sparsity combined with PLSR have recently attracted attention in the field of genomics [1]. They quickly became well established in the field of statistics because a close relationship to the elastic net has been established. In sparse variable selection combined with PLSR, a soft thresholding is applied to each loading weight separately. In the field of chemometrics, Jack-knifing has been introduced for variable selection in PLSR [2]. Jack-knifing has been frequently applied in the field of spectroscopy and is implemented in software tools like The Unscrambler. In Jack-knifing, uncertainty estimates of the regression coefficients are obtained and a t-test is applied to these estimates
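
    A simplified sketch of Jack-knife-style variable selection for PLSR is given below: the model is refitted on cross-validation segments, and each regression coefficient is t-tested against zero using the spread of the segment-wise estimates. This is only an approximation of the Jack-knife procedure referenced above (which compares sub-model coefficients with the full-model ones), applied to simulated data.

```python
# Hedged sketch: segment-wise PLSR coefficients followed by a t-test per
# variable, as a simplified stand-in for Jack-knife variable selection.
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 80, 30
X = rng.normal(size=(n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(0, 0.5, n)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
coefs = []
for train_idx, _ in kf.split(X):
    pls = PLSRegression(n_components=3).fit(X[train_idx], y[train_idx])
    coefs.append(pls.coef_.ravel())
coefs = np.array(coefs)                      # shape (n_segments, p)

# t-test of each coefficient against zero from the segment-wise estimates.
t_stat = coefs.mean(axis=0) / (coefs.std(axis=0, ddof=1) / np.sqrt(len(coefs)))
p_val = 2 * stats.t.sf(np.abs(t_stat), df=len(coefs) - 1)
print("selected variables:", np.where(p_val < 0.05)[0])
```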

  17. Real-time flavor tagging selection in ATLAS

    CERN Document Server

    Madaffari, D

    2016-01-01

    In high-energy physics experiments, the online selection is crucial for rejecting the overwhelming number of uninteresting collisions. In particular, the ATLAS experiment includes b-jet selections in its trigger in order to select final states with significant heavy-flavor content. Dedicated selections are developed to promptly identify fully hadronic final states containing b-jets while maintaining affordable trigger rates. ATLAS successfully operated b-jet trigger selections during both the 2011 and 2012 Large Hadron Collider data-taking campaigns. Work is ongoing to improve the performance of the online tagging algorithms to be deployed in Run 2 in 2015. An overview of the Run 1 ATLAS b-jet trigger strategy, along with future prospects, is presented in this paper. Data-driven techniques to extract the online b-tagging performance, a key ingredient for all analyses relying on such triggers, are also discussed and preliminary results presented.

  18. Variability of Cost and Time Delivery of Educational Buildings in Nigeria

    Directory of Open Access Journals (Sweden)

    Aghimien, Douglas Omoregie

    2017-09-01

    Cost and time overruns in construction projects have become a recurring problem in construction industries around the world, especially in developing countries. This situation is unhealthy for public educational buildings, which are executed with limited government funds and are in most cases time-sensitive, as they need to cater for the influx of students into the institutions. This study therefore assessed the variability of cost and time delivery of educational buildings in Nigeria, using a study of selected educational buildings within the country. A pro forma was used to gather cost and time data on the selected building projects, while a structured questionnaire was used to harness information on possible measures for reducing the variability from the construction participants who were involved in the delivery of these projects. Paired-sample t-test, percentage, relative importance index, and the Kruskal-Wallis test were adopted for data analyses. The study reveals that there is a significant difference between the initial and final cost of delivering educational buildings, as an average deviation of 4.87%, with a significant p-value of 0.000, was experienced on all assessed projects. For time delivery, there is also a significant difference between the initially estimated time and the final time of construction, as an average deviation of 130%, with a significant p-value of 0.000, was found. To remedy these problems, the study identified prompt payment for executed works, predicting market price fluctuation and incorporating it into the initial estimate, and the owner's involvement at the planning and design phase as some of the possible measures to be adopted.
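
    The paired-sample comparison used in the study can be sketched in a few lines; the cost figures below are made-up placeholders, not the Nigerian project data.

```python
# Hedged sketch: paired t-test of initial versus final contract sums,
# plus the average percentage deviation across projects.
from scipy import stats

initial_cost = [120.0, 85.5, 240.0, 310.2, 97.8, 150.0]   # placeholder figures
final_cost   = [126.4, 89.0, 252.1, 322.7, 101.5, 158.9]

t_stat, p_value = stats.ttest_rel(final_cost, initial_cost)
pct_dev = [100 * (f - i) / i for f, i in zip(final_cost, initial_cost)]
mean_dev = sum(pct_dev) / len(pct_dev)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, mean deviation = {mean_dev:.1f}%")
```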

  19. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe that an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model-building strategy that considers only autoregressive representations will lead to non-parsimonious models and to a loss of forecasting accuracy.

  20. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)

  1. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
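
    The original FORTRAN programs are not reproduced here; the snippet below is a hypothetical present-day sketch of the knot-elimination idea: fit a least-squares B-spline, then repeatedly drop the interior knot whose removal degrades the fit least, judged by a partial F-test.

```python
# A minimal sketch (not the paper's FORTRAN code) of backward knot elimination
# for least-squares spline fitting, assuming sorted data arrays x and y and a
# list of candidate interior knots satisfying the usual spline conditions.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline
from scipy.stats import f as f_dist

def backward_knot_elimination(x, y, knots, k=3, alpha=0.05):
    """Drop interior knots while their removal is not statistically
    significant according to a partial F-test (sketch only)."""
    knots = list(knots)
    while len(knots) > 1:                      # keep at least one interior knot
        full = LSQUnivariateSpline(x, y, knots, k=k)
        sse_full = full.get_residual()
        df_full = len(x) - (len(knots) + k + 1)   # n minus number of B-spline coefficients

        # Refit once per candidate knot and keep the least harmful removal
        trials = [(LSQUnivariateSpline(x, y, knots[:i] + knots[i + 1:], k=k).get_residual(), i)
                  for i in range(len(knots))]
        sse_reduced, i_best = min(trials)

        F = (sse_reduced - sse_full) / (sse_full / df_full)
        if f_dist.sf(F, 1, df_full) > alpha:   # removal not significant: drop the knot
            knots.pop(i_best)
        else:
            break
    return knots
```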

  2. Seleção de variáveis em QSAR Variable selection in QSAR

    Directory of Open Access Journals (Sweden)

    Márcia Miguel Castro Ferreira

    2002-05-01

    Full Text Available The process of building mathematical models in quantitative structure-activity relationship (QSAR) studies is generally limited by the size of the dataset from which variables are selected. For huge datasets, the task of selecting a given number of variables that produces the best linear model can be enormous, if not infeasible. In this case, some methods can be used to separate good parameter combinations from bad ones. In this paper three methodologies are analyzed: systematic search, genetic algorithms, and chemometric methods. These methods are presented and discussed through practical examples.

  3. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha

    2012-12-01

    The reduced-rank regression is an effective method in predicting multiple response variables from the same set of predictor variables. It reduces the number of model parameters and takes advantage of interrelations between the response variables and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group and show that this penalty satisfies certain desirable invariance properties. We develop two numerical algorithms to solve the penalized regression problem and establish the asymptotic consistency of the proposed method. In particular, the manifold structure of the reduced-rank regression coefficient matrix is considered and studied in our theoretical analysis. In our simulation study and real data analysis, the new method is compared with several existing variable selection methods for multivariate regression and exhibits competitive performance in prediction and variable selection. © 2012 American Statistical Association.
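
    As a rough numerical illustration of the flavor of such a penalty (this is not the authors' algorithm, and the rank projection used here is only a heuristic), one can alternate a gradient step on the least-squares loss, a truncated-SVD projection onto rank-r matrices, and a row-wise group-lasso shrinkage that zeroes out irrelevant predictors:

```python
# Heuristic row-sparse, rank-constrained multivariate regression (illustrative
# sketch only, not the penalized estimator or algorithms of the paper).
import numpy as np

def sparse_reduced_rank(X, Y, rank=2, lam=1.0, n_iter=500):
    p = X.shape[1]
    B = np.zeros((p, Y.shape[1]))
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the gradient

    for _ in range(n_iter):
        # Gradient step on 0.5 * ||Y - XB||_F^2
        B = B - step * (X.T @ (X @ B - Y))

        # Heuristic projection onto rank-r matrices via truncated SVD
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        s[rank:] = 0.0
        B = (U * s) @ Vt

        # Proximal step for the row-wise group-lasso penalty: soft-threshold row norms
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        B *= np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)

    return B   # all-zero rows correspond to deselected predictors
```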

  4. THE TIME-DOMAIN SPECTROSCOPIC SURVEY: UNDERSTANDING THE OPTICALLY VARIABLE SKY WITH SEQUELS IN SDSS-III

    Energy Technology Data Exchange (ETDEWEB)

    Ruan, John J.; Anderson, Scott F.; Davenport, James R. A. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Green, Paul J.; Morganson, Eric [Harvard Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Eracleous, Michael; Brandt, William N. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802 (United States); Myers, Adam D. [Department of Physics and Astronomy 3905, University of Wyoming, 1000 E. University, Laramie, WY 82071 (United States); Badenes, Carles [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics, and Cosmology Center (PITT-PACC), University of Pittsburgh (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter Street, Madison, WI 53706 (United States); Chambers, Kenneth C.; Flewelling, Heather; Kaiser, Nick [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Dawson, Kyle S. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Heckman, Timothy M. [Center for Astrophysical Sciences, Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Isler, Jedidah C. [Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Kneib, Jean-Paul [Laboratoire d’astrophysique, Ecole Polytechnique Fédérale de Lausanne Observatoire de Sauverny, 1290 Versoix (Switzerland); MacLeod, Chelsea L.; Ross, Nicholas P. [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ (United Kingdom); Paris, Isabelle, E-mail: jruan@astro.washington.edu [INAF—Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, I-34131 Trieste (Italy); and others

    2016-07-10

    The Time-Domain Spectroscopic Survey (TDSS) is an SDSS-IV eBOSS subproject primarily aimed at obtaining identification spectra of ∼220,000 optically variable objects systematically selected from SDSS/Pan-STARRS1 multi-epoch imaging. We present a preview of the science enabled by TDSS, based on TDSS spectra taken over ∼320 deg{sup 2} of sky as part of the SEQUELS survey in SDSS-III, which is in part a pilot survey for eBOSS in SDSS-IV. Using the 15,746 TDSS-selected single-epoch spectra of photometrically variable objects in SEQUELS, we determine the demographics of our variability-selected sample and investigate the unique spectral characteristics inherent in samples selected by variability. We show that variability-based selection of quasars complements color-based selection by selecting additional redder quasars and mitigates redshift biases to produce a smooth quasar redshift distribution over a wide range of redshifts. The resulting quasar sample contains systematically higher fractions of blazars and broad absorption line quasars than from color-selected samples. Similarly, we show that M dwarfs in the TDSS-selected stellar sample have systematically higher chromospheric active fractions than the underlying M-dwarf population based on their H α emission. TDSS also contains a large number of RR Lyrae and eclipsing binary stars with main-sequence colors, including a few composite-spectrum binaries. Finally, our visual inspection of TDSS spectra uncovers a significant number of peculiar spectra, and we highlight a few cases of these interesting objects. With a factor of ∼15 more spectra, the main TDSS survey in SDSS-IV will leverage the lessons learned from these early results for a variety of time-domain science applications.

  5. THE TIME-DOMAIN SPECTROSCOPIC SURVEY: UNDERSTANDING THE OPTICALLY VARIABLE SKY WITH SEQUELS IN SDSS-III

    International Nuclear Information System (INIS)

    Ruan, John J.; Anderson, Scott F.; Davenport, James R. A.; Green, Paul J.; Morganson, Eric; Eracleous, Michael; Brandt, William N.; Myers, Adam D.; Badenes, Carles; Bershady, Matthew A.; Chambers, Kenneth C.; Flewelling, Heather; Kaiser, Nick; Dawson, Kyle S.; Heckman, Timothy M.; Isler, Jedidah C.; Kneib, Jean-Paul; MacLeod, Chelsea L.; Ross, Nicholas P.; Paris, Isabelle

    2016-01-01

    The Time-Domain Spectroscopic Survey (TDSS) is an SDSS-IV eBOSS subproject primarily aimed at obtaining identification spectra of ∼220,000 optically variable objects systematically selected from SDSS/Pan-STARRS1 multi-epoch imaging. We present a preview of the science enabled by TDSS, based on TDSS spectra taken over ∼320 deg 2 of sky as part of the SEQUELS survey in SDSS-III, which is in part a pilot survey for eBOSS in SDSS-IV. Using the 15,746 TDSS-selected single-epoch spectra of photometrically variable objects in SEQUELS, we determine the demographics of our variability-selected sample and investigate the unique spectral characteristics inherent in samples selected by variability. We show that variability-based selection of quasars complements color-based selection by selecting additional redder quasars and mitigates redshift biases to produce a smooth quasar redshift distribution over a wide range of redshifts. The resulting quasar sample contains systematically higher fractions of blazars and broad absorption line quasars than from color-selected samples. Similarly, we show that M dwarfs in the TDSS-selected stellar sample have systematically higher chromospheric active fractions than the underlying M-dwarf population based on their H α emission. TDSS also contains a large number of RR Lyrae and eclipsing binary stars with main-sequence colors, including a few composite-spectrum binaries. Finally, our visual inspection of TDSS spectra uncovers a significant number of peculiar spectra, and we highlight a few cases of these interesting objects. With a factor of ∼15 more spectra, the main TDSS survey in SDSS-IV will leverage the lessons learned from these early results for a variety of time-domain science applications.

  6. Time step size selection for radiation diffusion calculations

    International Nuclear Information System (INIS)

    Rider, W.J.; Knoll, D.A.

    1999-01-01

    The purpose of this note is to describe a time step control technique as applied to radiation diffusion. Standard practice only provides a heuristic criterion related to the relative change in the dependent variables. The authors propose an alternative based on relatively simple physical principles. This time step control applies to methods of solution that are unconditionally stable and that converge the nonlinearities within a time step in the governing equations. Commonly, nonlinearities in the governing equations are evaluated using existing (old-time) data. The authors refer to this as the semi-implicit (SI) method. When a method converges the nonlinearities within a time step, the entire governing equation, including all nonlinearities, is self-consistently evaluated using advanced-time data (with appropriate time centering for accuracy).

  7. Dynamics of macroeconomic and financial variables in different time horizons

    OpenAIRE

    Kim Karlsson, Hyunjoo

    2012-01-01

    This dissertation consists of an introductory chapter and four papers dealing with financial issues of open economies, which fall into two broad categories: 1) exchange rate movements and 2) stock market interdependence. The first paper covers how exchange rate changes affect the prices of internationally traded goods. With the variables (the price of exports in the exporters' currency and the exchange rate, both in logarithmic form) being cointegrated, a model with both lon...

  8. Current Debates on Variability in Child Welfare Decision-Making: A Selected Literature Review

    Directory of Open Access Journals (Sweden)

    Emily Keddell

    2014-11-01

    Full Text Available This article considers selected drivers of decision variability in child welfare decision-making and explores current debates in relation to these drivers. Covering the related influences of national orientation, risk and responsibility, inequality and poverty, evidence-based practice, constructions of abuse and its causes, domestic violence, and cognitive processes, it discusses the literature with regard to how each of these influences decision variability. It situates these debates in relation to the ethical issue of variability and the equity issues that variability raises. I propose that, despite the ecological complexity that drives decision variability, improving internal (within-country) decision consistency is still a valid goal. It may be that the use of annotated case examples, kind learning systems, and continued commitments to the social justice issues of inequality and individualisation can contribute to this goal.

  9. Evaluating Selection and Timing Ability of a Mutual Fund

    Directory of Open Access Journals (Sweden)

    Duguleană L.

    2009-12-01

    Full Text Available The paper presents the methodology and a case study for evaluating the performance of a mutual fund by examining the timing and selection abilities of a portfolio manager. Two major models are considered for separating the timing and selection abilities of the fund manager. The mutual fund chosen for the study is the German blue-chip fund “DWS Deutsche Aktien Typ O”, which includes most of the DAX 30 companies. The data consist of 117 monthly observations of the fund returns from January 1999 to September 2008. EViews was used to analyse the data.
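
    The two models are not named in this record, so the following is an assumed example of how timing and selection ability are commonly separated: a Treynor-Mazuy-style regression adds a quadratic market term, with the intercept capturing selection (stock-picking) ability and the quadratic coefficient capturing market timing.

```python
# Hypothetical Treynor-Mazuy-style decomposition, assuming arrays of monthly
# excess returns for the fund (r_fund) and the market benchmark (r_mkt).
import numpy as np
import statsmodels.api as sm

def timing_selection(r_fund, r_mkt):
    X = sm.add_constant(np.column_stack([r_mkt, r_mkt ** 2]))
    fit = sm.OLS(r_fund, X).fit()
    alpha, beta, gamma = fit.params          # selection, market exposure, timing
    return {"selection_alpha": alpha, "timing_gamma": gamma, "summary": fit.summary()}
```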

  10. The Salience of Selected Variables on Choice for Movie Attendance among High School Students.

    Science.gov (United States)

    Austin, Bruce A.

    A questionnaire was designed for a study assessing both the importance of 28 variables in movie attendance and the importance of movie-going as a leisure-time activity. Respondents were 130 ninth and twelfth grade students. The 28 variables were broadly organized into eight categories: movie production personnel, production elements, advertising,…

  11. Quantifying Selection with Pool-Seq Time Series Data.

    Science.gov (United States)

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates of selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
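
    A deliberately simplified sketch of the underlying idea (not the authors' estimator, which handles replicates, drift and pooled-sequencing noise): under deterministic genic selection the logit of the allele frequency changes approximately linearly over generations, so a least-squares slope gives a rough estimate of the selection coefficient s.

```python
# Toy estimate of a selection coefficient s from an allele-frequency trajectory,
# assuming deterministic genic selection (drift and sequencing noise ignored).
import numpy as np

def estimate_s(generations, freqs):
    """generations: array of time points; freqs: allele frequencies in (0, 1)."""
    logit = np.log(freqs / (1.0 - freqs))
    s, intercept = np.polyfit(generations, logit, 1)   # slope is approximately s
    return s

# Example trajectory rising from 0.1 towards fixation (illustrative numbers)
gens = np.array([0, 10, 20, 30, 40])
freqs = np.array([0.10, 0.18, 0.30, 0.45, 0.61])
print(estimate_s(gens, freqs))
```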

  12. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on the VIP criterion using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on the VIP criterion using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant under Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on the VIP criterion under Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of the results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables, respectively, with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control the type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings...
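
    A minimal sketch of the LASSO-with-bootstrap idea (scaling, cut-off values and the exact reproducibility measure used in the study are assumptions here): refit a cross-validated LASSO on bootstrap resamples and report how often each variable is selected.

```python
# Bootstrap selection frequencies for LASSO variable selection, assuming a
# feature matrix X (samples x metabolites) and a numerically coded response y.
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_bootstrap_frequencies(X, y, n_boot=100, random_state=0):
    rng = np.random.default_rng(random_state)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample with replacement
        model = LassoCV(cv=5).fit(X[idx], y[idx])
        counts += model.coef_ != 0
    return counts / n_boot   # fraction of resamples in which each variable is kept
```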

  13. EFFECT OF CORE TRAINING ON SELECTED HEMATOLOGICAL VARIABLES AMONG BASKETBALL PLAYERS

    OpenAIRE

    K. Rejinadevi; Dr. C. Ramesh

    2017-01-01

    The purpose of the study was to find out the effect of core training on selected haematological variables among basketball players. For the purpose of the study, forty male basketball players were selected as subjects from S.V.N College and Arul Anandar College, Madurai, Tamilnadu at random, and their age ranged from 18 to 25 years. The selected subjects were divided into two groups of twenty subjects each. Group I acted as the core training group and Group II acted as the control group. The experimenta...

  14. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2012-01-01

    and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group

  15. Meta-Statistics for Variable Selection: The R Package BioMark

    Directory of Open Access Journals (Sweden)

    Ron Wehrens

    2012-11-01

    Full Text Available Biomarker identification is an ever more important topic in the life sciences. With the advent of measurement methodologies based on microarrays and mass spectrometry, thousands of variables are routinely being measured on complex biological samples. Often, the question is what makes two groups of samples different. Classical hypothesis testing suffers from the multiple testing problem; however, correcting for this often leads to a lack of power. In addition, choosing α cutoff levels remains somewhat arbitrary. Also, in a regression context, a model depending on few but relevant variables will be more accurate and precise, and easier to interpret biologically. We propose an R package, BioMark, implementing two meta-statistics for variable selection. The first, higher criticism, presents a data-dependent selection threshold for significance, instead of a cookbook value of α = 0.05. It is applicable in all cases where two groups are compared. The second, stability selection, is more general and can also be applied in a regression context. This approach uses repeated subsampling of the data in order to assess the variability of the model coefficients and selects those that remain consistently important. It is shown using experimental spike-in data from the field of metabolomics that both approaches work well with real data. BioMark also contains functionality for simulating data with specific characteristics for algorithm development and testing.
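
    A compact sketch of the higher-criticism idea (following the common Donoho-Jin formulation; the BioMark package's exact conventions are not reproduced): sort the per-variable p-values, compute the HC statistic, and keep the variables whose p-values fall below the point at which the statistic is maximized.

```python
# Higher-criticism thresholding over per-variable p-values (a sketch of the
# general idea, not the BioMark implementation).
import numpy as np

def higher_criticism_select(p_values, alpha0=0.1):
    p = np.asarray(p_values)
    n = p.size
    order = np.argsort(p)
    p_sorted = p[order]
    i = np.arange(1, n + 1)

    hc = np.sqrt(n) * (i / n - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted) + 1e-12)
    upper = max(int(alpha0 * n), 1)        # search only the lower fraction of p-values
    i_star = np.argmax(hc[:upper])

    selected = np.zeros(n, dtype=bool)
    selected[order[: i_star + 1]] = True   # keep variables below the HC-maximizing p-value
    return selected
```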

  16. A Robust Supervised Variable Selection for Noisy High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Schlenker, Anna

    2015-01-01

    Roč. 2015, Article 320385 (2015), s. 1-10 ISSN 2314-6133 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : dimensionality reduction * variable selection * robustness Subject RIV: BA - General Mathematics Impact factor: 2.134, year: 2015

  17. Sparse supervised principal component analysis (SSPCA) for dimension reduction and variable selection

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Ghodsi, Ali; Clemmensen, Line H.

    2017-01-01

    Principal component analysis (PCA) is one of the main unsupervised pre-processing methods for dimension reduction. When the training labels are available, it is worth using a supervised PCA strategy. In cases that both dimension reduction and variable selection are required, sparse PCA (SPCA...

  18. Cataclysmic variables from a ROSAT/2MASS selection - I. Four new intermediate polars

    NARCIS (Netherlands)

    Gänsicke, B.T.; Marsh, T.R.; Edge, A.; Rodríguez-Gil, P.; Steeghs, D.; Araujo-Betancor, S.; Harlaftis, E.; Giannakis, O.; Pyrzas, S.; Morales-Rueda, L.; Aungwerojwit, A.

    2005-01-01

    We report the first results from a new search for cataclysmic variables (CVs) using a combined X-ray (ROSAT)/infrared (2MASS) target selection that discriminates against background active galactic nuclei. Identification spectra were obtained at the Isaac Newton Telescope for a total of 174 targets,

  19. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross-validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for the regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from a large number of molecular descriptors.

  20. On time-frequence analysis of heart rate variability

    NARCIS (Netherlands)

    H.G. van Steenis (Hugo)

    2002-01-01

    The aim of this research is to develop a time-frequency method suitable to study HRV in greater detail. The following approach was used: • two known time-frequency representations were applied to HRV to understand their advantages and disadvantages in describing HRV in frequency and in

  1. Stroop interference and the timing of selective response activation.

    NARCIS (Netherlands)

    Lansbergen, M.M.; Kenemans, J.L.

    2008-01-01

    OBJECTIVE: To examine the exact timing of selective response activation in a manual color-word Stroop task. METHODS: Healthy individuals performed two versions of a manual color-word Stroop task, varying in the probability of incongruent color-words, while EEG was recorded. RESULTS: Stroop

  2. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), Pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...

  3. Variable selection in the explorative analysis of several data blocks in metabolomics

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Nørskov, Natalja; Yde, Christian Clement

    highly correlated data sets in one integrated approach. Due to the high number of variables in data sets from metabolomics (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different metabolomics data sets need to be related. Tools for handling the mental overflow and minimising false discovery rates, using both statistical and biological validation in an integrative approach, are needed. In this paper different strategies for variable selection were considered with respect to false discovery and the possibility for biological validation. The data set used in this study is metabolomics data from an animal intervention study. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using NMR and LC-MS based...

  4. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    Science.gov (United States)

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aimed at identifying the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Penalized regression procedures for variable selection in the potential outcomes framework.

    Science.gov (United States)

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L

    2015-05-10

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple 'impute, then select' class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data, and imputation are drawn. A difference least absolute shrinkage and selection operator algorithm is defined, along with its multiple imputation analogs. The procedures are illustrated using a well-known right-heart catheterization dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Not accounting for interindividual variability can mask habitat selection patterns: a case study on black bears.

    Science.gov (United States)

    Lesmerises, Rémi; St-Laurent, Martin-Hugues

    2017-11-01

    Habitat selection studies conducted at the population scale commonly aim to describe general patterns that could improve our understanding of the limiting factors in species-habitat relationships. Researchers often consider interindividual variation in selection patterns to control for its effects and avoid pseudoreplication by using mixed-effect models that include individuals as random factors. Here, we highlight common pitfalls and possible misinterpretations of this strategy by describing habitat selection of 21 black bears Ursus americanus. We used Bayesian mixed-effect models and compared results obtained when using random intercept (i.e., population level) versus calculating individual coefficients for each independent variable (i.e., individual level). We then related interindividual variability to individual characteristics (i.e., age, sex, reproductive status, body condition) in a multivariate analysis. The assumption of comparable behavior among individuals was verified only in 40% of the cases in our seasonal best models. Indeed, we found strong and opposite responses among sampled bears and individual coefficients were linked to individual characteristics. For some covariates, contrasted responses canceled each other out at the population level. In other cases, interindividual variability was concealed by the composition of our sample, with the majority of the bears (e.g., old individuals and bears in good physical condition) driving the population response (e.g., selection of young forest cuts). Our results stress the need to consider interindividual variability to avoid misinterpretation and uninformative results, especially for a flexible and opportunistic species. This study helps to identify some ecological drivers of interindividual variability in bear habitat selection patterns.

  7. Handling Time-dependent Variables : Antibiotics and Antibiotic Resistance

    NARCIS (Netherlands)

    Munoz-Price, L. Silvia; Frencken, Jos F.; Tarima, Sergey; Bonten, Marc

    2016-01-01

    Elucidating quantitative associations between antibiotic exposure and antibiotic resistance development is important. In the absence of randomized trials, observational studies are the next best alternative to derive such estimates. Yet, as antibiotics are prescribed for varying time periods,

  8. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
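
    A simplified sketch of how a quantile count model can be fitted in practice (the authors' software and exact specification are not given in this record; this follows the common jittering idea of adding uniform noise to break ties among discrete counts):

```python
# Simplified jittered quantile regression for count data (a sketch, not the
# full jittering estimator), assuming a design matrix X of environmental
# variables and a vector of nightly trap counts y.
import numpy as np
import statsmodels.api as sm

def jittered_quantile_fit(X, y, tau=0.9, n_jitter=50, random_state=0):
    rng = np.random.default_rng(random_state)
    Xc = sm.add_constant(X)
    coefs = []
    for _ in range(n_jitter):
        z = y + rng.uniform(0.0, 1.0, size=len(y))   # break ties among discrete counts
        fit = sm.QuantReg(np.log(z), Xc).fit(q=tau)  # model the tau-th quantile
        coefs.append(fit.params)
    return np.mean(coefs, axis=0)                    # average coefficients over jitters
```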

  9. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    Science.gov (United States)

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.

  10. Uninformative variable elimination assisted by Gram-Schmidt Orthogonalization/successive projection algorithm for descriptor selection in QSAR

    DEFF Research Database (Denmark)

    Omidikia, Nematollah; Kompany-Zareh, Mohsen

    2013-01-01

    Employment of Uninformative Variable Elimination (UVE) as a robust variable selection method is reported in this study. Each regression coefficient represents the contribution of the corresponding variable in the established model, but in the presence of uninformative variables as well as collinearity, the reliability of the regression coefficient's magnitude is questionable. The Successive Projection Algorithm (SPA) and Gram-Schmidt Orthogonalization (GSO) were implemented as pre-selection techniques for removing collinearity and redundancy among variables in the model. Uninformative variable elimination...

  11. Surgeon and type of anesthesia predict variability in surgical procedure times.

    Science.gov (United States)

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated

  12. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    Science.gov (United States)

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing r_Chon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds r_F and r_L for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two-sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a higher irregularity is not possible; it is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2σ and should even exceed a length of 1000 for r = r_Chon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning r_F and r_L showed that there is no optimal choice, but r = r_F = r_L is reasonable with r = r_Chon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical
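
    For reference, a compact sample entropy implementation in a standard formulation (parameter conventions, e.g. exactly how templates are counted, may differ slightly from the study):

```python
# Sample entropy of an RR-interval series x, for embedding dimension m and
# threshold r given in units of the series standard deviation (standard
# formulation; conventions may differ from the study).
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    r *= x.std()
    n = len(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B = count_matches(m)        # template matches of length m
    A = count_matches(m + 1)    # template matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```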

  13. Concentrated Hitting Times of Randomized Search Heuristics with Variable Drift

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Witt, Carsten

    2014-01-01

    Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target...

  14. Generating k-independent variables in constant time

    DEFF Research Database (Denmark)

    Christiani, Tobias Lybecker; Pagh, Rasmus

    2014-01-01

    The generation of pseudorandom elements over finite fields is fundamental to the time, space and randomness complexity of randomized algorithms and data structures. We consider the problem of generating k-independent random values over a finite field F in a word RAM model equipped with constant...
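
    As a concrete illustration of k-independence (a textbook construction over a prime field, not necessarily the paper's constant-time scheme): evaluating a uniformly random polynomial of degree k-1 at distinct points yields k-wise independent values.

```python
# Classic k-wise independent generator: a uniformly random polynomial of
# degree k-1 over the prime field Z_P, evaluated at distinct points.
# This is the standard construction, not necessarily the paper's scheme.
import random

P = (1 << 61) - 1          # a Mersenne prime, convenient as the field modulus

def make_k_independent(k, seed=None):
    rng = random.Random(seed)
    coeffs = [rng.randrange(P) for _ in range(k)]   # random degree-(k-1) polynomial

    def h(x):
        # Horner evaluation modulo P
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return h

h = make_k_independent(k=4, seed=42)
print([h(x) for x in range(5)])   # 4-wise independent values in [0, P)
```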

  15. Stability Criteria for Differential Equations with Variable Time Delays

    Science.gov (United States)

    Schley, D.; Shail, R.; Gourley, S. A.

    2002-01-01

    Time delays are an important aspect of mathematical modelling, but often result in highly complicated equations which are difficult to treat analytically. In this paper it is shown how careful application of certain undergraduate tools such as the Method of Steps and the Principle of the Argument can yield significant results. Certain delay…

  16. Walking speed-related changes in stride time variability: effects of decreased speed

    Directory of Open Access Journals (Sweden)

    Dubost Veronique

    2009-08-01

    Full Text Available Abstract Background: Conflicting results have been reported regarding the relationship between stride time variability (STV) and walking speed. While some studies failed to establish any relationship, others reported either a linear or a non-linear relationship. We therefore sought to determine the extent to which a decrease in self-selected walking speed influenced STV among healthy young adults. Methods: The mean value, the standard deviation and the coefficient of variation of stride time, as well as the mean value of stride velocity, were recorded during steady-state walking using the GAITRite® system in 29 healthy young adults who walked consecutively at 88%, 79%, 71%, 64%, 58%, 53%, 46% and 39% of their preferred walking speed. Results: The decrease in stride velocity significantly increased the mean value, SD and CoV of stride time (p …). Conclusion: The results support the assumption that gait variability increases as walking speed decreases and, thus, gait might be more unstable when healthy subjects walk more slowly than their preferred walking speed. Furthermore, these results highlight that a decrease in walking speed can be a potential confounder when evaluating STV.

  17. Time-variable medical education innovation in context

    Directory of Open Access Journals (Sweden)

    Stamy CD

    2018-06-01

    Full Text Available Christopher D Stamy, Christine C Schwartz, Danielle A Phillips, Aparna S Ajjarapu, Kristi J Ferguson, Debra A Schwinn (University of Iowa Carver College of Medicine; Des Moines University Osteopathic Medical Center; Office of Consultation & Research in Medical Education, University of Iowa Carver College of Medicine; Departments of Internal Medicine, Anesthesia, Biochemistry, and Pharmacology, University of Iowa Health Care, Iowa City, IA, USA). Background: Medical education is undergoing robust curricular reform, with several innovative models emerging. In this study, we examined current trends in 3-year Doctor of Medicine (MD) education and place these programs in context. Methods: A survey was conducted among deans of U.S. allopathic medical schools using a structured phone interview regarding the current availability of a 3-year MD pathway, and/or other variations in curricular innovation, within their institution. Those with 3-year programs answered additional questions. Results: Data from 107 institutions were obtained (75% survey response rate). The most common variation in the length of medical education today is the accelerated 3-year pathway. Since 2010, 9 medical schools have introduced parallel 3-year MD programs and another 4 are actively developing such programs. However, the total number of students in 3-year MD tracks remains small (n=199 students, or 0.2% of total medical students). Family medicine and general internal medicine are the most common residency programs selected. Benefits of 3-year MD programs generally include reduction in student debt, stability of guaranteed residency positions, and potential for increasing physician numbers in rural/underserved areas. Drawbacks include concern about fatigue/burnout, difficulty in providing guaranteed residency positions, and additional expense in teaching 2 parallel curricula. Four vignettes of

  18. Start time variability and predictability in railroad train and engine freight and passenger service employees.

    Science.gov (United States)

    2014-04-01

    Start time variability in work schedules is often hypothesized to be a cause of railroad employee fatigue because unpredictable work start times prevent employees from planning sleep and personal activities. This report examines work start time diffe...

  19. Fixation times in evolutionary games under weak selection

    International Nuclear Information System (INIS)

    Altrock, Philipp M; Traulsen, Arne

    2009-01-01

    In evolutionary game dynamics, reproductive success increases with the performance in an evolutionary game. If strategy A performs better than strategy B, strategy A will spread in the population. Under stochastic dynamics, a single mutant will sooner or later take over the entire population or go extinct. We analyze the mean exit times (or average fixation times) associated with this process. We show analytically that these times depend on the payoff matrix of the game in an amazingly simple way under weak selection, i.e. strong stochasticity: the payoff difference Δπ is a linear function of the number of A individuals i, Δπ = u·i + v. The unconditional mean exit time depends only on the constant term v. Given that a single A mutant takes over the population, the corresponding conditional mean exit time depends only on the density-dependent term u. We demonstrate this finding for two commonly applied microscopic evolutionary processes.

  20. Quantum mechanics of time travel through post-selected teleportation

    International Nuclear Information System (INIS)

    Lloyd, Seth; Garcia-Patron, Raul; Maccone, Lorenzo; Giovannetti, Vittorio; Shikano, Yutaka

    2011-01-01

    This paper discusses the quantum mechanics of closed-timelike curves (CTCs) and of other potential methods for time travel. We analyze a specific proposal for such quantum time travel, the quantum description of CTCs based on post-selected teleportation (P-CTCs). We compare the theory of P-CTCs to previously proposed quantum theories of time travel: the theory is inequivalent to Deutsch's theory of CTCs, but it is consistent with path-integral approaches (which are the best suited for analyzing quantum-field theory in curved space-time). We derive the dynamical equations that a chronology-respecting system interacting with a CTC will experience. We discuss the possibility of time travel in the absence of general-relativistic closed-timelike curves, and investigate the implications of P-CTCs for enhancing the power of computation.

  1. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management, especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information from multivariate datasets. Principal component analysis (PCA) has mainly been used; however, supervised multivariate methods and regression techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least squares regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data were derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean-centered and variance-scaled data of predictor (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
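
    A sketch of how VIP scores can be computed from a fitted PLSR model (using scikit-learn attributes here; the study itself used the PLS procedure of SAS/STAT), so that soil variables can be ranked and a minimum dataset selected with the usual VIP > 1 rule of thumb.

```python
# VIP (variable importance in projection) scores from a PLS regression,
# assuming a soil-variable matrix X and a yield vector y (illustrative sketch;
# the study used SAS/STAT PROC PLS rather than scikit-learn).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(X, y, n_components=2):
    pls = PLSRegression(n_components=n_components).fit(X, y)
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p, a = W.shape

    # Variance in y explained by each latent component
    ss_y = np.array([(T[:, k] ** 2).sum() * (Q[0, k] ** 2) for k in range(a)])
    w_norm = W / np.linalg.norm(W, axis=0, keepdims=True)

    vip = np.sqrt(p * (w_norm ** 2 @ ss_y) / ss_y.sum())
    return vip   # variables with VIP > 1 are commonly retained
```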

  2. Holocene Climate Variability on the Centennial and Millennial Time Scale

    Directory of Open Access Journals (Sweden)

    Eun Hee Lee

    2014-12-01

    Full Text Available There have been many suggestions and much debate about climate variability during the Holocene. However, their complex forcing factors and mechanisms have not yet been clearly identified. In this paper, we have examined the Holocene climate cycles and features based on the wavelet analyses of 14C, 10Be, and 18O records. The wavelet results of the 14C and 10Be data show that the cycles of ~2180-2310, ~970, ~500-520, ~350-360, and ~210-220 years are dominant, and the ~1720 and ~1500 year cycles are relatively weak and subdominant. In particular, the ~2180-2310 year periodicity corresponding to the Hallstatt cycle is constantly significant throughout the Holocene, while the ~970 year cycle corresponding to the Eddy cycle is mainly prominent in the early half of the Holocene. In addition, distinctive signals of the ~210-220 year period corresponding to the de Vries cycle appear recurrently in the wavelet distribution of 14C and 10Be, which coincide with the grand solar minima periods. These de Vries cycle events occurred every ~2270 years on average, implying a connection with the Hallstatt cycle. In contrast, the wavelet results of 18O data show that the cycles of ~1900-2000, ~900-1000, and ~550-560 years are dominant, while the ~2750 and ~2500 year cycles are subdominant. The periods of ~2750, ~2500, and ~1900 years being derived from the 18O records of NGRIP, GRIP and GISP2 ice cores, respectively, are rather longer or shorter than the Hallstatt cycle derived from the 14C and 10Be records. The records of these three sites all show the ~900-1000 year periodicity corresponding to the Eddy cycle in the early half of the Holocene.

  3. Multi-Objective Flexible Flow Shop Scheduling Problem Considering Variable Processing Time due to Renewable Energy

    Directory of Open Access Journals (Sweden)

    Xiuli Wu

    2018-03-01

    Full Text Available Renewable energy is an alternative to non-renewable energy to reduce the carbon footprint of manufacturing systems. Finding out how to make an alternative energy-efficient scheduling solution when renewable and non-renewable energy drives production is of great importance. In this paper, a multi-objective flexible flow shop scheduling problem that considers variable processing time due to renewable energy (MFFSP-VPTRE is studied. First, the optimization model of the MFFSP-VPTRE is formulated considering the periodicity of renewable energy and the limitations of energy storage capacity. Then, a hybrid non-dominated sorting genetic algorithm with variable local search (HNSGA-II is proposed to solve the MFFSP-VPTRE. An operation and machine-based encoding method is employed. A low-carbon scheduling algorithm is presented. Besides the crossover and mutation, a variable local search is used to improve the offspring’s Pareto set. The offspring and the parents are combined and those that dominate more are selected to continue evolving. Finally, two groups of experiments are carried out. The results show that the low-carbon scheduling algorithm can effectively reduce the carbon footprint under the premise of makespan optimization and the HNSGA-II outperforms the traditional NSGA-II and can solve the MFFSP-VPTRE effectively and efficiently.

  4. Real-time fiber selection using the Wii remote

    Science.gov (United States)

    Klein, Jan; Scholl, Mike; Köhn, Alexander; Hahn, Horst K.

    2010-02-01

    In the last few years, fiber tracking tools have become popular in clinical contexts, e.g., for pre- and intraoperative neurosurgical planning. The efficient, intuitive, and reproducible selection of fiber bundles still constitutes one of the main issues. In this paper, we present a framework for a real-time selection of axonal fiber bundles using a Wii remote control, a wireless controller for Nintendo's gaming console. It enables the user to select fiber bundles without any other input devices. To achieve a smooth interaction, we propose a novel spacepartitioning data structure for efficient 3D range queries in a data set consisting of precomputed fibers. The data structure which is adapted to the special geometry of fiber tracts allows for queries that are many times faster compared with previous state-of-the-art approaches. In order to extract reliably fibers for further processing, e.g., for quantification purposes or comparisons with preoperatively tracked fibers, we developed an expectationmaximization clustering algorithm that can refine the range queries. Our initial experiments have shown that white matter fiber bundles can be reliably selected within a few seconds by the Wii, which has been placed in a sterile plastic bag to simulate usage under surgical conditions.
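
    The paper's space-partitioning structure is specialized to fiber geometry; as a generic baseline for comparison (an assumption, not the authors' data structure), a k-d tree over the precomputed fiber points already supports the kind of real-time 3D range query described above.

```python
# Baseline 3D range query over precomputed fiber points with a k-d tree
# (a generic comparison structure, not the paper's fiber-adapted one).
import numpy as np
from scipy.spatial import cKDTree

# fibers: list of (n_i x 3) arrays of streamline points (assumed input format)
def build_index(fibers):
    points = np.vstack(fibers)
    fiber_ids = np.concatenate([np.full(len(f), i) for i, f in enumerate(fibers)])
    return cKDTree(points), fiber_ids

def select_fibers(tree, fiber_ids, center, radius):
    """Return the ids of all fibers with at least one point inside the sphere."""
    idx = tree.query_ball_point(center, r=radius)
    return np.unique(fiber_ids[idx])
```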

  5. Sensor combination and chemometric variable selection for online monitoring of Streptomyces coelicolor fed-batch cultivations

    DEFF Research Database (Denmark)

    Ödman, Peter; Johansen, C.L.; Olsson, L.

    2010-01-01

    Fed-batch cultivations of Streptomyces coelicolor, producing the antibiotic actinorhodin, were monitored online by multiwavelength fluorescence spectroscopy and off-gas analysis. Partial least squares (PLS), locally weighted regression, and multilinear PLS (N-PLS) models were built for prediction of biomass and substrate (casamino acids) concentrations, respectively. The effect of combination of fluorescence and gas analyzer data as well as of different variable selection methods was investigated. Improved prediction models were obtained by combination of data from the two sensors and by variable...

  6. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    Science.gov (United States)

    Post, P.; Kärner, O.

    2009-04-01

    Applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from a nearly stationary longer-range variability region. This is an indication of the fact that several geophysical time series show short-range non-stationary behaviour and stationary behaviour over longer ranges (Davis et al., 1996). In order to model series like that, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate scale, because such models are applicable on scales of some decades and more. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of the type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0,1,1)...
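
    A minimal sketch of fitting the ARIMA(0,1,1) model described above to a precipitation series aggregated to a roughly 30-day time step (the aggregation details and input format are assumptions for illustration):

```python
# Fit an ARIMA(0,1,1) model to a precipitation series aggregated to a ~30-day
# time step (illustrative sketch; the series construction details are assumed).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# daily: pandas Series of daily precipitation indexed by date (assumed input)
def fit_climate_scale_model(daily: pd.Series):
    coarse = daily.resample("30D").sum()         # coarse time step hides short-range noise
    model = ARIMA(coarse, order=(0, 1, 1))       # ARIMA(0,1,1): first difference + MA(1)
    result = model.fit()
    print(result.summary())
    return result
```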

  7. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables through input variable selection, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
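
    The partial mutual information, GA-ANN and IIS selectors used in the paper are not reproduced here; as a stand-in, the sketch below ranks candidate inputs by plain mutual information and compares an SVM model built on all inputs with one built on the selected subset, using synthetic data rather than Coriolis flowmeter signals.

```python
# Sketch of input variable selection feeding an SVM-based data-driven model.
# Plain mutual information (scikit-learn) stands in for the PMI/GA-ANN/IIS
# methods of the paper; data are synthetic.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=20, n_informative=5, noise=5.0, random_state=0)

# Rank candidate inputs by mutual information with the target and keep the top five.
mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[::-1][:5]

model = make_pipeline(StandardScaler(), SVR(C=10.0))
score_all = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
score_sel = cross_val_score(model, X[:, top], y, cv=5, scoring="r2").mean()
print(f"R2 with all 20 inputs: {score_all:.3f}; with 5 selected inputs: {score_sel:.3f}")
```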

  8. Variable valve timing in a homogenous charge compression ignition engine

    Science.gov (United States)

    Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.

    2004-08-03

    The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.

  9. Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances

    Science.gov (United States)

    Ulbrich, Norbert M.

    2013-01-01

    Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variables. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variables. They are traditionally substituted by corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance. It, too, measures loads on only one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.

  10. The Selection, Use, and Reporting of Control Variables in International Business Research

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Raswant, Arpit

    2018-01-01

    This study explores the selection, use, and reporting of control variables in studies published in the leading international business (IB) research journals. We review a sample of 246 empirical studies published in the top five IB journals over the period 2012–2015 with particular emphasis...... on selection, use, and reporting of controls. Approximately 83% of studies included only half of what we consider Minimum Standard of Practice with regards to controls, whereas only 38% of the studies met the 75% threshold. We provide recommendations on how to effectively identify, use and report controls...

  11. Joint Bayesian variable and graph selection for regression models with network-structured predictors

    Science.gov (United States)

    Peterson, C. B.; Stingo, F. C.; Vannucci, M.

    2015-01-01

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925

  12. Demographic Variables and Selective, Sustained Attention and Planning through Cognitive Tasks among Healthy Adults

    Directory of Open Access Journals (Sweden)

    Afsaneh Zarghi

    2011-04-01

    Full Text Available Introduction: Cognitive tasks are considered to be applicable and appropriate in assessing cognitive domains. The purpose of our study is to determine whether a relationship exists between the variables of age, sex and education and selective, sustained attention and planning abilities, by means of computerized cognitive tasks among healthy adults. Methods: A cross-sectional study was implemented during 6 months from June to November, 2010 on 84 healthy adults (42 male and 42 female). All participants performed computerized CPT, STROOP and TOL tests after giving consent and being trained. Results: The obtained data indicate that there is a significant correlation coefficient between age, sex and education variables (p<0.05). Discussion: The above-mentioned tests can be used to assess selective, sustained attention and planning.

  13. [Modelling the effect of local climatic variability on dengue transmission in Medellin (Colombia) by means of time series analysis].

    Science.gov (United States)

    Rúa-Uribe, Guillermo L; Suárez-Acosta, Carolina; Chauca, José; Ventosilla, Palmira; Almanza, Rita

    2013-09-01

    Dengue fever is a vector-borne disease with a major impact on public health, and its transmission is influenced by entomological, sociocultural and economic factors. Additionally, climate variability plays an important role in the transmission dynamics. A large scientific consensus has indicated that the strong association between climatic variables and disease could be used to develop models to explain the incidence of the disease. The objective was to develop a model that provides a better understanding of dengue transmission dynamics in Medellin and predicts increases in the incidence of the disease. The incidence of dengue fever was used as the dependent variable, and weekly climatic factors (maximum, mean and minimum temperature, relative humidity and precipitation) as independent variables. Expert Modeler was used to develop a model to better explain the behavior of the disease. Climatic variables with a significant association to the dependent variable were selected through ARIMA models. The model explains 34% of the observed variability. Precipitation was the climatic variable showing a statistically significant association with the incidence of dengue fever, but with a 20-week delay. In Medellin, the transmission of dengue fever was influenced by climate variability, especially precipitation. The strong association between dengue fever and precipitation allowed the construction of a model to help understand dengue transmission dynamics. This information will be useful to develop appropriate and timely strategies for dengue control.
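
    One way to set up the kind of model described above is an ARIMA-type regression with precipitation entered as an exogenous regressor lagged by 20 weeks. The sketch below does this with statsmodels SARIMAX on synthetic weekly series; the model order and data are invented and are not the Medellin results.

```python
# Sketch of an ARIMA-type model with precipitation lagged 20 weeks as an
# exogenous regressor. Series are synthetic; the order (1,0,1) is illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
weeks = pd.date_range("2008-01-06", periods=300, freq="W")
precip = pd.Series(rng.gamma(2.0, 20.0, size=len(weeks)), index=weeks, name="precip")

# Fake incidence that responds to precipitation 20 weeks earlier.
cases = 10 + 0.05 * precip.shift(20) + rng.normal(0, 2, size=len(weeks))

df = pd.concat([cases.rename("cases"), precip.shift(20).rename("precip_lag20")], axis=1).dropna()
result = SARIMAX(df["cases"], exog=df[["precip_lag20"]], order=(1, 0, 1)).fit(disp=False)
print(result.params)
```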

  14. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Full Text Available Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using SVM as a fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
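
    A minimal sketch of the hybrid idea, assuming a binary-mask chromosome per candidate variable subset and cross-validated SVM accuracy as the GA fitness; the data, population size and operators are arbitrary choices for illustration, not those of the juice study.

```python
# GA/SVM hybrid for variable selection: each chromosome is a binary mask over
# the candidate variables; its fitness is the cross-validated accuracy of an
# SVM trained on the selected variables. Data and GA settings are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=15, n_informative=4, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]        # keep the best half
    children = []
    for _ in range(10):                                  # uniform crossover + bit-flip mutation
        a, b = parents[rng.integers(0, 10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)
        child ^= rng.random(X.shape[1]) < 0.05
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```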

  15. Demographic Variables and Selective, Sustained Attention and Planning through Cognitive Tasks among Healthy Adults

    OpenAIRE

    Afsaneh Zarghi; Zali; A; Tehranidost; M; Mohammad Reza Zarindast; Ashrafi; F; Doroodgar; Khodadadi

    2011-01-01

    Introduction: Cognitive tasks are considered to be applicable and appropriate in assessing cognitive domains. The purpose of our study is to determine whether a relationship exists between the variables of age, sex and education and selective, sustained attention and planning abilities, by means of computerized cognitive tasks among healthy adults. Methods: A cross-sectional study was implemented during 6 months from June to November, 2010 on 84 healthy adults (42 male and 42 female). The whole part...

  16. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    OpenAIRE

    Hofner, Benjamin; Mayr, Andreas; Schmid, Matthias

    2014-01-01

    Generalized additive models for location, scale and shape are a flexible class of regression models that allow modeling multiple parameters of a distribution function, such as the mean and the standard deviation, simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we...

  17. Development of a time-variable nuclear pulser for half life measurements

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Domienikan, Claudio; Carvalhaes, Roberto P. M.; Genezini, Frederico A.

    2013-01-01

    In this work a time-variable pulser system with an exponentially decaying pulse frequency is presented, which was developed using the low-cost, open-source Arduino microcontroller platform. In this system, the microcontroller produces a TTL signal at the selected rate and a pulse shaper board adjusts it so that it can be fed into an amplifier as a conventional pulser signal; both the decay constant and the initial pulse rate can be adjusted using a user-friendly control software, and the pulse amplitude can be adjusted using a potentiometer on the pulse shaper board. The pulser was tested using several combinations of initial pulse rate and decay constant, and the results show that the system is stable and reliable, and is suitable to be used in half-life measurements.
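
    The timing logic can be illustrated independently of the Arduino hardware: for a rate r(t) = r0*exp(-lam*t), the k-th pulse should fire when the expected pulse count reaches k. The Python sketch below computes that schedule; r0, lam and the duration are arbitrary example values.

```python
# Pulse schedule for an exponentially decaying rate r(t) = r0 * exp(-lam * t).
# The real device runs this logic on an Arduino; here only the firing times
# are computed. r0, lam and duration are arbitrary example values.
import numpy as np

def pulse_times(r0, lam, duration):
    """Times (s) at which pulses fire so the instantaneous rate is r0*exp(-lam*t)."""
    # Expected number of pulses up to time t: N(t) = (r0 / lam) * (1 - exp(-lam * t)).
    n_total = int((r0 / lam) * (1 - np.exp(-lam * duration)))
    k = np.arange(1, n_total + 1)
    return -np.log(1.0 - k * lam / r0) / lam   # invert N(t_k) = k

times = pulse_times(r0=1000.0, lam=0.01, duration=300.0)   # 1 kHz start, "half-life" of about 69 s
print(f"{len(times)} pulses; first interval {times[0] * 1e3:.3f} ms, "
      f"last interval {(times[-1] - times[-2]) * 1e3:.3f} ms")
```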

  18. Development of a time-variable nuclear pulser for half life measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Domienikan, Claudio; Carvalhaes, Roberto P. M.; Genezini, Frederico A. [Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP. P.O. Box 11049, Sao Paulo, 05422-970 (Brazil)

    2013-05-06

    In this work a time-variable pulser system with an exponentially decaying pulse frequency is presented, which was developed using the low-cost, open-source Arduino microcontroller platform. In this system, the microcontroller produces a TTL signal at the selected rate and a pulse shaper board adjusts it so that it can be fed into an amplifier as a conventional pulser signal; both the decay constant and the initial pulse rate can be adjusted using a user-friendly control software, and the pulse amplitude can be adjusted using a potentiometer on the pulse shaper board. The pulser was tested using several combinations of initial pulse rate and decay constant, and the results show that the system is stable and reliable, and is suitable to be used in half-life measurements.

  19. Data re-arranging techniques leading to proper variable selections in high energy physics

    Science.gov (United States)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called ’data re-arranging’ and it enables variable selection performed by means of the classical statistical homogeneity tests such as Kolmogorov-Smirnov, Anderson-Darling, or Pearson’s chi-square divergence test. P-values of our variants of homogeneity tests are investigated and the empirical verification through 46-dimensional high energy particle physics data sets is accomplished under the newly proposed (equiprobable) quantile binning. In particular, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the Tevatron particle accelerator at Fermilab in the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.
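
    A sketch of the binned homogeneity test idea: a candidate variable is binned into equiprobable quantile bins defined on the pooled sample, and a Pearson chi-square test compares the two samples bin by bin. The samples below are synthetic and unweighted; the weighted Monte Carlo versus data case from the paper is not reproduced.

```python
# Homogeneity test of one candidate variable under equiprobable quantile binning.
# Synthetic, unweighted samples stand in for the weighted Monte Carlo / data case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mc = rng.normal(loc=0.0, scale=1.0, size=5000)     # stands in for the simulation
data = rng.normal(loc=0.1, scale=1.0, size=4000)   # stands in for the measurement

# Equiprobable quantile bins computed from the pooled sample.
n_bins = 10
edges = np.quantile(np.concatenate([mc, data]), np.linspace(0, 1, n_bins + 1))
edges[0], edges[-1] = -np.inf, np.inf

counts_mc, _ = np.histogram(mc, bins=edges)
counts_data, _ = np.histogram(data, bins=edges)

chi2, p, dof, _ = stats.chi2_contingency([counts_mc, counts_data])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p-value = {p:.3g}")
# A small p-value flags the variable as discriminating between the two samples.
```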

  20. Improving breast cancer classification with mammography, supported on an appropriate variable selection analysis

    Science.gov (United States)

    Pérez, Noel; Guevara, Miguel A.; Silva, Augusto

    2013-02-01

    This work addresses the issue of variable selection within the context of breast cancer classification with mammography. A comprehensive repository of feature vectors was used, including a hybrid subset gathering image-based and clinical features. It aimed to gather experimental evidence on variable selection in terms of cardinality and type, and to find a classification scheme that provides the best performance over the Area Under Receiver Operating Characteristics Curve (AUC) scores using the ranked features subset. We evaluated and classified a total of 300 subsets of features formed by the application of Chi-Square Discretization, Information-Gain, One-Rule and RELIEF methods in association with Feed-Forward Backpropagation Neural Network (FFBP), Support Vector Machine (SVM) and Decision Tree J48 (DTJ48) Machine Learning Algorithms (MLA) for a comparative performance evaluation based on AUC scores. A variable selection analysis was performed for Single-View Ranking and Multi-View Ranking groups of features. Feature subsets representing Microcalcifications (MCs), Masses, and both MCs and Masses lesions achieved AUC scores of 0.91, 0.954 and 0.934, respectively. Experimental evidence demonstrated that classification performance was improved by combining image-based and clinical features. The most important clinical and image-based features were StromaDistortion and Circularity, respectively. Other features, less important but worth using due to their consistency, were Contrast, Perimeter, Microcalcification, Correlation and Elongation.
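
    The general rank-then-classify workflow can be sketched as follows, with the ANOVA F-score standing in for the Chi-Square Discretization, Information-Gain, One-Rule and RELIEF rankers and synthetic data standing in for the mammography features; only the shape of the comparison (top-k subsets, several classifiers, cross-validated AUC) mirrors the paper.

```python
# Rank features, keep the top-k, and compare classifiers by cross-validated AUC.
# The ANOVA F-score and synthetic data are stand-ins for the rankers and
# mammography features used in the paper.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=30, n_informative=6, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "DecisionTree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "FFBP-NN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}

for k in (5, 10, 30):
    X_k = SelectKBest(f_classif, k=k).fit_transform(X, y)
    for name, clf in classifiers.items():
        auc = cross_val_score(clf, X_k, y, cv=5, scoring="roc_auc").mean()
        print(f"top-{k:2d} features, {name:12s} AUC = {auc:.3f}")
```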

  1. Variable selection based near infrared spectroscopy quantitative and qualitative analysis on wheat wet gluten

    Science.gov (United States)

    Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua

    2017-10-01

    Wet gluten is a useful quality indicator for wheat, and short wave near infrared spectroscopy (NIRS) is a high performance technique with the advantages of being economical, rapid and nondestructive. To study the feasibility of short wave NIRS analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization is the optimal pretreatment method, 17 wet gluten sensitive variables are selected by GA, and the GA model performs better than the all-variable model, with R2V=0.88 and RMSEV=1.47. For qualitative analysis, automatic weighted least squares baseline is the optimal pretreatment method, and the all-variable models perform better than the GA models. The correct classification rates of the three classes of 30% wet gluten content are 95.45%, 84.52% and 90.00%, respectively. The short wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
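
    A minimal sketch of the quantitative step, assuming the wavelength subset has already been chosen: a PLS regression model is fitted on the selected variables and judged by R2 and RMSE on a held-out split. The spectra, the informative wavelengths and the 'selected' indices are synthetic placeholders, not the GA output of the paper.

```python
# PLS regression on a subset of "selected" spectral variables, evaluated by
# R2 and RMSE on a validation split. All data and indices are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_samples, n_wavelengths = 54, 200
X = rng.normal(size=(n_samples, n_wavelengths))
true_coefs = np.zeros(n_wavelengths)
true_coefs[[10, 40, 90, 150]] = [1.5, -2.0, 1.0, 0.8]        # a few informative wavelengths
y = X @ true_coefs + rng.normal(scale=0.5, size=n_samples)    # stands in for wet gluten content

selected = [10, 40, 90, 150, 11, 41]   # placeholder for the GA-selected variables
X_train, X_val, y_train, y_val = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3).fit(X_train, y_train)
y_pred = pls.predict(X_val).ravel()
rmsev = float(np.sqrt(mean_squared_error(y_val, y_pred)))
print(f"R2V = {r2_score(y_val, y_pred):.2f}, RMSEV = {rmsev:.2f}")
```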

  2. Effects of implementing time-variable postgraduate training programmes on the organization of teaching hospital departments.

    Science.gov (United States)

    van Rossum, Tiuri R; Scheele, Fedde; Sluiter, Henk E; Paternotte, Emma; Heyligers, Ide C

    2018-01-31

    As competency-based education has gained currency in postgraduate medical education, it is acknowledged that trainees, having individual learning curves, acquire the desired competencies at different paces. To accommodate their different learning needs, time-variable curricula have been introduced making training no longer time-bound. This paradigm has many consequences and will, predictably, impact the organization of teaching hospitals. The purpose of this study was to determine the effects of time-variable postgraduate education on the organization of teaching hospital departments. We undertook exploratory case studies into the effects of time-variable training on teaching departments' organization. We held semi-structured interviews with clinical teachers and managers from various hospital departments. The analysis yielded six effects: (1) time-variable training requires flexible and individual planning, (2) learners must be active and engaged, (3) accelerated learning sometimes comes at the expense of clinical expertise, (4) fast-track training for gifted learners jeopardizes the continuity of care, (5) time-variable training demands more of supervisors, and hence, they need protected time for supervision, and (6) hospital boards should support time-variable training. Implementing time-variable education affects various levels within healthcare organizations, including stakeholders not directly involved in medical education. These effects must be considered when implementing time-variable curricula.

  3. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Zivkovic, Lidija; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet triggers are designed to identify heavy-flavour content in real-time and provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets, while retaining a high efficiency on selecting b-jets and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. We present an overview of the ATLAS strategy for online b-jet selection for the LHC Run 2, including the use of novel methods and sophisticated algorithms designed to face the above mentioned challenges. A firs...

  4. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Živković, Lidija; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet triggers are designed to identify heavy-flavour content in real-time and provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets, while retaining a high efficiency on selecting b-jets and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as tracks and their corresponding vertices must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. We present an overview of the ATLAS strategy for online b-jet selection for the LHC Run 2, including the use of novel methods and sophisticated algorithms designed to face the above mentioned challenges. A firs...

  5. Discrete Analysis of Portfolio Selection with Optimal Stopping Time

    Directory of Open Access Journals (Sweden)

    Jianfeng Liang

    2009-01-01

    Full Text Available Most investments in practice are carried out without a predetermined horizon. There are many factors that can drive an investment to a stop. In this paper, we consider a portfolio selection policy with a market-related stopping time. In particular, we assume that the investor exits the market once his wealth reaches a given investment target or falls below a bankruptcy threshold. Our objective is to minimize the expected time until the investment target is reached, while guaranteeing that the probability of bankruptcy is no larger than a given level. We formulate the problem as a mixed integer linear programming model and analyze the model using a numerical example.

  6. Evaluation of Online Log Variables That Estimate Learners' Time Management in a Korean Online Learning Context

    Science.gov (United States)

    Jo, Il-Hyun; Park, Yeonjeong; Yoon, Meehyun; Sung, Hanall

    2016-01-01

    The purpose of this study was to identify the relationship between the psychological variables and online behavioral patterns of students, collected through a learning management system (LMS). As the psychological variable, time and study environment management (TSEM), one of the sub-constructs of MSLQ, was chosen to verify a set of time-related…

  7. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J. [Astronomy Department, University of California, Berkeley, CA 94720-7450 (United States); Brink, Henrik [Dark Cosmology Centre, Juliane Maries Vej 30, 2100 Copenhagen O (Denmark); Long, James P.; Rice, John, E-mail: jwrichar@stat.berkeley.edu [Statistics Department, University of California, Berkeley, CA 94720-7450 (United States)

    2012-01-10

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL, where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up, is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
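
    A minimal sketch of the active-learning loop with uncertainty sampling, assuming the oracle labels are simply read from the synthetic ground truth: train on a small initial set, query the objects the classifier is least confident about, add their labels and retrain. A random forest on synthetic features stands in for the light-curve classifier; the importance-weighting and co-training alternatives are not shown.

```python
# Active learning by uncertainty sampling: repeatedly query labels for the
# least-confident objects and retrain. Random forest and synthetic data stand
# in for the variable-star classifier; the "oracle" is the known ground truth.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=2000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

labeled = list(rng.choice(len(X), size=50, replace=False))      # small initial training set
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

for iteration in range(5):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[labeled], y[labeled])

    # Query the 20 objects with the lowest maximum class probability,
    # i.e. those whose manual follow-up would most improve the model.
    confidence = clf.predict_proba(X[unlabeled]).max(axis=1)
    queried = [unlabeled[i] for i in np.argsort(confidence)[:20]]

    labeled.extend(queried)
    unlabeled = [i for i in unlabeled if i not in set(queried)]
    print(f"iteration {iteration}: {len(labeled)} labels, "
          f"accuracy on the rest = {clf.score(X[unlabeled], y[unlabeled]):.3f}")
```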

  8. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J.; Brink, Henrik; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  9. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  10. Dynamic variable selection in SNP genotype autocalling from APEX microarray data

    Directory of Open Access Journals (Sweden)

    Zamar Ruben H

    2006-11-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide – adenine (A), thymine (T), cytosine (C) or guanine (G) – is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Results Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. Conclusion The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. Our
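
    The calling step can be sketched as follows, assuming synthetic two-channel signal features in place of the real APEX probe intensities: an LDA classifier returns a posterior probability over the candidate genotypes, and any sample whose best posterior falls below a chosen threshold is left as a no-call, which is the call-rate versus concordance trade-off described above.

```python
# Posterior-probability genotype calling with a no-call threshold. Synthetic
# two-channel signals stand in for the multi-probe APEX intensities.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
centers = {"AA": (5.0, 0.5), "AB": (3.0, 3.0), "BB": (0.5, 5.0)}   # (allele-A signal, allele-B signal)
X = np.vstack([rng.normal(c, 0.8, size=(100, 2)) for c in centers.values()])
y = np.repeat(list(centers), 100)

lda = LinearDiscriminantAnalysis().fit(X, y)

def call_genotypes(signals, threshold=0.95):
    """Call the genotype with the highest posterior, or 'no-call' below the threshold."""
    posterior = lda.predict_proba(signals)
    best = posterior.argmax(axis=1)
    return np.where(posterior.max(axis=1) >= threshold, lda.classes_[best], "no-call")

test = rng.normal((3.0, 3.0), 0.8, size=(10, 2))   # ten new samples near the AB cluster
print(call_genotypes(test))
```

    Raising the threshold lowers the call rate but tends to raise the concordance of the calls that remain, which is the behaviour reported in the record above.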

  11. Regional regression models of percentile flows for the contiguous United States: Expert versus data-driven independent variable selection

    Directory of Open Access Journals (Sweden)

    Geoffrey Fouad

    2018-06-01

    New hydrological insights for the region: A set of three variables selected based on an expert assessment of factors that influence percentile flows performed similarly to larger sets of variables selected using a data-driven method. Expert assessment variables included mean annual precipitation, potential evapotranspiration, and baseflow index. Larger sets of up to 37 variables contributed little, if any, additional predictive information. Variables used to describe the distribution of basin data (e.g. standard deviation) were not useful, and average values were sufficient to characterize physical and climatic basin conditions. Effectiveness of the expert assessment variables may be due to the high degree of multicollinearity (i.e. cross-correlation) among additional variables. A tool is provided in the Supplementary material to predict percentile flows based on the three expert assessment variables. Future work should develop new variables with a strong understanding of the processes related to percentile flows.

  12. Beyond space and time: advanced selection for seismological data

    Science.gov (United States)

    Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Casey, R. E.; Weertman, B.; Sharer, G.; Benson, R. B.

    2017-12-01

    Separating the available raw data from that useful for any given study is often a tedious step in a research project, particularly for first-order data quality problems such as broken sensors, incorrect response information, and non-continuous time series. With the ever increasing amounts of data available to researchers, this chore becomes more and more time consuming. To assist users in this pre-processing of data, the IRIS Data Management Center (DMC) has created a system called Research Ready Data Sets (RRDS). The RRDS system allows researchers to apply filters that constrain their data request using criteria related to signal quality, response correctness, and high resolution data availability. In addition to the traditional selection methods of stations at a geographic location for given time spans, RRDS will provide enhanced criteria for data selection based on many of the measurements available in the DMC's MUSTANG quality control system. This means that data may be selected based on background noise (tolerance relative to high and low noise Earth models), signal-to-noise ratio for earthquake arrivals, signal RMS, instrument response corrected signal correlation with Earth tides, time tear (gaps/overlaps) counts, timing quality (when reported in the raw data by the datalogger) and more. The new RRDS system is available as a web service designed to operate as a request filter. A request is submitted containing the traditional station and time constraints as well as data quality constraints. The request is then filtered and a report is returned that indicates 1) the request that would subsequently be submitted to a data access service, 2) a record of the quality criteria specified and 3) a record of the data rejected based on those criteria, including the relevant values. This service can be used to either filter a request prior to requesting the actual data or to explore which data match a set of enhanced criteria without downloading the data. We are

  13. Cholinergic enhancement reduces functional connectivity and BOLD variability in visual extrastriate cortex during selective attention.

    Science.gov (United States)

    Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L

    2013-01-01

    Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task specific brain regions. We have hypothesized that the changes in neural activity observed during increased cholinergic function reflect an increase in neural efficiency that leads to improved task performance. The current study tested this hypothesis by assessing neural efficiency based on cholinergically-mediated effects on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover fMRI study. Following an infusion of physostigmine (1 mg/h) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Physostigmine administration also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus

  14. Selection of controlled variables in bioprocesses. Application to a SHARON-Anammox process for autotrophic nitrogen removal

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Valverde Perez, Borja; Sin, Gürkan

    Selecting the right controlled variables in a bioprocess is challenging since the objectives of the process (yields, product or substrate concentration) are difficult to relate to a given actuator. We apply here process control tools that can be used to assist in the selection of controlled variables to the case of the SHARON-Anammox process for autotrophic nitrogen removal...

  15. Cortical Response Variability as a Developmental Index of Selective Auditory Attention

    Science.gov (United States)

    Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina

    2014-01-01

    Attention induces synchronicity in neuronal firing for the encoding of a given stimulus at the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory…

  16. REAL-TIME FLAVOUR TAGGING SELECTION IN ATLAS

    CERN Document Server

    Bokan, Petar; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment includes a well-developed trigger system that allows a selection of events which are thought to be of interest, while achieving a high overall rejection against less interesting processes. An important part of the online event selection is the ability to distinguish between jets arising from heavy-flavour quarks (b- and c-jets) and light jets (jets from u-, d- and s-quarks and from gluons) in real time. This is essential for many physics analyses that include processes with large jet multiplicity and b-quarks in the final state. Many changes were implemented in the ATLAS online b-jet selection for Run-2 of the LHC. An overview of the b-jet trigger strategy and performance during 2015 data taking is presented. The ability to use complex offline Multivariate (MV2) b-tagging algorithms directly at the High Level Trigger (HLT) was tested in this period. Details on online tagging algorithms are given together with the plans on how to adapt to the new high-luminosity and increased pileup conditions by ex...

  17. Between-centre variability versus variability over time in DXA whole body measurements evaluated using a whole body phantom

    Energy Technology Data Exchange (ETDEWEB)

    Louis, Olivia [Department of Radiology, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)]. E-mail: olivia.louis@az.vub.ac.be; Verlinde, Siska [Belgian Study Group for Pediatric Endocrinology (Belgium); Thomas, Muriel [Belgian Study Group for Pediatric Endocrinology (Belgium); De Schepper, Jean [Department of Pediatrics, AZ-VUB, Vrije Universiteit Brussel, Laarbeeklaan 101, 1090 Brussel (Belgium)

    2006-06-15

    This study aimed to compare the variability of whole body measurements, using dual energy X-ray absorptiometry (DXA), among geographically distinct centres versus that over time in a given centre. A Hologic-designed 28 kg modular whole body phantom was used, including high density polyethylene, gray polyvinylchloride and aluminium. It was scanned on seven Hologic QDR 4500 DXA devices, located in seven centres and was also repeatedly (n = 18) scanned in the reference centre, over a time span of 5 months. The mean between-centre coefficient of variation (CV) ranged from 2.0 (lean mass) to 5.6% (fat mass) while the mean within-centre CV ranged from 0.3 (total mass) to 4.7% (total area). Between-centre variability compared well with within-centre variability for total area, bone mineral content and bone mineral density, but was significantly higher for fat (p < 0.001), lean (p < 0.005) and total mass (p < 0.001). Our results suggest that, even when using the same device, the between-centre variability remains a matter of concern, particularly where body composition is concerned.
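
    The comparison itself is a simple coefficient-of-variation calculation, sketched below with invented fat mass values (one scan per centre versus repeated scans in one centre); the numbers are not the reported phantom results.

```python
# Between-centre versus within-centre coefficient of variation (CV) for a
# phantom measurement. The values below are invented for illustration.
import numpy as np

def cv_percent(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

fat_mass_between = [8.9, 9.3, 8.6, 9.6, 9.1, 8.8, 9.4]        # one scan per centre (kg)
fat_mass_within = [9.0, 9.1, 9.0, 8.9, 9.1, 9.0, 9.0, 9.1]    # repeated scans at one centre (kg)

print(f"between-centre CV: {cv_percent(fat_mass_between):.1f}%")
print(f"within-centre CV:  {cv_percent(fat_mass_within):.1f}%")
```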

  18. Selection of teaching content in times of changing

    DEFF Research Database (Denmark)

    Petersen, Benedikte Vilslev

    The educational system in Denmark is currently affected by changes, which can be generally characterized as a development going from input-control to output-control. Increasing research in classroom focuses on conditions for effective teaching, pupil learning outcome and classroom management techniques (e.g. Hattie 2013, Grøterud and Nielsen 1997, Nordenbo 2008, Meyer 2006, Hermansen 2007). However... in the time between the tests? Based on the overall development towards output-control in the school system, this project focuses on investigating whether content choices today have changed in general: What criteria are underlying the teacher's choice of teaching content? Methodically the study will work...

  19. Embryo selection: the role of time-lapse monitoring.

    Science.gov (United States)

    Kovacs, Peter

    2014-12-15

    In vitro fertilization has been available for over 3 decades. Its use is becoming more widespread worldwide, and in the developed world, up to 5% of children have been born following IVF. It is estimated that over 5 million children have been conceived in vitro. In addition to giving hope to infertile couples to have their own family, in vitro fertilization has also introduced risks as well. The risk of multiple gestation and the associated maternal and neonatal morbidity/mortality has increased significantly over the past few decades. While stricter transfer policies have eliminated the majority of the high-order multiples, these changes have not yet had much of an impact on the incidence of twins. A twin pregnancy can be avoided by the transfer of a single embryo only. However, the traditionally used method of morphologic embryo selection is not predictive enough to allow routine single embryo transfer; therefore, new screening tools are needed. Time-lapse embryo monitoring allows continuous, non-invasive embryo observation without the need to remove the embryo from optimal culturing conditions. The extra information on the cleavage pattern, morphologic changes and embryo development dynamics could help us identify embryos with a higher implantation potential. These technologic improvements enable us to objectively select the embryo(s) for transfer based on certain algorithms. In the past 5-6 years, numerous studies have been published that confirmed the safety of time-lapse technology. In addition, various markers have already been identified that are associated with the minimal likelihood of implantation and others that are predictive of blastocyst development, implantation potential, genetic health and pregnancy. Various groups have proposed different algorithms for embryo selection based on mostly retrospective data analysis. However, large prospective trials are needed to study the full benefit of these (and potentially new) algorithms before their

  20. Norepinephrine genes predict response time variability and methylphenidate-induced changes in neuropsychological function in attention deficit hyperactivity disorder.

    Science.gov (United States)

    Kim, Bung-Nyun; Kim, Jae-Won; Cummins, Tarrant D R; Bellgrove, Mark A; Hawi, Ziarih; Hong, Soon-Beom; Yang, Young-Hui; Kim, Hyo-Jin; Shin, Min-Sup; Cho, Soo-Churl; Kim, Ji-Hoon; Son, Jung-Woo; Shin, Yun-Mi; Chung, Un-Sun; Han, Doug-Hyun

    2013-06-01

    Noradrenergic dysfunction may be associated with cognitive impairments in attention-deficit/hyperactivity disorder (ADHD), including increased response time variability, which has been proposed as a leading endophenotype for ADHD. The aim of this study was to examine the relationship between polymorphisms in the α-2A-adrenergic receptor (ADRA2A) and norepinephrine transporter (SLC6A2) genes and attentional performance in ADHD children before and after pharmacological treatment. One hundred one medication-naive ADHD children were included. All subjects were administered methylphenidate (MPH)-OROS for 12 weeks. The subjects underwent a computerized comprehensive attention test to measure the response time variability at baseline before MPH treatment and after 12 weeks. Additive regression analyses controlling for ADHD symptom severity, age, sex, IQ, and final dose of MPH examined the association between response time variability on the comprehensive attention test measures and allelic variations in single-nucleotide polymorphisms of the ADRA2A and SLC6A2 before and after MPH treatment. Increasing possession of an A allele at the G1287A polymorphism of SLC6A2 was significantly related to heightened response time variability at baseline in the sustained (P = 2.0 × 10) and auditory selective attention (P = 1.0 × 10) tasks. Response time variability at baseline increased additively with possession of the T allele at the DraI polymorphism of the ADRA2A gene in the auditory selective attention task (P = 2.0 × 10). After medication, increasing possession of a G allele at the MspI polymorphism of the ADRA2A gene was associated with increased MPH-related change in response time variability in the flanker task (P = 1.0 × 10). Our study suggested an association between norepinephrine gene variants and response time variability measured at baseline and after MPH treatment in children with ADHD. Our results add to a growing body of evidence, suggesting that response time

  1. Online Synthesis for Operation Execution Time Variability on Digital Microfluidic Biochips

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2014-01-01

    have assumed that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets. In this paper we propose...... an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, obtaining thus shorter application execution times. The proposed strategy has been evaluated using several benchmarks....

  2. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control bypasses, and mass flow rate

    Science.gov (United States)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  3. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    Directory of Open Access Journals (Sweden)

    Benjamin Hofner

    2016-10-01

    Full Text Available Generalized additive models for location, scale and shape are a flexible class of regression models that allow modeling multiple parameters of a distribution function, such as the mean and the standard deviation, simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we use a data set on stunted growth in India. In addition to the specification and application of the model itself, we present a variety of convenience functions, including methods for tuning parameter selection, prediction and visualization of results. The package gamboostLSS is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=gamboostLSS.

  4. Extreme precipitation variability, forage quality and large herbivore diet selection in arid environments

    Science.gov (United States)

    Cain, James W.; Gedir, Jay V.; Marshal, Jason P.; Krausman, Paul R.; Allen, Jamison D.; Duff, Glenn C.; Jansen, Brian; Morgart, John R.

    2017-01-01

    Nutritional ecology forms the interface between environmental variability and large herbivore behaviour, life history characteristics, and population dynamics. Forage conditions in arid and semi-arid regions are driven by unpredictable spatial and temporal patterns in rainfall. Diet selection by herbivores should be directed towards overcoming the most pressing nutritional limitation (i.e. energy, protein [nitrogen, N], moisture) within the constraints imposed by temporal and spatial variability in forage conditions. We investigated the influence of precipitation-induced shifts in forage nutritional quality and subsequent large herbivore responses across widely varying precipitation conditions in an arid environment. Specifically, we assessed seasonal changes in diet breadth and forage selection of adult female desert bighorn sheep Ovis canadensis mexicana in relation to potential nutritional limitations in forage N, moisture and energy content (as proxied by dry matter digestibility, DMD). Succulents were consistently high in moisture but low in N and grasses were low in N and moisture until the wet period. Nitrogen and moisture content of shrubs and forbs varied among seasons and climatic periods, whereas trees had consistently high N and moderate moisture levels. Shrubs, trees and succulents composed most of the seasonal sheep diets but had little variation in DMD. Across all seasons during drought and during summer with average precipitation, forages selected by sheep were higher in N and moisture than that of available forage. Differences in DMD between sheep diets and available forage were minor. Diet breadth was lowest during drought and increased with precipitation, reflecting a reliance on few key forage species during drought. Overall, forage selection was more strongly associated with N and moisture content than energy content. Our study demonstrates that unlike north-temperate ungulates which are generally reported to be energy-limited, N and moisture

  5. Attachment and selective attention: disorganization and emotional Stroop reaction time.

    Science.gov (United States)

    Atkinson, Leslie; Leung, Eman; Goldberg, Susan; Benoit, Diane; Poulton, Lori; Myhal, Natalie; Blokland, Kirsten; Kerr, Sheila

    2009-01-01

    Although central to attachment theory, internal working models remain a useful heuristic in need of concretization. We compared the selective attention of organized and disorganized mothers using the emotional Stroop task. Both disorganized attachment and emotional Stroop response involve the coordination of strongly conflicting motivations under conditions of emotional arousal. Furthermore, much is known about the cognitive and neuromodulatory correlates of the Stroop that may inform attempts to substantiate the internal working model construct. We assessed 47 community mothers with the Adult Attachment Interview and the Working Model of the Child Interview in the third trimester of pregnancy. At 6 and 12 months postpartum, we assessed mothers with emotional Stroop tasks involving neutral, attachment, and emotion conditions. At 12 months, we observed their infants in the Strange Situation. Results showed that: disorganized attachment is related to relative Stroop reaction time, that is, unlike organized mothers, disorganized mothers respond to negative attachment/emotion stimuli more slowly than to neutral stimuli; relative speed of response is positively related to number of times the dyad was classified disorganized, and change in relative Stroop response time from 6 to 12 months is related to the match-mismatch status of mother and infant attachment classifications. We discuss implications in terms of automatic and controlled processing and, more specifically, cognitive threat tags, parallel distributed processing, and neuromodulation through norepinephrine and dopamine.

  6. Time-gated energy-selected cold neutron radiography

    International Nuclear Information System (INIS)

    McDonald, T.E. Jr.; Brun, T.O.; Claytor, T.N.; Farnum, E.H.; Greene, G.L.; Morris, C.

    1998-01-01

    A technique is under development at the Los Alamos Neutron Science Center (LANSCE), Manuel Lujan Jr. Neutron Scattering Center (Lujan Center) for producing neutron radiography using only a narrow energy range of cold neutrons. The technique, referred to as Time-Gated Energy-Selected (TGES) neutron radiography, employs the pulsed neutron source at the Lujan Center with time of flight to obtain a neutron pulse having an energy distribution that is a function of the arrival time at the imager. The radiograph is formed on a short persistence scintillator and a gated, intensified, cooled CCD camera is employed to record the images, which are produced at the specific neutron energy range determined by the camera gate. The technique has been used to achieve a degree of material discrimination in radiographic images. For some materials, such as beryllium and carbon, at energies above the Bragg cutoff the neutron scattering cross section is relatively high while at energies below the Bragg cutoff the scattering cross section drops significantly. This difference in scattering characteristics can be recorded in the TGES radiography and, because the Bragg cutoff occurs at different energy levels for various materials, the approach can be used to differentiate among these materials. This paper outlines the TGES radiography technique and shows an example of radiography using the approach

  7. Selection of monitoring times to assess remediation performance

    Energy Technology Data Exchange (ETDEWEB)

    Kueper, B.H.; Mundle, K. [Queen's Univ., Kingston, ON (Canada). Dept. of Civil Engineering, Geoengineering Centre]

    2007-07-01

    Several factors determine the time needed for a plume to respond to non-aqueous phase liquid (NAPL) source zone remediation. Most spills of NAPLs (fuels, chlorinated solvents, PCB oils, creosote and coal tar) require mass removal in order to implement remediation technologies such as chemical oxidation, thermal treatments, alcohol flushing, surfactant flushing and hydraulic displacement. While much attention has been given to the development of these remediation technologies, little attention has been given to the response of the plume downstream of the treatment zone and selection of an appropriate monitoring time scale to adequately evaluate the impacts of remediation. For that reason, this study focused on the prevalence of diffusive sinks, the mobility of the contaminant and the hydraulic conductivity of subsurface materials. Typically, plumes in subsurface environments dominated by diffusive sinks or low permeability materials need long periods of time to detach after source removal. This paper presented generic plume response model simulations that illustrated concentration rebound following the use of in-situ chemical oxidation in fractured clay containing trichloroethylene. It was determined that approximately 2 years are needed to reach peak rebound concentration after cessation of remedial action. It was concluded that downgradient monitoring well concentrations may be greatly reduced during remedial action because the oxidant occupies the fracture and diffuses into the clay matrix, creating a short period of contaminant reduction in the area of flowing groundwater. 9 refs., 2 tabs., 7 figs.

  8. Selection of monitoring times to assess remediation performance

    International Nuclear Information System (INIS)

    Kueper, B.H.; Mundle, K.

    2007-01-01

    Several factors determine the time needed for a plume to respond to non-aqueous phase liquid (NAPL) source zone remediation. Most spills of NAPLs (fuels, chlorinated solvents, PCB oils, creosote and coal tar) require mass removal in order to implement remediation technologies such as chemical oxidation, thermal treatments, alcohol flushing, surfactant flushing and hydraulic displacement. While much attention has been given to the development of these remediation technologies, little attention has been given to the response of the plume downstream of the treatment zone and selection of an appropriate monitoring time scale to adequately evaluate the impacts of remediation. For that reason, this study focused on the prevalence of diffusive sinks, the mobility of the contaminant and the hydraulic conductivity of subsurface materials. Typically, plumes in subsurface environments dominated by diffusive sinks or low permeability materials need long periods of time to detach after source removal. This paper presented generic plume response model simulations that illustrated concentration rebound following the use of in-situ chemical oxidation in fractured clay containing trichloroethylene. It was determined that approximately 2 years are needed to reach peak rebound concentration after cessation of remedial action. It was concluded that downgradient monitoring well concentrations may be greatly reduced during remedial action because the oxidant occupies the fracture and diffuses into the clay matrix, creating a short period of contaminant reduction in the area of flowing groundwater. 9 refs., 2 tabs., 7 figs.

  9. Disrupting gatekeeping practices: Journalists’ source selection in times of crisis

    Science.gov (United States)

    van der Meer, Toni G.L.A.; Verhoeven, Piet; Beentjes, Johannes W.J.; Vliegenthart, Rens

    2016-01-01

    As gatekeepers, journalists have the power to select the sources that get a voice in crisis coverage. The aim of this study is to find out how journalists select sources during a crisis. In a survey, journalists were asked how they assess the following sources during an organizational crisis: news agencies, an organization undergoing a crisis, and the general public. The sample consisted of 214 Dutch experienced journalists who at least once covered a crisis. Using structural equation modeling, sources’ likelihood of being included in the news was predicted using five source characteristics: credibility, knowledge, willingness, timeliness, and the relationship with the journalist. Findings indicated that during a crisis, news agencies are most likely to be included in the news, followed by the public, and finally the organization. The significance of the five source characteristics is dependent on source type. For example, to be used in the news, news agencies and organizations should be mainly evaluated as knowledgeable, whereas information from the public should be both credible and timely. In addition, organizations should not be seen as too willing or too eager to communicate. The findings imply that, during a crisis, journalists remain critical gatekeepers; however, they rely mainly on familiar sources. PMID:29278263

  10. Disrupting gatekeeping practices: Journalists' source selection in times of crisis.

    Science.gov (United States)

    van der Meer, Toni G L A; Verhoeven, Piet; Beentjes, Johannes W J; Vliegenthart, Rens

    2017-10-01

    As gatekeepers, journalists have the power to select the sources that get a voice in crisis coverage. The aim of this study is to find out how journalists select sources during a crisis. In a survey, journalists were asked how they assess the following sources during an organizational crisis: news agencies, an organization undergoing a crisis, and the general public. The sample consisted of 214 Dutch experienced journalists who at least once covered a crisis. Using structural equation modeling, sources' likelihood of being included in the news was predicted using five source characteristics: credibility, knowledge, willingness, timeliness, and the relationship with the journalist. Findings indicated that during a crisis, news agencies are most likely to be included in the news, followed by the public, and finally the organization. The significance of the five source characteristics is dependent on source type. For example, to be used in the news, news agencies and organizations should be mainly evaluated as knowledgeable, whereas information from the public should be both credible and timely. In addition, organizations should not be seen as too willing or too eager to communicate. The findings imply that, during a crisis, journalists remain critical gatekeepers; however, they rely mainly on familiar sources.

  11. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Varni, Carlo; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment includes a well-developed trigger system that allows a selection of events which are thought to be of interest, while achieving a high overall rejection against less interesting processes. An important part of the online event selection is the ability to distinguish between jets arising from heavy-flavour quarks (b- and c-jets) and light jets (from u-, d- and s-quarks and gluons) in real-time. This is essential for many physics analyses that include processes with large jet multiplicity and b-quarks in the final state. An overview of the b-jet triggers with a description of the application and performance of the offline Multivariate (MV2) b-tagging algorithms at the High Level Trigger (HLT) in Run 2 will be presented. During 2016, the b-jet trigger menu and algorithms were adapted to use the Fast Tracker (FTK) system, which will be commissioned in 2017. We will show the initial expected performance of the newly designed triggers and compare it with the existing HLT chains.

  12. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407355; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavour content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets from multijet processes, while retaining a high efficiency on selecting jets from beauty, and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as charged tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. The performance of b-jet triggers during the LHC Run 1 data-taking campaigns is presented, togethe...

  13. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    Bertella, Claudia; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavor content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets from QCD processes, while retaining a high efficiency for selecting jets from beauty and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as charged tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. The performance of b-jet triggers during the LHC Run 1 data-taking campaigns is presented, together wi...

  14. Real-time flavour tagging selection in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407355; The ATLAS collaboration

    2016-01-01

    In high-energy physics experiments, online selection is crucial to identify the few interesting collisions from the large data volume processed. In the overall ATLAS trigger strategy, b-jet triggers are designed to identify heavy-flavour content in real-time and, in particular, provide the only option to efficiently record events with fully hadronic final states containing b-jets. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets from multijet processes, while retaining a high efficiency on selecting jets from beauty, and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as charged tracks and their corresponding vertexes must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. The performance of b-jet triggers during the LHC Run 1 data-taking campaigns is presented, togethe...

  15. Assessing the Time Variability of Jupiter's Tropospheric Properties from 1996 to 2011

    Science.gov (United States)

    Orton, G. S.; Fletcher, L. N.; Yanamandra-Fisher, P. A.; Simon-Miller, A. A.; Greco, J.; Wakefield, L.

    2012-01-01

    We acquired and analyzed mid-infrared images of Jupiter's disk at selected wavelengths from NASA's Infrared Telescope Facility (IRTF) from 1996 to 2011, including a period of large-scale changes of cloud color and albedo. We derived the 100-300 mbar temperature structure, together with tracers of vertical motion: the thickness of a 600-mbar cloud layer, the 300-mbar abundance of the condensable gas NH3, and the 400-mbar para- vs. ortho-H2 ratio. The biggest visual change was detected in the normally dark South Equatorial Belt (SEB) that 'faded' to a light color in 2010, during which both cloud thickness and NH3 abundance rose; both returned to their pre-fade levels in 2011, as the SEB regained its normal dark color. The cloud thickness in Jupiter's North Temperate Belt (NTB) increased in 2002, coincident with its visible brightening, and its NH3 abundance spiked in 2002-2003. Jupiter's Equatorial Zone (EZ), a region marked by more subtle but widespread color and albedo change, showed high cloud thickness variability between 2007 and 2009. In Jupiter's North Equatorial Belt (NEB), the cloud thickened in 2005, then slowly decreased to a minimum value in 2010-2011. No temperature variations were associated with any of these changes, but we discovered temperature oscillations of approx. 2-4 K in all regions, with 4- or 8-year periods and phasing that was dissimilar in the different regions. There was also no detectable change in the para- vs. ortho-H2 ratio over time, leading to the possibility that it is driven from much deeper atmospheric levels and may be time-invariant. Our future work will continue to survey the variability of these properties through the Juno mission, which arrives at Jupiter in 2016, and to connect these observations with those made using raster-scanned images from 1980 to 1993 (Orton et al. 1996 Science 265, 625).

  16. Impact of perennial energy crops income variability on the crop selection of risk averse farmers

    International Nuclear Information System (INIS)

    Alexander, Peter; Moran, Dominic

    2013-01-01

    The UK Government policy is for the area of perennial energy crops in the UK to expand significantly. Farmers need to choose these crops in preference to conventional rotations for this to be achievable. This paper looks at the potential level and variability of perennial energy crop incomes and their relation to incomes from conventional arable crops. Assuming energy crop prices are correlated with oil prices, the results suggest that incomes from them are not well correlated with conventional arable crop incomes. A farm-scale mathematical programming model is then used to understand the effect on risk-averse farmers' crop selection. The inclusion of risk reduces the energy crop price required for the selection of these crops. However, yields towards the highest of those predicted in the UK are still required to make them an optimal choice, suggesting that only a small area of energy crops within the UK would be expected to be chosen to be grown. This must be regarded as a tentative conclusion, primarily due to the high sensitivity found to crop yields, resulting in the proposal for further work to apply the model using spatially disaggregated data. - Highlights: ► Energy crop and conventional crop incomes suggested as uncorrelated. ► Diversification effect of energy crops investigated for a risk-averse farmer. ► Energy crops indicated as optimal selection only on highest yielding UK sites. ► Large establishment grant rates to substantially alter crop selections.

  17. Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel

    Directory of Open Access Journals (Sweden)

    M. Rezaei

    2016-03-01

    Full Text Available In this paper, a cooperative algorithm to improve the orthogonal space-time-frequency block codes (OSTFBC) in frequency-selective channels for 2*1, 2*2, 4*1 and 4*2 MIMO-OFDM systems is presented. The algorithm is formed of three nodes (a source node, a relay node and a destination node) and is implemented in two stages. During the first stage, the destination and the relay antennas receive the symbols sent by the source antennas. The destination node and the relay node obtain the decision variables by applying the space-time-frequency decoding process to the received signals. During the second stage, the relay node transmits its decision variables to the destination node. Owing to the increased diversity in the proposed algorithm, the number of decision variables at the destination node is increased, improving system performance. The bit error rate of the proposed algorithm at high SNR is estimated by considering BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding improves system performance and reduces the BER in a frequency-selective channel.

  18. A variational conformational dynamics approach to the selection of collective variables in metadynamics

    Science.gov (United States)

    McCarty, James; Parrinello, Michele

    2017-11-01

    In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal, leading to slow convergence. However, by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
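
    The time-lagged independent component analysis step described above can be sketched in a few lines of linear algebra: estimate the instantaneous and time-lagged covariance matrices of the sampled collective-variable trajectory and solve the corresponding generalized eigenvalue problem, whose slowest eigenvectors are the improved collective variables. The sketch below uses a random placeholder trajectory and omits the metadynamics reweighting entirely.

```python
import numpy as np

def tica(X, lag):
    """Time-lagged independent component analysis on a (T, d) trajectory X.

    Returns eigenvalues and eigenvectors of the symmetrized time-lagged
    covariance matrix, solved as a generalized eigenvalue problem against
    the instantaneous covariance.
    """
    X = X - X.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)                     # instantaneous covariance
    Ct = X0.T @ Xt / len(X0)                     # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                       # symmetrize
    # Generalized eigenproblem Ct v = lambda C0 v, solved by whitening with C0
    evals0, evecs0 = np.linalg.eigh(C0)
    keep = evals0 > 1e-10
    W = evecs0[:, keep] / np.sqrt(evals0[keep])  # whitening transform
    evals, evecs = np.linalg.eigh(W.T @ Ct @ W)
    order = np.argsort(evals)[::-1]              # slowest modes first
    return evals[order], (W @ evecs)[:, order]

# Example with a random placeholder trajectory of 5 candidate collective variables
rng = np.random.default_rng(0)
traj = rng.standard_normal((10000, 5)).cumsum(axis=0)   # correlated toy data
lam, modes = tica(traj, lag=50)
print("leading TICA eigenvalues:", np.round(lam[:3], 3))
```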

  19. Age and Sex Differences in Intra-Individual Variability in a Simple Reaction Time Task

    Science.gov (United States)

    Ghisletta, Paolo; Renaud, Olivier; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2018-01-01

    While age effects in reaction time (RT) tasks across the lifespan are well established for level of performance, analogous findings have started appearing also for indicators of intra-individual variability (IIV). Children are not only slower, but also display more variability than younger adults in RT. Yet, little is known about potential…

  20. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method was used as an effective assessment tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.
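
    A hedged sketch of this kind of PLS workflow, using scikit-learn together with the standard variable-importance-in-projection (VIP) score as the selection criterion, is shown below. The descriptor matrix is random placeholder data, not the 620-descriptor set of the study, and VIP > 1 is only a common heuristic cutoff.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression model."""
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = W.shape[0]
    ss = (Q.ravel() ** 2) * np.einsum('ij,ij->j', T, T)   # y-variance explained per component
    wnorm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(1)
X = rng.standard_normal((65, 50))             # placeholder: 65 compounds x 50 descriptors
y = X[:, :3] @ [1.0, -0.5, 0.8] + 0.1 * rng.standard_normal(65)

pls = PLSRegression(n_components=3).fit(X, y)
vip = vip_scores(pls)
print("descriptors with VIP > 1:", np.where(vip > 1.0)[0])
```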

  1. Relation between sick leave and selected exposure variables among women semiconductor workers in Malaysia

    Science.gov (United States)

    Chee, H; Rampal, K

    2003-01-01

    Aims: To determine the relation between sick leave and selected exposure variables among women semiconductor workers. Methods: This was a cross sectional survey of production workers from 18 semiconductor factories. Those selected had to be women, direct production operators up to the level of line leader, and Malaysian citizens. Sick leave and exposure to physical and chemical hazards were determined by self reporting. Three sick leave variables were used; number of sick leave days taken in the past year was the variable of interest in logistic regression models where the effects of age, marital status, work task, work schedule, work section, and duration of work in factory and work section were also explored. Results: Marital status was strongly linked to the taking of sick leave. Age, work schedule, and duration of work in the factory were significant confounders only in certain cases. After adjusting for these confounders, chemical and physical exposures, with the exception of poor ventilation and smelling chemicals, showed no significant relation to the taking of sick leave within the past year. Work section was a good predictor for taking sick leave, as wafer polishing workers faced higher odds of taking sick leave for each of the three cut off points of seven days, three days, and not at all, while parts assembly workers also faced significantly higher odds of taking sick leave. Conclusion: In Malaysia, the wafer fabrication factories only carry out a limited portion of the work processes, in particular, wafer polishing and the processes immediately prior to and following it. This study, in showing higher illness rates for workers in wafer polishing compared to semiconductor assembly, has implications for the governmental policy of encouraging the setting up of wafer fabrication plants with the full range of work processes. PMID:12660374

  2. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Background: Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs as in a common genome-wide association study (GWAS), however, involves difficulty in setting the overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A larger number of SNP-SNP pairs than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results: We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the numerous SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a following variable selection procedure in the SIS method suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case–Control Consortium) data. Conclusions: Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
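
    The core of sure independence screening is simple enough to sketch directly: rank candidate predictors (here, SNP-SNP interaction terms) by the magnitude of their marginal association with the phenotype and keep only the top few for a subsequent joint model. The snippet below uses simulated 0/1/2 genotype codes and a handful of candidate pairs; it illustrates the screening idea only and is not the EPISIS implementation.

```python
import numpy as np

def sis_rank(X, y, keep):
    """Sure independence screening: rank the columns of X by absolute marginal
    correlation with y and return the indices of the `keep` top-ranked columns."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / len(y)      # |marginal correlation| per column
    return np.argsort(score)[::-1][:keep]

rng = np.random.default_rng(2)
n, p = 500, 5000                            # "large p, small n" placeholder scale
X = rng.integers(0, 3, size=(n, p)).astype(float)      # SNP-like 0/1/2 coding
y = 1.5 * X[:, 7] * X[:, 42] + rng.standard_normal(n)  # one true interaction

pairs = [(7, 42), (7, 99), (13, 42), (100, 200)]            # candidate SNP-SNP pairs
Z = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])  # interaction features
top = sis_rank(Z, y, keep=2)
print("top-ranked candidate pairs:", [pairs[k] for k in top])
```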

  3. Analysis of Modal Travel Time Variability Due to Mesoscale Ocean Structure

    National Research Council Canada - National Science Library

    Smith, Amy

    1997-01-01

    .... First, for an open ocean environment away from strong boundary currents, the effects of randomly phased linear baroclinic Rossby waves on acoustic travel time are shown to produce a variable overall...

  4. Time-dependence in relativistic collisionless shocks: theory of the variable

    Energy Technology Data Exchange (ETDEWEB)

    Spitkovsky, A

    2004-02-05

    We describe results from time-dependent numerical modeling of the collisionless reverse shock terminating the pulsar wind in the Crab Nebula. We treat the upstream relativistic wind as composed of ions and electron-positron plasma embedded in a toroidal magnetic field, flowing radially outward from the pulsar in a sector around the rotational equator. The relativistic cyclotron instability of the ion gyrational orbit downstream of the leading shock in the electron-positron pairs launches outward propagating magnetosonic waves. Because of the fresh supply of ions crossing the shock, this time-dependent process achieves a limit-cycle, in which the waves are launched with periodicity on the order of the ion Larmor time. Compressions in the magnetic field and pair density associated with these waves, as well as their propagation speed, semi-quantitatively reproduce the behavior of the wisp and ring features described in recent observations obtained using the Hubble Space Telescope and the Chandra X-Ray Observatory. By selecting the parameters of the ion orbits to fit the spatial separation of the wisps, we predict the period of time variability of the wisps that is consistent with the data. When coupled with a mechanism for non-thermal acceleration of the pairs, the compressions in the magnetic field and plasma density associated with the optical wisp structure naturally account for the location of X-ray features in the Crab. We also discuss the origin of the high energy ions and their acceleration in the equatorial current sheet of the pulsar wind.

  5. Dissociable effects of practice variability on learning motor and timing skills.

    Science.gov (United States)

    Caramiaux, Baptiste; Bevilacqua, Frédéric; Wanderley, Marcelo M; Palmer, Caroline

    2018-01-01

    Motor skill acquisition inherently depends on the way one practices the motor task. The amount of motor task variability during practice has been shown to foster transfer of the learned skill to other similar motor tasks. In addition, variability in a learning schedule, in which a task and its variations are interweaved during practice, has been shown to help the transfer of learning in motor skill acquisition. However, there is little evidence on how motor task variations and variability schedules during practice act on the acquisition of complex motor skills such as music performance, in which a performer learns both the right movements (motor skill) and the right time to perform them (timing skill). This study investigated the impact of rate (tempo) variability and the schedule of tempo change during practice on timing and motor skill acquisition. Complete novices, with no musical training, practiced a simple musical sequence on a piano keyboard at different rates. Each novice was assigned to one of four learning conditions designed to manipulate the amount of tempo variability across trials (large or small tempo set) and the schedule of tempo change (randomized or non-randomized order) during practice. At test, the novices performed the same musical sequence at a familiar tempo and at novel tempi (testing tempo transfer), as well as two novel (but related) sequences at a familiar tempo (testing spatial transfer). We found that practice conditions had little effect on learning and transfer performance of timing skill. Interestingly, practice conditions influenced motor skill learning (reduction of movement variability): lower temporal variability during practice facilitated transfer to new tempi and new sequences; non-randomized learning schedule improved transfer to new tempi and new sequences. Tempo (rate) and the sequence difficulty (spatial manipulation) affected performance variability in both timing and movement. These findings suggest that there is a

  6. Optically selected GRB afterglows, a real time analysis system at the CFHT

    International Nuclear Information System (INIS)

    Malacrino, F.; Atteia, J.-L.; Klotz, A.; Boer, M.; Kavelaars, J.J.; Cuillandre, J.-C.

    2005-01-01

    We attempt to detect optical GRB afterglows on images taken by the Canada France Hawaii Telescope for the Very Wide survey, a component of the Legacy Survey. To do so, a Real Time Analysis System called Optically Selected GRB Afterglows has been installed on a dedicated computer in Hawaii. This pipeline automatically and quickly analyzes MegaCam images and extracts from them a list of variable objects, which is displayed on a web page for validation by a member of the collaboration. The Very Wide survey covers 1200 square degrees down to i' = 23.5. This paper briefly explains the RTAS process.

  7. Time dependent analysis of assay comparability: a novel approach to understand intra- and inter-site variability over time

    Science.gov (United States)

    Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire

    2015-09-01

    We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. Firstly, a tool for process control is presented. It shows the overall assay variability but allows also the following of changes due to assay adjustments and can additionally highlight other, potentially unexpected variations. Secondly, we define the minimum discriminatory difference/ratio to support projects to understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3 month periods and followed over time for each assay. Again assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.
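
    The process-control idea described above can be illustrated with a simple Shewhart-style chart built from control-compound results: historical runs define a centre line and 3-sigma limits, and later runs that fall outside those limits flag a shift in the assay. The numbers below are invented for illustration and do not reproduce the authors' tool or their minimum discriminatory difference calculation.

```python
import numpy as np

def control_limits(values):
    """Centre line and 3-sigma limits from historical control-compound results."""
    mu, sd = np.mean(values), np.std(values, ddof=1)
    return mu, mu - 3 * sd, mu + 3 * sd

rng = np.random.default_rng(3)
history = rng.normal(loc=0.45, scale=0.06, size=60)   # e.g. past log10 control results
centre, lo, hi = control_limits(history)

new_runs = rng.normal(loc=0.75, scale=0.06, size=5)   # a batch after an assay shift
for i, v in enumerate(new_runs, start=1):
    status = "ok" if lo <= v <= hi else "OUT OF CONTROL"
    print(f"run {i}: {v:.3f}  [{status}]")
```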

  8. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented that is applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large.
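
    As a generic illustration of aggressive time step selection (not the authors' ADI-specific selector), the sketch below grows Δt whenever the relative change per step stays comfortably below a tolerance and shrinks it when a step overshoots; a toy decay update stands in for the implicit solve.

```python
import numpy as np

def adaptive_steps(advance, u0, t_end, dt0, tol=1e-2, grow=1.5, shrink=0.5):
    """March u from t=0 to t_end, adapting dt from the relative change per step.

    `advance(u, dt)` is any single-step update (e.g. one ADI sweep) returning
    the new state; dt is grown while steps stay well within `tol` and halved
    when a step changes the solution too much.
    """
    u, t, dt = np.asarray(u0, dtype=float).copy(), 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        u_new = advance(u, dt)
        change = np.linalg.norm(u_new - u) / max(np.linalg.norm(u), 1e-30)
        if change > tol:          # step too aggressive: discard and retry smaller
            dt *= shrink
            continue
        u, t = u_new, t + dt
        if change < 0.3 * tol:    # comfortably accurate: try a larger step next
            dt *= grow
    return u

# Toy exponential-decay update used only to exercise the selector
decay = lambda u, dt: u * np.exp(-dt)
print(adaptive_steps(decay, np.ones(4), t_end=5.0, dt0=1e-3))   # ~ exp(-5) = 0.0067
```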

  9. Time-of-flight depth image enhancement using variable integration time

    Science.gov (United States)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows the proposed method could effectively remove the motion artifacts while preserving an SNR comparable to the depth images acquired during a long integration time.

  10. An adaptive technique for multiscale approximate entropy (MAEbin) threshold (r) selection: application to heart rate variability (HRV) and systolic blood pressure variability (SBPV) under postural stress.

    Science.gov (United States)

    Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag

    2016-06-01

    Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of time scale τ. Selection of the approximate entropy (ApEn) tolerance threshold 'r' is based on either (1) arbitrary selection in the recommended range of 0.1-0.25 times the standard deviation of the time series, (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold, or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of a time series, respectively. None of these methods is a gold standard for selection of 'r'. In our previous study [1], an adaptive procedure for selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e. 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX (P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower in the upright position, (ii) ApEnbin of HRV decreases from supine, i.e. 1.7324 ± 0.112 (mean ± SD), to upright, 1.4916 ± 0.108, due to vagal inhibition, (iii) ApEnbin of SBPV increases from supine, i.e. 1.5535 ± 0.098, to upright, i.e. 1.6241 ± 0.101, due to sympathetic activation, (iv) individual and cross complexities of RRi and systolic blood pressure (SBP) series depend on the time scale under consideration, and (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26) in steps of 0
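
    For reference, a compact implementation of approximate entropy with the conventional tolerance r = 0.2 times the standard deviation (option (1) above) is sketched below; the RR-interval series is simulated, and the binning and multiscale refinements of the paper are not reproduced.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series; r defaults to 0.2 * SD of the series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])          # embedded vectors
        # fraction of vectors within Chebyshev distance r of each template
        counts = [np.mean(np.max(np.abs(emb - v), axis=1) <= r) for v in emb]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
rr = 0.85 + 0.05 * np.sin(np.arange(300) / 10) + 0.01 * rng.standard_normal(300)
print(f"ApEn of simulated RR series: {approximate_entropy(rr):.3f}")
```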

  11. Resting heart rate variability is associated with ex-Gaussian metrics of intra-individual reaction time variability.

    Science.gov (United States)

    Spangler, Derek P; Williams, DeWayne P; Speller, Lassiter F; Brooks, Justin R; Thayer, Julian F

    2018-03-01

    The relationships between vagally mediated heart rate variability (vmHRV) and the cognitive mechanisms underlying performance can be elucidated with ex-Gaussian modeling, an approach that quantifies two different forms of intra-individual variability (IIV) in reaction time (RT). To this end, the current study examined relations of resting vmHRV to whole-distribution and ex-Gaussian IIV. Subjects (N = 83) completed a 5-minute baseline while vmHRV (root mean square of successive differences; RMSSD) was measured. Ex-Gaussian (sigma, tau) and whole-distribution (standard deviation) estimates of IIV were derived from reaction times on a Stroop task. Resting vmHRV was found to be inversely related to tau (exponential IIV) but not to sigma (Gaussian IIV) or the whole-distribution standard deviation of RTs. Findings suggest that individuals with high vmHRV can better prevent attentional lapses but not difficulties with motor control. These findings inform the differential relationships of cardiac vagal control to the cognitive processes underlying human performance. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
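
    An ex-Gaussian distribution is the sum of a Gaussian (mu, sigma) and an exponential (tau) component, and SciPy exposes it as scipy.stats.exponnorm with shape parameter K = tau/sigma. A short sketch of fitting it to a simulated reaction-time sample (not the study's Stroop data) is shown below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, sigma, tau = 0.55, 0.05, 0.15          # seconds; illustrative "true" values
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)   # simulated ex-Gaussian RTs

# scipy's exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rts)
print(f"mu ~ {loc:.3f} s, sigma ~ {scale:.3f} s, tau ~ {K * scale:.3f} s")
```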

  12. Predictor variables for a half marathon race time in recreational male runners.

    Science.gov (United States)

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Lepers, Romuald; Rosemann, Thomas

    2011-01-01

    The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict a half marathon race time for future novice recreational male half marathoners. Eighty-four male finishers in the 'Half Marathon Basel' completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiacal (r = 0.36) and medial calf skin fold (r = 0.53) were related to race time. For the variables of training and previous experience, speed in running during the training sessions (r = -0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and speed in running during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r² = 0.44): Race time (min) = 72.91 + 3.045 * (body mass index, kg/m²) - 3.884 * (speed in running during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable.
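
    The reported regression can be applied directly; a small worked example is given below, where the runner's BMI and training speed are made-up illustrative values.

```python
def predicted_half_marathon_minutes(bmi_kg_m2, training_speed_kmh):
    """Race-time estimate from the regression reported above (r^2 = 0.44)."""
    return 72.91 + 3.045 * bmi_kg_m2 - 3.884 * training_speed_kmh

# Hypothetical recreational runner: BMI 23 kg/m^2, trains at 11 km/h
print(f"{predicted_half_marathon_minutes(23.0, 11.0):.1f} min")   # about 100.2 min
```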

  13. Assessment of acute pesticide toxicity with selected biochemical variables in suicide attempting subjects

    International Nuclear Information System (INIS)

    Soomro, A.M.; Seehar, G.M.; Bhanger, M.I.

    2003-01-01

    Pesticide-induced changes were assessed in thirty-two subjects of attempted suicide cases. Among all subjects, farmers and their families were recorded as the most frequent suicide attempters. The values obtained from seven biochemical variables of the hospitalized subjects (average age 29 years) were compared to those of the same number of age-matched normal volunteers. The results revealed major differences in the mean values of the selected parameters. The calculated mean differences in alkaline phosphatase (178.7 mu/l), bilirubin (7.5 mg/dl), GPT (59.2 mu/l) and glucose (38.6 mg/dl) were higher than in the controls, which indicates the hepatotoxicity induced by the pesticides in suicide-attempting individuals. Increases in serum creatinine and urea indicated renal malfunction that could be linked with pesticide-induced nephrotoxicity among them. (author)

  14. VARIABILITY OF AMYLOSE AND AMYLOPECTIN IN WINTER WHEAT AND SELECTION FOR SPECIAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Nikolina Weg Krstičević

    2015-06-01

    Full Text Available The aim of this study was to investigate the variability of amylose and amylopectin in 24 Croatian and six foreign winter wheat varieties and to detect the potential of these varieties for special purposes. Starch composition analysis was based on the separation of amylose and amylopectin and the determination of their amounts and ratios. Analysis of the amounts of amylose and amylopectin revealed statistically highly significant differences between the varieties. The tested varieties are mostly bread wheats of differing quality that have the usual content of amylose and amylopectin. Among them, some varieties with high amylopectin and low amylose content, and one variety with high amylose content, were identified. They have potential in future breeding programs and selection for special purposes.

  15. [Application of characteristic NIR variables selection in portable detection of soluble solids content of apple by near infrared spectroscopy].

    Science.gov (United States)

    Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhao, Chun-Jiang

    2014-10-01

    In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apple. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectroscopy of the SSC in apple based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables selected from the full spectrum, which had 1512 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403 °Brix, respectively, for SSC. The proposed method of GA-CARS could effectively simplify the portable detection model of SSC in apple based on near infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.

  16. Bayesian variable selection for post-analytic interrogation of susceptibility loci.

    Science.gov (United States)

    Chen, Siying; Nunez, Sara; Reilly, Muredach P; Foulkes, Andrea S

    2017-06-01

    Understanding the complex interplay among protein coding genes and regulatory elements requires rigorous interrogation with analytic tools designed for discerning the relative contributions of overlapping genomic regions. To this aim, we offer a novel application of Bayesian variable selection (BVS) for classifying genomic class level associations using existing large meta-analysis summary level resources. This approach is applied using the expectation maximization variable selection (EMVS) algorithm to typed and imputed SNPs across 502 protein coding genes (PCGs) and 220 long intergenic non-coding RNAs (lncRNAs) that overlap 45 known loci for coronary artery disease (CAD) using publicly available Global Lipids Genetics Consortium (GLGC) (Teslovich et al., 2010; Willer et al., 2013) meta-analysis summary statistics for low-density lipoprotein cholesterol (LDL-C). The analysis reveals 33 PCGs and three lncRNAs across 11 loci with >50% posterior probabilities for inclusion in an additive model of association. The findings are consistent with previous reports, while providing some new insight into the architecture of LDL-cholesterol to be investigated further. As genomic taxonomies continue to evolve, additional classes, such as enhancer elements and splicing regions, can easily be layered into the proposed analysis framework. Moreover, application of this approach to alternative publicly available meta-analysis resources, or more generally as a post-analytic strategy to further interrogate regions that are identified through single point analysis, is straightforward. All coding examples are implemented in R version 3.2.1 and provided as supplemental material. © 2016, The International Biometric Society.

  17. INDUCED GENETIC VARIABILITY AND SELECTION FOR HIGH YIELDING MUTANTS IN BREAD WHEAT (TRITICUM AESTIVUM L.)

    International Nuclear Information System (INIS)

    SOBIEH, S.EL-S.S.

    2007-01-01

    This study was conducted during the two winter seasons of 2004/2005 and 2005/2006 at the experimental farm belonging to the Plant Research Department, Nuclear Research Centre, AEA, Egypt. The aim of this study is to determine the effect of gamma rays (150, 200 and 250 Gy) on means of yield and its attributes for the exotic wheat variety (vir-25) and the induction of genetic variability that permits visual selection within the irradiated populations, as well as to determine differences in seed protein patterns between the vir-25 parent variety and some selectants in the M2 generation. The results showed that the different doses of gamma rays had a non-significant effect on the mean value of yield/plant and a significant effect on the mean values of its attributes. On the other hand, considerable genetic variability was generated as a result of applying gamma irradiation. The highest amount of induced genetic variability was detected for number of grains/spike, spike length and number of spikes/plant. Additionally, these three traits exhibited strong association with grain yield/plant; hence, they were used as criteria for selection. Some variant plants were selected from the 250 Gy radiation treatment, with 2-10 spikes per plant. These variant plants exhibited increases in spike length and number of grains/spike. The results also revealed that protein electrophoresis patterns varied in the number and position of bands from one genotype to another, and various genotypes share bands with molecular weights of 31.4 and 3.2 KD. Many bands were found to be specific to the genotype, and the nine wheat mutants were characterized by the presence of bands of molecular weights 151.9, 125.7, 14.1 and 5.7 KD at M-1; 67.4, 21.7 and 8.2 at M-2; 99.7 KD at M-3; 136.1, 97.6, 49.8, 27.9 and 20.6 KD at M-4; 135.2, 95.3 and 28.1 KD at M-5; 135.5, 67.7, 47.1, 32.3, 21.9 and 9.6 KD at M-6; 126.1, 112.1, 103.3, 58.8, 20.9 and 12.1 KD at M-7; 127.7, 116.6, 93.9, 55.0 and 47.4 KD at M-8; 141.7, 96.1, 79.8, 68.9, 42.1, 32.7, 22.0 and 13

  18. Spatially variable natural selection and the divergence between parapatric subspecies of lodgepole pine (Pinus contorta, Pinaceae).

    Science.gov (United States)

    Eckert, Andrew J; Shahi, Hurshbir; Datwyler, Shannon L; Neale, David B

    2012-08-01

    Plant populations arrayed across sharp environmental gradients are ideal systems for identifying the genetic basis of ecologically relevant phenotypes. A series of five uplifted marine terraces along the northern coast of California represents one such system where morphologically distinct populations of lodgepole pine (Pinus contorta) are distributed across sharp soil gradients ranging from fertile soils near the coast to podzolic soils ca. 5 km inland. A total of 92 trees was sampled across four coastal marine terraces (N = 10-46 trees/terrace) located in Mendocino County, California and sequenced for a set of 24 candidate genes for growth and responses to various soil chemistry variables. Statistical analyses relying on patterns of nucleotide diversity were employed to identify genes whose diversity patterns were inconsistent with three null models. Most genes displayed patterns of nucleotide diversity that were consistent with null models (N = 19) or with the presence of paralogs (N = 3). Two genes, however, were exceptional: an aluminum responsive ABC-transporter with F(ST) = 0.664 and an inorganic phosphate transporter characterized by divergent haplotypes segregating at intermediate frequencies in most populations. Spatially variable natural selection along gradients of aluminum and phosphate ion concentrations likely accounted for both outliers. These results shed light on some of the genetic components comprising the extended phenotype of this ecosystem, as well as highlight ecotones as fruitful study systems for the detection of adaptive genetic variants.

  19. Repeat what after whom? Exploring variable selectivity in a cross-dialectal shadowing task.

    Directory of Open Access Journals (Sweden)

    Abby Walker

    2015-05-01

    Full Text Available Twenty women from Christchurch, New Zealand and sixteen from Columbus, Ohio (dialect region U.S. Midland) participated in a bimodal lexical naming task where they repeated monosyllabic words after four speakers from four regional dialects: New Zealand, Australia, U.S. Inland North and U.S. Midland. The resulting utterances were acoustically analyzed, and presented to listeners on Amazon Mechanical Turk in an AXB task. Convergence is observed, but differs depending on the dialect of the speaker, the dialect of the model, the particular word class being shadowed, and the order in which dialects are presented to participants. We argue that these patterns are generally consistent with findings that convergence is promoted by a large phonetic distance between shadower and model (Babel, 2010; contra Kim, Horton & Bradlow, 2011) and greater existing variability in a vowel class (Babel, 2012). The results also suggest that more comparisons of accommodation towards different dialects are warranted, and that the investigation of the socio-indexical meaning of specific linguistic forms in context is a promising avenue for understanding variable selectivity in convergence.

  20. The added value of time-variable microgravimetry to the understanding of how volcanoes work

    Science.gov (United States)

    Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel

    2017-01-01

    During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.
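
    As a worked illustration of the magnitude of signal that time-variable gravimetry targets, the vertical attraction of a compact subsurface mass change can be approximated with a point-source model; the mass and depth below are arbitrary illustrative numbers, not values from the review.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def point_mass_dg_microgal(delta_m_kg, depth_m, offset_m=0.0):
    """Vertical gravity change (microGal) at the surface from a buried point mass change."""
    r2 = depth_m ** 2 + offset_m ** 2
    dg = G * delta_m_kg * depth_m / r2 ** 1.5   # vertical component, m/s^2
    return dg * 1e8                             # 1 m/s^2 = 1e8 microGal

# e.g. 1e10 kg of new magma emplaced at 2 km depth, directly beneath the gravimeter
print(f"{point_mass_dg_microgal(1e10, 2000.0):.1f} microGal")   # roughly 17 microGal
```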

  1. Real-time variables dictionary (RTVD), an expert system for development of real-time applications in nuclear power plants

    International Nuclear Information System (INIS)

    Senra Martinez, A.; Schirru, R.; Dutra Thome Filho, Z.

    1990-01-01

    This paper presents a computerized methodology based on a data dictionary managed by an expert system called the Real-Time Variables Dictionary (RTVD). This system is very useful for the development of real-time applications in nuclear power plants. The RTVD functions and its implementation on a VAX 8600 computer are described in detail. The concepts of artificial intelligence used in the RTVD are also pointed out.

  2. Preferences for travel time variability – A study of Danish car drivers

    DEFF Research Database (Denmark)

    Hjorth, Katrine; Rich, Jeppe

    Travel time variability (TTV) is a measure of the extent of unpredictability in travel times. It is generally accepted that TTV has a negative effect on travellers’ wellbeing and overall utility of travelling, and valuation of variability is an important issue in transport demand modelling...... preferences, to exclude non-traders, and to avoid complicated issues related to scheduled public transport services. The survey uses customised Internet questionnaires, containing a series of questions related to the traveller’s most recent morning trip to work, e.g.: • Travel time experienced on this day......, • Number of stops along the way, their duration, and whether these stops involved restrictions on time of day, • Restrictions regarding departure time from home or arrival time at work, • How often such a trip was made within the last month and the range of experienced travel times, • What the traveller...

  3. Stability of Delayed Hopfield Neural Networks with Variable-Time Impulses

    Directory of Open Access Journals (Sweden)

    Yangjun Pei

    2014-01-01

    Full Text Available In this paper the globally exponential stability criteria of delayed Hopfield neural networks with variable-time impulses are established. The proposed criteria can also be applied in Hopfield neural networks with fixed-time impulses. A numerical example is presented to illustrate the effectiveness of our theoretical results.

  4. Lyapunov-based constrained engine torque control using electronic throttle and variable cam timing

    NARCIS (Netherlands)

    Feru, E.; Lazar, M.; Gielen, R.H.; Kolmanovsky, I.V.; Di Cairano, S.

    2012-01-01

    In this paper, predictive control of a spark ignition engine equipped with an electronic throttle and a variable cam timing actuator is considered. The objective is to adjust the throttle angle and the engine cam timing in order to reduce the exhaust gas emissions while maintaining fast and

  5. Norm-times: a design for production time and variability reduction for Faes Cases

    NARCIS (Netherlands)

    Karandeinos, Georgios

    2008-01-01

    This project deals with the production process of the Faes Cases business unit. This company produces custom-made packaging and sells standard solutions with customized interiors. In recent years, it was observed that the throughput time of production is increasing and is hard to forecast

  6. Correlates of adolescent sleep time and variability in sleep time: the role of individual and health related characteristics.

    Science.gov (United States)

    Moore, Melisa; Kirchner, H Lester; Drotar, Dennis; Johnson, Nathan; Rosen, Carol; Redline, Susan

    2011-03-01

    Adolescents are predisposed to short sleep duration and irregular sleep patterns due to certain host characteristics (e.g., age, pubertal status, gender, ethnicity, socioeconomic class, and neighborhood distress) and health-related variables (e.g., ADHD, asthma, birth weight, and BMI). The aim of the current study was to investigate the relationship between such variables and actigraphic measures of sleep duration and variability. Cross-sectional study of 247 adolescents (48.5% female, 54.3% ethnic minority, mean age of 13.7 years) involved in a larger community-based cohort study. Significant univariate predictors of sleep duration included gender, minority ethnicity, neighborhood distress, parent income, and BMI. In multivariate models, gender, minority status, and BMI were significantly associated with sleep duration (all pminority adolescents, and those of a lower BMI obtaining more sleep. Univariate models demonstrated that age, minority ethnicity, neighborhood distress, parent education, parent income, pubertal status, and BMI were significantly related to variability in total sleep time. In the multivariate model, age, minority status, and BMI were significantly related to variability in total sleep time (all pminority adolescents, and those of a lower BMI obtaining more regular sleep. These data show differences in sleep patterns in population sub-groups of adolescents which may be important in understanding pediatric health risk profiles. Sub-groups that may particularly benefit from interventions aimed at improving sleep patterns include boys, overweight, and minority adolescents. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Simulating variable-density flows with time-consistent integration of Navier-Stokes equations

    Science.gov (United States)

    Lu, Xiaoyi; Pantano, Carlos

    2017-11-01

    In this talk, we present several features of a high-order semi-implicit variable-density low-Mach Navier-Stokes solver. A new formulation to solve pressure Poisson-like equation of variable-density flows is highlighted. With this formulation of the numerical method, we are able to solve all variables with a uniform order of accuracy in time (consistent with the time integrator being used). The solver is primarily designed to perform direct numerical simulations for turbulent premixed flames. Therefore, we also address other important elements, such as energy-stable boundary conditions, synthetic turbulence generation, and flame anchoring method. Numerical examples include classical non-reacting constant/variable-density flows, as well as turbulent premixed flames.

  8. FCERI AND HISTAMINE METABOLISM GENE VARIABILITY IN SELECTIVE RESPONDERS TO NSAIDS

    Directory of Open Access Journals (Sweden)

    Gemma Amo

    2016-09-01

    The high-affinity IgE receptor (FcεRI) is a heterotetramer of three subunits, FcεRIα, FcεRIβ and FcεRIγ (αβγ2), encoded by three genes designated FCER1A, FCER1B (MS4A2) and FCER1G, respectively. Recent evidence points to FCERI gene variability as a relevant factor in the risk of developing allergic diseases. Because FcεRI plays a key role in the events downstream of the triggering factors in the immunological response, we hypothesized that FCERI gene variants might be related with the risk of, or with the clinical response to, selective (IgE-mediated) non-steroidal anti-inflammatory drug (NSAID) hypersensitivity. From a cohort of 314 patients suffering from selective hypersensitivity to metamizole, ibuprofen, diclofenac, paracetamol, acetylsalicylic acid (ASA), propifenazone, naproxen, ketoprofen, dexketoprofen, etofenamate, aceclofenac, etoricoxib, dexibuprofen, indomethacin, oxyphenylbutazone or piroxicam, and 585 unrelated healthy controls that tolerated these NSAIDs, we analyzed the putative effects of the FCERI SNPs FCER1A rs2494262, rs2427837 and rs2251746; FCER1B rs1441586, rs569108 and rs512555; and FCER1G rs11587213, rs2070901 and rs11421. Furthermore, in order to identify additional genetic markers which might be associated with the risk of developing selective NSAID hypersensitivity, or which may modify the putative association of FCERI gene variations with risk, we analyzed polymorphisms known to affect histamine synthesis or metabolism, such as rs17740607, rs2073440, rs1801105, rs2052129, rs10156191, rs1049742 and rs1049793 in the HDC, HNMT and DAO genes. No major genetic associations with risk or with clinical presentation, and no gene-gene interactions or gene-phenotype interactions (including age, gender, IgE concentration, antecedents of atopy, culprit drug or clinical presentation) were identified in patients. However, logistic regression analyses indicated that the presence of antecedents of atopy and the DAO SNP rs2052129 (GG

  9. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    Science.gov (United States)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

    We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have noticed that the new ADER
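
    The key implementation point the abstract highlights is that the conserved-to-primitive conversion happens only once per cell, at the cell center. As a minimal illustration (not the authors' code), the sketch below shows that single conversion step for the 1D compressible Euler equations with an ideal-gas equation of state; the variable names and gamma value are assumptions.

    ```python
    import numpy as np

    def cons_to_prim(U, gamma=1.4):
        """Convert conserved variables (rho, rho*u, E) to primitives (rho, u, p).

        U holds cell-center point values of the conserved variables, shape (3, n_cells);
        gamma is the ideal-gas ratio of specific heats.
        """
        rho = U[0]
        u = U[1] / rho
        p = (gamma - 1.0) * (U[2] - 0.5 * rho * u**2)
        return np.stack([rho, u, p])

    # Example: one conversion per cell center, as in the scheme described above.
    U = np.array([[1.0, 0.125], [0.0, 0.0], [2.5, 0.25]])  # Sod-like left/right states
    print(cons_to_prim(U))
    ```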

  10. X-ray spectra and time variability of active galactic nuclei

    International Nuclear Information System (INIS)

    Mushotzky, R.F.

    1984-02-01

    The X-ray spectra of broad line active galactic nuclei (AGN) of all types (Seyfert I's, NELGs, broad-line radio galaxies) are well fit by a power law in the 0.5 to 100 keV band of mean energy slope alpha = 0.68 +/- 0.15. There is, as yet, no strong evidence for time variability of this slope in a given object. The constraints that this places on simple models of the central energy source are discussed. BL Lac objects have quite different X-ray spectral properties and show pronounced X-ray spectral variability. On time scales longer than 12 hours most radio-quiet AGN do not show strong (delta I/I > 0.5) variability. The probability of variability of these AGN seems to be inversely related to their luminosity. However, characteristic timescales for variability have not been measured for many objects. This general lack of variability may imply that most AGN are well below the Eddington limit. Radio-bright AGN tend to be more variable than radio-quiet AGN on long (tau approx. 6 month) timescales

  11. Energy decay of a variable-coefficient wave equation with nonlinear time-dependent localized damping

    Directory of Open Access Journals (Sweden)

    Jieqiong Wu

    2015-09-01

    We study the energy decay for the Cauchy problem of the wave equation with nonlinear time-dependent and space-dependent damping. The damping is localized in a bounded domain and near infinity, and the principal part of the wave equation has a variable-coefficient. We apply the multiplier method for variable-coefficient equations, and obtain an energy decay that depends on the property of the coefficient of the damping term.
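
    For readers unfamiliar with the setup, a schematic form of the problem class and of the energy functional being estimated is sketched below (our notation, not necessarily the authors'): A(x) is the variable coefficient of the principal part, a(t,x) >= 0 localizes the damping, and g is the nonlinearity acting on u_t.

    ```latex
    % Schematic Cauchy problem with variable-coefficient principal part and
    % nonlinear, time-dependent, localized damping (notation assumed):
    \begin{aligned}
    & u_{tt} - \operatorname{div}\!\big(A(x)\nabla u\big) + a(t,x)\, g(u_t) = 0,
      \qquad (t,x) \in (0,\infty) \times \mathbb{R}^n, \\
    & E(t) = \tfrac12 \int_{\mathbb{R}^n} \Big( |u_t|^2 + A(x)\nabla u \cdot \nabla u \Big)\, dx,
      \qquad
      \frac{dE}{dt} = - \int_{\mathbb{R}^n} a(t,x)\, g(u_t)\, u_t \, dx \le 0 .
    \end{aligned}
    ```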

  12. Reaction Time Variability in Children With ADHD Symptoms and/or Dyslexia

    OpenAIRE

    Gooch, Debbie; Snowling, Margaret J.; Hulme, Charles

    2012-01-01

    Reaction time (RT) variability on a Stop Signal task was examined among children with attention deficit hyperactivity disorder (ADHD) symptoms and/or dyslexia in comparison to typically developing (TD) controls. Children’s go-trial RTs were analyzed using a novel ex-Gaussian method. Children with ADHD symptoms had increased variability in the fast but not the slow portions of their RT distributions compared to those without ADHD symptoms. The RT distributions of children with d...
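
    The abstract mentions an ex-Gaussian decomposition of go-trial RTs. As an illustrative sketch (not the authors' pipeline), SciPy's exponnorm distribution can recover the Gaussian (mu, sigma) and exponential (tau) components from a vector of reaction times; the simulated data and parameter values below are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulate ex-Gaussian reaction times: Normal(mu, sigma) + Exponential(tau), in ms.
    mu, sigma, tau = 400.0, 40.0, 120.0
    rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

    # SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale) with
    # mu = loc, sigma = scale, tau = K * scale.
    K, loc, scale = stats.exponnorm.fit(rt)
    print(f"mu ~ {loc:.1f} ms, sigma ~ {scale:.1f} ms, tau ~ {K * scale:.1f} ms")
    ```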

  13. Competency-Based, Time-Variable Education in the Health Professions: Crossroads.

    Science.gov (United States)

    Lucey, Catherine R; Thibault, George E; Ten Cate, Olle

    2018-03-01

    Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.

  14. The Steppengrille (Gryllus spec./assimilis): selective filters and signal mismatch on two time scales.

    Directory of Open Access Journals (Sweden)

    Matti Michael Rothbart

    In Europe, several species of crickets are available commercially as pet food. Here we investigated the calling song and phonotactic selectivity for sound patterns on the short and long time scales for one such cricket, Gryllus spec., available as "Gryllus assimilis", the Steppengrille, originally from Ecuador. The calling song consisted of short chirps (2-3 pulses, carrier frequency 5.0 kHz) emitted with a pulse period of 30.2 ms and a chirp rate of 0.43 per second. Females exhibited high selectivity on both time scales. The preference for pulse period peaked at 33 ms, which was higher than the pulse period produced by males. Two consecutive pulses per chirp at the correct pulse period were already sufficient for positive phonotaxis. The preference for the chirp pattern was limited by selectivity for small chirp duty cycles and for chirp periods between 200 ms and 500 ms. The long chirp period of the songs of males was unattractive to females. On both time scales a mismatch between the song signal of the males and the preference of females was observed. The variability of song parameters as quantified by the coefficient of variation was below 50% for all temporal measures. Hence, there was not a strong indication for directional selection on song parameters by females which could account for the observed mismatch. The divergence of the chirp period and female preference may originate from a founder effect when the Steppengrille was cultured. Alternatively, the mismatch was a result of selection pressures exerted by commercial breeders on low singing activity, to satisfy customers with softly singing crickets. In the latter case the prominent divergence between male song and female preference was the result of domestication and may serve as an example of rapid evolution of song traits in acoustic communication systems.

  15. The swan song in context: long-time-scale X-ray variability of NGC 4051

    Science.gov (United States)

    Uttley, P.; McHardy, I. M.; Papadakis, I. E.; Guainazzi, M.; Fruscione, A.

    1999-07-01

    On 1998 May 9-11, the highly variable, low-luminosity Seyfert 1 galaxy NGC 4051 was observed in an unusual low-flux state by BeppoSAX, RXTE and EUVE. We present fits of the 4-15 keV RXTE spectrum and the BeppoSAX MECS spectrum obtained during this observation, which are consistent with the interpretation that the source had switched off, leaving only the spectrum of pure reflection from distant cold matter. We place this result in context by showing the X-ray light curve of NGC 4051 obtained by our RXTE monitoring campaign over the past two and a half years, which shows that the low state lasted for ~150 d before the May observations (implying that the reflecting material is >10^17 cm from the continuum source) and forms part of a light curve showing distinct variations in long-term average flux over time-scales > months. We show that the long-time-scale component to X-ray variability is intrinsic to the primary continuum and is probably distinct from the variability at shorter time-scales. The long-time-scale component to variability may be associated with variations in the accretion flow of matter on to the central black hole. As the source approaches the low state, the variability process becomes non-linear. NGC 4051 may represent a microcosm of all X-ray variability in radio-quiet active galactic nuclei (AGNs), displaying in a few years a variety of flux states and variability properties which more luminous AGNs may pass through on time-scales of decades to thousands of years.

  16. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is resemblance to PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (10^6-7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6-7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s, in an expansion of the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe Kalpha variability characteristics. The Fe Kalpha line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and
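
    As a simplified, hedged illustration of the kind of analysis described (periodogram estimation and a power-law slope fit on an evenly sampled light curve), not the high-dynamic-range PSD machinery used in the dissertation, the light curve, sampling, and fit below are assumptions.

    ```python
    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(1)

    # Toy evenly sampled light curve: red-noise-like flux series, one point per day.
    n_days = 1024
    flux = np.cumsum(rng.normal(size=n_days))        # random walk, roughly f^-2 spectrum
    flux += rng.normal(scale=0.5, size=n_days)        # white measurement noise

    freq, psd = periodogram(flux, fs=1.0, detrend="linear")  # fs in cycles/day
    good = freq > 0

    # Fit a single power law P(f) ~ f^alpha in log-log space; a real analysis would
    # fit a bending/broken power law to locate the characteristic break time scale.
    alpha, log_norm = np.polyfit(np.log10(freq[good]), np.log10(psd[good]), 1)
    print(f"fitted PSD slope alpha ~ {alpha:.2f}")
    ```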

  17. Real-time flavor tagging selection in ATLAS

    CERN Document Server

    Sahinsoy, M; The ATLAS collaboration

    2014-01-01

    In high-energy physics experiments, online selection is crucial to reject most uninteresting collisions; in particular, b-jet selections, part of the ATLAS trigger strategy, are meant to select final states with heavy-flavor content. This is the only option to select fully hadronic final states containing b-jets, and is important to reject QCD light jets and maintain affordable trigger rates without raising jet energy thresholds. ATLAS operated b-jet triggers in both 2011 and 2012 data-taking campaigns and is now working to improve the performance of tagging algorithms for Run2. An overview of the ATLAS b-jet trigger strategy and its performance on real data is presented in this contribution, along with future prospects. Data-driven techniques to extract the online b-tagging performance, a key ingredient for all analyses relying on such triggers, are also discussed and results presented.

  18. Modelling accuracy and variability of motor timing in treated and untreated Parkinson’s disease and healthy controls

    Directory of Open Access Journals (Sweden)

    Catherine Rhian Gwyn Jones

    2011-12-01

    Parkinson’s disease (PD) is characterised by difficulty with the timing of movements. Data collected using the synchronization-continuation paradigm, an established motor timing paradigm, have produced varying results, but most studies find impairment. Some of this inconsistency comes from variation in the medication state tested, in the inter-stimulus intervals (ISI) selected, and in the changeable focus on either the synchronization (tapping in time with a tone) or continuation (maintaining the rhythm in the absence of the tone) phase. We sought to revisit the paradigm by testing across four groups of participants: healthy controls, medication-naïve de novo PD patients, and treated PD patients both ‘on’ and ‘off’ dopaminergic medication. Four finger tapping intervals (ISIs) were used: 250 ms, 500 ms, 1000 ms and 2000 ms. Categorical predictors (group, ISI, and phase) were used to predict accuracy and variability using a linear mixed model. Accuracy was defined as the relative error of a tap, and variability as the deviation of the participant’s tap from the group-predicted relative error. Our primary finding is that the treated PD group (PD patients ‘on’ and ‘off’ dopaminergic therapy) showed a significantly different pattern of accuracy compared to the de novo group and the healthy controls at the 250 ms interval. At this interval, the treated PD patients performed ‘ahead’ of the beat whilst the other groups performed ‘behind’ the beat. We speculate that this ‘hastening’ relates to the clinical phenomenon of motor festination. Across all groups, variability was smallest for both phases at the 500 ms interval, suggesting an innate preference for finger tapping within this range. Tapping variability for the two phases became increasingly divergent at the longer intervals, with worse performance in the continuation phase. The data suggest that patients with PD can be best discriminated from healthy controls on measures of
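
    A minimal sketch of the kind of linear mixed model described (categorical group, ISI and phase predicting relative tap error, with a random intercept per participant), using statsmodels; the synthetic data frame, column names, and simplified fixed-effects structure are assumptions, not the study's data or exact model.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    # Toy data: 20 participants x 4 ISIs x 2 phases, relative tap error in per cent.
    rows = []
    for pid in range(20):
        group = ["control", "de_novo", "pd_on", "pd_off"][pid % 4]
        for isi in [250, 500, 1000, 2000]:
            for phase in ["synchronization", "continuation"]:
                err = rng.normal(loc=-2.0 if isi == 250 else 1.0, scale=3.0)
                rows.append((pid, group, isi, phase, err))
    df = pd.DataFrame(rows, columns=["participant", "group", "isi", "phase", "rel_error"])

    # Mixed model: group-by-ISI fixed effects plus phase, random intercept per participant.
    model = smf.mixedlm("rel_error ~ C(group) * C(isi) + C(phase)", df,
                        groups=df["participant"])
    print(model.fit().summary())
    ```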

  19. Neural network real time event selection for the DIRAC experiment

    CERN Document Server

    Kokkas, P; Tauscher, Ludwig; Vlachos, S

    2001-01-01

    The neural network first level trigger for the DIRAC experiment at CERN is presented. Both the neural network algorithm used and its actual hardware implementation are described. The system uses the fast plastic scintillator information of the DIRAC spectrometer. In 210 ns it selects events with two particles having low relative momentum. Such events are selected with an efficiency of more than 0.94. The corresponding rate reduction for background events is a factor of 2.5. (10 refs).

  20. Low-Energy Real-Time OS Using Voltage Scheduling Algorithm for Variable Voltage Processors

    OpenAIRE

    Okuma, Takanori; Yasuura, Hiroto

    2001-01-01

    This paper presents a real-time OS based on μITRON using the proposed voltage scheduling algorithm for variable voltage processors, which can vary supply voltage dynamically. The proposed voltage scheduling algorithms assign a voltage level for each task dynamically in order to minimize energy consumption under timing constraints. Using the presented real-time OS, running tasks with low supply voltage leads to drastic energy reduction. In addition, the presented voltage scheduling algorithm is ...
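
    A toy sketch of the basic idea behind static voltage scheduling for periodic tasks (not the paper's μITRON implementation): pick the lowest processor speed that still meets the EDF schedulability bound, since dynamic energy scales roughly with the square of the supply voltage and frequency tracks voltage. The task set, speed levels, and energy model below are assumptions.

    ```python
    # Static voltage/speed selection for periodic tasks under EDF.
    # Each task is (worst-case cycles at full speed, period), in the same time unit.
    tasks = [(2.0, 10.0), (3.0, 20.0), (1.0, 5.0)]      # assumed example task set
    speed_levels = [0.25, 0.5, 0.75, 1.0]                # normalized frequency/voltage steps

    utilization = sum(c / t for c, t in tasks)           # EDF is feasible if U <= speed

    # Choose the lowest discrete speed that keeps the task set schedulable.
    speed = min((s for s in speed_levels if s >= utilization), default=1.0)

    # Rough dynamic-energy model: E ~ V^2 * cycles, with V proportional to speed.
    energy_full = sum(c for c, _ in tasks) * 1.0**2
    energy_scaled = sum(c for c, _ in tasks) * speed**2
    print(f"U = {utilization:.2f}, chosen speed = {speed}, "
          f"energy ratio = {energy_scaled / energy_full:.2f}")
    ```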

  1. Continuous performance task in ADHD: Is reaction time variability a key measure?

    Science.gov (United States)

    Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B

    2018-01-01

    To compare the use of the Continuous Performance Task (CPT) reaction time variability (intraindividual variability or standard deviation of reaction time), as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD), and stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT-X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.

  2. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea

    2014-10-31

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  3. Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping

    KAUST Repository

    Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun

    2014-01-01

    © Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.

  4. Disruption of Brewers' yeast by hydrodynamic cavitation: Process variables and their influence on selective release.

    Science.gov (United States)

    Balasundaram, B; Harrison, S T L

    2006-06-05

    Intracellular products, not secreted from the microbial cell, are released by breaking the cell envelope consisting of cytoplasmic membrane and an outer cell wall. Hydrodynamic cavitation has been reported to cause microbial cell disruption. By manipulating the operating variables involved, a wide range of intensity of cavitation can be achieved resulting in a varying extent of disruption. The effect of the process variables including cavitation number, initial cell concentration of the suspension and the number of passes across the cavitation zone on the release of enzymes from various locations of the Brewers' yeast was studied. The release profile of the enzymes studied include alpha-glucosidase (periplasmic), invertase (cell wall bound), alcohol dehydrogenase (ADH; cytoplasmic) and glucose-6-phosphate dehydrogenase (G6PDH; cytoplasmic). An optimum cavitation number Cv of 0.13 for maximum disruption was observed across the range Cv 0.09-0.99. The optimum cell concentration was found to be 0.5% (w/v, wet wt) when varying over the range 0.1%-5%. The sustained effect of cavitation on the yeast cell wall when re-circulating the suspension across the cavitation zone was found to release the cell wall bound enzyme invertase (86%) to a greater extent than the enzymes from other locations of the cell (e.g. periplasmic alpha-glucosidase at 17%). Localised damage to the cell wall could be observed using transmission electron microscopy (TEM) of cells subjected to less intense cavitation conditions. Absence of the release of cytoplasmic enzymes to a significant extent, absence of micronisation as observed by TEM and presence of a lower number of proteins bands in the culture supernatant on SDS-PAGE analysis following hydrodynamic cavitation compared to disruption by high-pressure homogenisation confirmed the selective release offered by hydrodynamic cavitation. Copyright 2006 Wiley Periodicals, Inc.
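
    For context, the cavitation number reported in the abstract is commonly defined for orifice-driven hydrodynamic cavitation as below (a standard definition, not quoted from the paper), where p_2 is the fully recovered downstream pressure, p_v the vapour pressure of the liquid, rho its density, and v the velocity at the constriction; lower C_v generally means more intense cavitation.

    ```latex
    % Common definition of the cavitation number for flow through a constriction:
    C_v = \frac{p_2 - p_v}{\tfrac{1}{2}\,\rho\, v^{2}}
    ```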

  5. A Variable Service Broker Routing Policy for data center selection in cloud analyst

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2017-07-01

    Cloud computing depends on sharing distributed computing resources to handle different services such as servers, storage and applications. The applications and infrastructure are provided as pay-per-use services through data centers to the end user. The data centers are located at different geographic locations. However, these data centers can become overloaded with the increasing number of client applications being serviced at the same time and location, which degrades the overall QoS of the distributed services. Since different user applications may have different configurations and requirements, measuring the performance of user applications across various resources is challenging, and the service provider cannot make decisions for the right level of resources. Therefore, we propose a Variable Service Broker Routing Policy (VSBRP), a heuristic-based technique that aims to achieve minimum response time by considering the communication channel bandwidth, latency and the size of the job. The proposed service broker policy also reduces the overloading of the data centers by redirecting user requests to the next data center that yields better response and processing time. The simulation shows promising results in terms of response and processing time compared to other known broker policies from the literature.
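
    A minimal, hedged sketch of the selection rule the abstract describes: score each candidate data center by an estimated response time built from channel latency, transfer time (job size over bandwidth), and a simple processing-time estimate, then route to the best one. The data structures, numbers, and scoring details are assumptions, not the published policy.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        latency_ms: float        # network latency to the user region
        bandwidth_mbps: float    # available channel bandwidth
        pending_jobs: int        # current queue length
        service_rate: float      # jobs processed per second

    def estimated_response_ms(dc: DataCenter, job_size_mb: float) -> float:
        transfer_ms = job_size_mb * 8.0 / dc.bandwidth_mbps * 1000.0
        processing_ms = (dc.pending_jobs + 1) / dc.service_rate * 1000.0
        return dc.latency_ms + transfer_ms + processing_ms

    def route(job_size_mb: float, centers: list) -> DataCenter:
        # Pick the data center with the smallest estimated response time;
        # an overloaded center naturally loses to the "next" best one.
        return min(centers, key=lambda dc: estimated_response_ms(dc, job_size_mb))

    centers = [
        DataCenter("DC-1", latency_ms=20, bandwidth_mbps=100, pending_jobs=40, service_rate=50),
        DataCenter("DC-2", latency_ms=60, bandwidth_mbps=500, pending_jobs=5, service_rate=40),
    ]
    print(route(job_size_mb=25, centers=centers).name)
    ```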

  6. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    Science.gov (United States)

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement in the visual feedback display of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition, followed by the high scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures consistent with 1/f process models of the time scales of motor output variability.

  7. Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2

    Science.gov (United States)

    Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.

    2018-05-01

    We present the first short time-scale (~hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ~30 min cadence, selected from the Million Quasar Catalogue with R magnitude <19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found, indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10^-6 to 10^-4 Hz varies from 1 to 10 per cent with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known `bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects still remain present.

  8. Evidence for a time-invariant phase variable in human ankle control.

    Directory of Open Access Journals (Sweden)

    Robert D Gregg

    Human locomotion is a rhythmic task in which patterns of muscle activity are modulated by state-dependent feedback to accommodate perturbations. Two popular theories have been proposed for the underlying embodiment of phase in the human pattern generator: a time-dependent internal representation or a time-invariant feedback representation (i.e., reflex mechanisms). In either case the neuromuscular system must update or represent the phase of locomotor patterns based on the system state, which can include measurements of hundreds of variables. However, a much simpler representation of phase has emerged in recent designs for legged robots, which control joint patterns as functions of a single monotonic mechanical variable, termed a phase variable. We propose that human joint patterns may similarly depend on a physical phase variable, specifically the heel-to-toe movement of the Center of Pressure under the foot. We found that when the ankle is unexpectedly rotated to a position it would have encountered later in the step, the Center of Pressure also shifts forward to the corresponding later position, and the remaining portion of the gait pattern ensues. This phase shift suggests that the progression of the stance ankle is controlled by a biomechanical phase variable, motivating future investigations of phase variables in human locomotor control.

  9. Variability of African Farming Systems from Phenological Analysis of NDVI Time Series

    Science.gov (United States)

    Vrieling, Anton; deBeurs, K. M.; Brown, Molly E.

    2011-01-01

    Food security exists when people have access to sufficient, safe and nutritious food at all times to meet their dietary needs. The natural resource base is one of the many factors affecting food security. Its variability and decline create problems for local food production. In this study we characterize vegetation phenology for sub-Saharan Africa and assess variability and trends of phenological indicators based on NDVI time series from 1982 to 2006. We focus on cumulated NDVI over the season (cumNDVI), which is a proxy for net primary productivity. Results are aggregated at the level of major farming systems, while also determining spatial variability within farming systems. High temporal variability of cumNDVI occurs in semiarid and subhumid regions. The results show a large area of positive cumNDVI trends between Senegal and South Sudan. These correspond to positive CRU rainfall trends and relate to recovery after the 1980s droughts. We find significant negative cumNDVI trends near the south coast of West Africa (Guinea coast) and in Tanzania. For each farming system, causes of change and variability are discussed based on available literature (Appendix A). Although food security comprises more than the local natural resource base, our results can serve as an input for food security analysis by identifying zones of high variability or downward trends. Farming systems are found to be a useful level of analysis. The diversity and trends found within farming system boundaries underline that farming systems are dynamic.
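
    A small sketch of the two quantities discussed, cumulated NDVI over the growing season and its linear trend across years, computed per pixel with NumPy; the array shapes, season window, and synthetic values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic NDVI stack: 25 years x 36 dekads (10-day composites) x H x W pixels.
    years, dekads, h, w = 25, 36, 4, 4
    ndvi = rng.uniform(0.1, 0.7, size=(years, dekads, h, w))

    # Cumulated NDVI over an assumed growing season (dekads 10..27), per year and pixel.
    season = slice(10, 28)
    cum_ndvi = ndvi[:, season].sum(axis=1)                 # shape (years, h, w)

    # Per-pixel linear trend of cumNDVI across years (slope of a least-squares fit).
    t = np.arange(years)
    slopes = np.polyfit(t, cum_ndvi.reshape(years, -1), 1)[0].reshape(h, w)
    print("mean seasonal cumNDVI:", cum_ndvi.mean().round(2))
    print("per-pixel trend (cumNDVI per year):\n", slopes.round(3))
    ```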

  10. Real-time flavor tagging selection in ATLAS

    CERN Document Server

    Madaffari, D; The ATLAS collaboration

    2014-01-01

    In high-energy physics experiments on hadron colliders, online selection is crucial to reject most uninteresting collisions. In particular, the ATLAS experiment includes b-jet selections in its trigger strategy, in order to select final states with heavy-flavor content and enlarge its physics potential. Dedicated selections are developed to quickly identify fully hadronic final states containing b-jets, while rejecting light QCD jets, and to maintain affordable trigger rates without raising jet energy thresholds. ATLAS successfully operated b-jet trigger selections during both the 2011 and 2012 data-taking campaigns, and work is ongoing to improve the performance of tagging algorithms for the coming Run 2 in 2015. An overview of the ATLAS b-jet trigger strategy and its performance on real data is presented in this contribution, along with future prospects. Data-driven techniques to extract the online b-tagging performance, a key ingredient for all analyses relying on such triggers, are also discussed and result...

  11. Seasonal variability of the Red Sea, from GRACE time-variable gravity and altimeter sea surface height measurements

    Science.gov (United States)

    Wahr, John; Smeed, David; Leuliette, Eric; Swenson, Sean

    2014-05-01

    Seasonal variability of sea surface height and mass within the Red Sea occurs mostly through the exchange of heat with the atmosphere and wind-driven inflow and outflow of water through the strait of Bab el Mandab that opens into the Gulf of Aden to the south. The seasonal effects of precipitation and evaporation, of water exchange through the Suez Canal to the north, and of runoff from the adjacent land are all small. The flow through the Bab el Mandab involves a net mass transfer into the Red Sea during the winter and a net transfer out during the summer. But that flow has a multi-layer pattern, so that in the summer there is actually an influx of cool water at intermediate (~100 m) depths. Thus, summer water in the southern Red Sea is warmer near the surface due to higher air temperatures, but cooler at intermediate depths (especially in the far south). Summer water in the northern Red Sea experiences warming by air-sea exchange only. The temperature profile affects the water density, which impacts the sea surface height but has no effect on vertically integrated mass. Here, we study this seasonal cycle by combining GRACE time-variable mass estimates, altimeter (Jason-1, Jason-2, and Envisat) measurements of sea surface height, and steric sea surface height contributions derived from depth-dependent, climatological values of temperature and salinity obtained from the World Ocean Atlas. We find good consistency, particularly in the northern Red Sea, between these three data types. Among the general characteristics of our results are: (1) the mass contributions to seasonal SSHT variations are much larger than the steric contributions; (2) the mass signal is largest in winter, consistent with winds pushing water into the Red Sea through the Strait of Bab el Mandab in winter, and out during the summer; and (3) the steric signal is largest in summer, consistent with summer sea surface warming.

  12. Predictor variables for a half marathon race time in recreational male runners

    Directory of Open Access Journals (Sweden)

    Rüst CA

    2011-08-01

    Christoph Alexander Rüst (1), Beat Knechtle (1,2), Patrizia Knechtle (2), Ursula Barandun (1), Romuald Lepers (3), Thomas Rosemann (1); (1) Institute of General Practice and Health Services Research, University of Zurich, Zurich, Switzerland; (2) Gesundheitszentrum St Gallen, St Gallen, Switzerland; (3) INSERM U887, University of Burgundy, Faculty of Sport Sciences, Dijon, France. Abstract: The aim of this study was to investigate predictor variables of anthropometry, training, and previous experience in order to predict a half marathon race time for future novice recreational male half marathoners. Eighty-four male finishers in the ‘Half Marathon Basel’ completed the race distance within (mean and standard deviation, SD) 103.9 (16.5) min, running at a speed of 12.7 (1.9) km/h. After multivariate analysis of the anthropometric characteristics, body mass index (r = 0.56), suprailiacal (r = 0.36) and medial calf skin fold (r = 0.53) were related to race time. For the variables of training and previous experience, speed in running of the training sessions (r = –0.54) was associated with race time. After multivariate analysis of both the significant anthropometric and training variables, body mass index (P = 0.0150) and speed in running during training (P = 0.0045) were related to race time. Race time in a half marathon might be partially predicted by the following equation (r2 = 0.44): Race time (min) = 72.91 + 3.045 * (body mass index, kg/m2) – 3.884 * (speed in running during training, km/h) for recreational male runners. To conclude, variables of both anthropometry and training were related to half marathon race time in recreational male half marathoners and cannot be reduced to one single predictor variable. Keywords: anthropometry, body fat, skin-folds, training, endurance
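
    The published regression equation can be evaluated directly; the sketch below simply re-implements it as given in the abstract (r2 = 0.44), with example inputs that are assumptions.

    ```python
    def predicted_half_marathon_time(bmi_kg_m2: float, training_speed_km_h: float) -> float:
        """Race time in minutes from the regression reported in the abstract (r2 = 0.44)."""
        return 72.91 + 3.045 * bmi_kg_m2 - 3.884 * training_speed_km_h

    # Example (assumed values): BMI of 23 kg/m^2, training runs at 11 km/h.
    print(f"{predicted_half_marathon_time(23.0, 11.0):.1f} min")
    ```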

  13. Improving the Classification Accuracy for Near-Infrared Spectroscopy of Chinese Salvia miltiorrhiza Using Local Variable Selection

    Directory of Open Access Journals (Sweden)

    Lianqing Zhu

    2018-01-01

    In order to improve the classification accuracy of Chinese Salvia miltiorrhiza using near-infrared spectroscopy, a novel local variable selection strategy is proposed. Combining the strengths of the local algorithm and interval partial least squares, the spectral data are first divided into several pairs of classes in the sample direction and equidistant subintervals in the variable direction. Then, a local classification model is built, and the most appropriate spectral region is selected based on a new evaluation criterion considering both the classification error rate and the best predictive ability under a leave-one-out cross-validation scheme for each pair of classes. Finally, each observation is assigned to a class according to the statistical analysis of the classification results of the local classification model built on the selected variables. The performance of the proposed method was demonstrated on near-infrared spectra of cultivated or wild Salvia miltiorrhiza, collected from 8 geographical origins in 5 provinces of China. For comparison, soft independent modelling of class analogy and partial least squares discriminant analysis methods were, respectively, employed as the classification model. Experimental results showed that the classification performance of the model with local variable selection was clearly better than that without variable selection.

  14. A modification of the successive projections algorithm for spectral variable selection in the presence of unknown interferents.

    Science.gov (United States)

    Soares, Sófacles Figueredo Carreiro; Galvão, Roberto Kawakami Harrop; Araújo, Mário César Ugulino; da Silva, Edvan Cirino; Pereira, Claudete Fernandes; de Andrade, Stéfani Iury Evangelista; Leite, Flaviano Carvalho

    2011-03-09

    This work proposes a modification to the successive projections algorithm (SPA) aimed at selecting spectral variables for multiple linear regression (MLR) in the presence of unknown interferents not included in the calibration data set. The modified algorithm favours the selection of variables in which the effect of the interferent is less pronounced. The proposed procedure can be regarded as an adaptive modelling technique, because the spectral features of the samples to be analyzed are considered in the variable selection process. The advantages of this new approach are demonstrated in two analytical problems, namely (1) ultraviolet-visible spectrometric determination of tartrazine, allura red and sunset yellow in aqueous solutions under the interference of erythrosine, and (2) near-infrared spectrometric determination of ethanol in gasoline under the interference of toluene. In these case studies, the performance of conventional MLR-SPA models is substantially degraded by the presence of the interferent. This problem is circumvented by applying the proposed Adaptive MLR-SPA approach, which results in prediction errors smaller than those obtained by three other multivariate calibration techniques, namely stepwise regression, full-spectrum partial-least-squares (PLS) and PLS with variables selected by a genetic algorithm. An inspection of the variable selection results reveals that the Adaptive approach successfully avoids spectral regions in which the interference is more intense. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. A Selected Annotated Bibliography on Work Time Options.

    Science.gov (United States)

    Ivantcho, Barbara

    This annotated bibliography is divided into three sections. Section I contains annotations of general publications on work time options. Section II presents resources on flexitime and the compressed work week. In Section III are found resources related to these reduced work time options: permanent part-time employment, job sharing, voluntary…

  16. Field site selection: getting it right first time around

    NARCIS (Netherlands)

    Malcolm, Colin A.; El Sayed, Badria; Babiker, Ahmed; Girod, Romain; Fontenille, Didier; Knols, Bart G. J.; Nugud, Abdel Hameed; Benedict, Mark Q.

    2009-01-01

    The selection of suitable field sites for integrated control of Anopheles mosquitoes using the sterile insect technique (SIT) requires consideration of the full gamut of factors facing most proposed control strategies, but four criteria identify an ideal site: 1) a single malaria vector, 2) an

  17. Danish Mutual Fund Performance - Selectivity, Market Timing and Persistence

    DEFF Research Database (Denmark)

    Christensen, Michael

    Funds under management by Danish mutual funds have increased by 25% annually during the last 10 years and measured per capita Denmark has the third largest mutual fund industry in Europe. This paper provides the first independent performance analysis of Danish mutual funds. We analyse selectivity...

  18. Selection for Dutch postgraduate GP training; time for improvement

    NARCIS (Netherlands)

    Vermeulen, M.I.; Kuyvenhoven, M.M.; Zuithoff, N.P.; Tromp, F.; Graaf, Y. van der; Pieters, R.H.

    2012-01-01

    Background: In the Netherlands we select candidates for the postgraduate GP training by assessing personal qualities in interviews. Because of differences in the ratio of number of candidates and number of vacancies between the eight departments of GP training we questioned whether the risk of being

  19. A search for time variability and its possible regularities in linear polarization of Be stars

    International Nuclear Information System (INIS)

    Huang, L.; Guo, Z.H.; Hsu, J.C.; Huang, L.

    1989-01-01

    Linear polarization measurements are presented for 14 Be stars obtained at McDonald Observatory during four observing runs from June to November of 1983. Methods of observation and data reduction are described. Seven of the eight program stars which were observed on six or more nights exhibited obvious polarimetric variations on time-scales of days or months. The incidence is estimated as 50% and may be as high as 93%. No connection can be found between polarimetric variability and rapid periodic light or spectroscopic variability for our stars. Ultra-rapid variability on time-scales of minutes was searched for, with negative results. In all cases the position angles also show variations, indicating that the axis of symmetry of the circumstellar envelope changes its orientation in space. For the Be binary CX Dra the variations in polarization seem to have a period which is just half of the orbital period

  20. On the selection of significant variables in a model for the deteriorating process of facades

    Science.gov (United States)

    Serrat, C.; Gibert, V.; Casas, J. R.; Rapinski, J.

    2017-10-01

    In previous works the authors of this paper have introduced a predictive system that uses survival analysis techniques for the study of time-to-failure in the facades of a building stock. The approach is population based, in order to obtain information on the evolution of the stock across time and to help the manager in the decision-making process on global maintenance strategies. For the decision making it is crucial to determine those covariates (like materials, morphology and characteristics of the facade, orientation or environmental conditions) that play a significant role in the progression of different failures. The proposed platform also incorporates an open source GIS plugin that includes survival and test modules that allow the investigator to model the time until a lesion appears, taking into account the variables collected during the inspection process. The aim of this paper is twofold: a) to briefly introduce the predictive system, as well as the inspection and the analysis methodologies, and b) to introduce and illustrate the modeling strategy for the deteriorating process of an urban front. The illustration will be focused on the city of L’Hospitalet de Llobregat (Barcelona, Spain), in which more than 14,000 facades have been inspected and analyzed.
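
    As a self-contained illustration of the population-based survival idea (not the authors' GIS plugin), the sketch below computes a Kaplan-Meier estimate of the probability that a facade remains lesion-free as a function of age, from right-censored inspection data; the toy data are assumptions.

    ```python
    import numpy as np

    def kaplan_meier(time, event):
        """Return (distinct event times, survival probabilities) for right-censored data.

        time  : age of the facade at failure or at last inspection (years)
        event : 1 if a lesion was observed at that age, 0 if censored
        """
        time, event = np.asarray(time, float), np.asarray(event, int)
        order = np.argsort(time)
        time, event = time[order], event[order]
        surv, out_t, out_s = 1.0, [], []
        for t in np.unique(time[event == 1]):
            at_risk = np.sum(time >= t)
            failures = np.sum((time == t) & (event == 1))
            surv *= 1.0 - failures / at_risk
            out_t.append(t)
            out_s.append(surv)
        return np.array(out_t), np.array(out_s)

    # Toy inspection data: facade ages (years) and whether a lesion was present.
    ages   = [12, 20, 25, 25, 30, 34, 40, 45, 52, 60]
    lesion = [ 0,  1,  1,  0,  1,  0,  1,  1,  0,  1]
    t, s = kaplan_meier(ages, lesion)
    for ti, si in zip(t, s):
        print(f"age {ti:>4.0f} yr: estimated lesion-free fraction {si:.2f}")
    ```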

  1. On the physical processes which lie at the bases of time variability of GRBs

    International Nuclear Information System (INIS)

    Ruffini, R.; Bianco, C. L.; Fraschetti, F.; Xue, S-S.

    2001-01-01

    The relative-space-time-transformation (RSTT) paradigm and the interpretation of the burst-structure (IBS) paradigm are applied to probe the origin of the time variability of GRBs. Again GRB 991216 is used as a prototypical case, thanks to the precise data from the CGRO, RXTE and Chandra satellites. It is found that, with the exception of the relatively inconspicuous but scientifically very important signal originating from the initial proper gamma-ray burst (P-GRB), all the other spikes and time variabilities can be explained by the interaction of the accelerated-baryonic-matter pulse with inhomogeneities in the interstellar matter. This can be demonstrated by using the RSTT paradigm as well as the IBS paradigm to trace a typical spike observed in arrival time back to the corresponding one in the laboratory time. Using these paradigms, the identification of the physical nature of the time variability of the GRBs can be made most convincingly. The dependence of a) the intensities of the afterglow, b) the spike amplitudes and c) the actual time structure on the Lorentz gamma factor of the accelerated-baryonic-matter pulse is made explicit. In principle it is possible to read off from the spike structure the detailed density contrast of the interstellar medium in the host galaxy, even at very high redshift

  2. Selecting sagebrush seed sources for restoration in a variable climate: ecophysiological variation among genotypes

    Science.gov (United States)

    Germino, Matthew J.

    2012-01-01

    Big sagebrush (Artemisia tridentata) communities dominate a large fraction of the United States and provide critical habitat for a number of wildlife species of concern. Loss of big sagebrush due to fire followed by poor restoration success continues to reduce ecological potential of this ecosystem type, particularly in the Great Basin. Choice of appropriate seed sources for restoration efforts is currently unguided due to knowledge gaps on genetic variation and local adaptation as they relate to a changing landscape. We are assessing ecophysiological responses of big sagebrush to climate variation, comparing plants that germinated from ~20 geographically distinct populations of each of the three subspecies of big sagebrush. Seedlings were previously planted into common gardens by US Forest Service collaborators Drs. B. Richardson and N. Shaw, (USFS Rocky Mountain Research Station, Provo, Utah and Boise, Idaho) as part of the Great Basin Native Plant Selection and Increase Project. Seed sources spanned all states in the conterminous Western United States. Germination, establishment, growth and ecophysiological responses are being linked to genomics and foliar palatability. New information is being produced to aid choice of appropriate seed sources by Bureau of Land Management and USFS field offices when they are planning seed acquisitions for emergency post-fire rehabilitation projects while considering climate variability and wildlife needs.

  3. Resiliency and subjective health assessment. Moderating role of selected psychosocial variables

    Directory of Open Access Journals (Sweden)

    Michalina Sołtys

    2015-12-01

    Background: Resiliency is defined as a relatively permanent personality trait which may be assigned to the category of health resources. The aim of this study was to determine the conditions in which resiliency constitutes a significant health resource (moderation), thereby broadening knowledge of the specifics of the relationship between resiliency and subjective health assessment. Participants and procedure: The study included 142 individuals. In order to examine the level of resiliency, the Assessment Resiliency Scale (SPP-25) by N. Ogińska-Bulik and Z. Juczyński was used. Participants evaluated their subjective health state by means of an analogue-visual scale. Additionally, the following moderating variables were controlled: sex, objective health status, having a partner, professional activity and age. These data were obtained by personal survey. Results: The results confirmed the relationship between resiliency and subjective health assessment. Multiple regression analysis revealed that sex, having a partner and professional activity are significant moderators of the association between level of resiliency and subjective health evaluation. However, statistically significant interaction effects with health status and age as moderators were not observed. Conclusions: Resiliency is associated with subjective health assessment among adults, and selected socio-demographic features (such as sex, having a partner and professional activity) moderate this relationship. This confirms the significant role of resiliency as a health resource and is a reason to emphasize the benefits of enhancing the potential of individuals for their psychophysical wellbeing. However, the research requires replication in a more homogeneous sample.

  4. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.
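
    The paper's own approach uses mixture priors with MCMC for posterior inference. As a lighter-weight, related illustration of letting a Gaussian process expose variable relevance, the sketch below fits a GP with automatic relevance determination (one length-scale per predictor) in scikit-learn and ranks predictors by inverse length-scale. This is a stand-in technique, not the authors' method; the data and kernel settings are assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)

    # Toy data: only the first two of five predictors influence the response.
    X = rng.normal(size=(120, 5))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=120)

    # Anisotropic RBF kernel: one length-scale per predictor (ARD).
    kernel = RBF(length_scale=np.ones(5)) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    length_scales = gpr.kernel_.k1.length_scale
    relevance = 1.0 / length_scales          # short length-scale -> relevant predictor
    for j, r in enumerate(relevance):
        print(f"x{j}: relevance {r:.3f}")
    ```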

  5. Relationship of Powder Feedstock Variability to Microstructure and Defects in Selective Laser Melted Alloy 718

    Science.gov (United States)

    Smith, T. M.; Kloesel, M. F.; Sudbrack, C. K.

    2017-01-01

    Powder-bed additive manufacturing processes use fine powders to build parts layer by layer. For selective laser melted (SLM) Alloy 718, the powders that are available off-the-shelf are in the 10-45 or 15-45 micron size range. A comprehensive investigation of sixteen powders from these typical ranges and two off-nominal-sized powders is underway to gain insight into the impact of feedstock on the processing, durability and performance of 718 SLM space-flight hardware. This talk emphasizes one aspect of this work: the impact of powder variability on the microstructure and defects observed in the as-fabricated and fully heat-treated material, where lab-scale components were built using vendor-recommended parameters. These typical powders exhibit variation in composition, percentage of fines, roughness, morphology and particle size distribution. How these differences relate to melt-pool size, porosity, grain structure, precipitate distributions, and inclusion content will be presented and discussed in the context of build quality and powder acceptance.

  6. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods and in particular random forests (RFs) are a promising alternative to standard single SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score for several runs of RF and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
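
    A rough sketch of the r2VIM idea using scikit-learn rather than the original implementation: run several random forests with different seeds, compute permutation importances (which can be negative), scale each run's importances by the magnitude of its most negative score, and keep variables whose relative importance exceeds a factor in every run. The data, thresholds, and the use of held-out permutation importance here are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Toy "GWAS-like" data: 300 samples, 50 variables, only variables 0 and 1 informative.
    X = rng.integers(0, 3, size=(300, 50)).astype(float)
    y = ((X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=300)) > 2.5).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

    factor, n_runs = 1.0, 5
    selected_in_all = np.ones(X.shape[1], dtype=bool)

    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X_tr, y_tr)
        imp = permutation_importance(rf, X_te, y_te, n_repeats=5,
                                     random_state=seed).importances_mean
        # Relative importance: scale by the magnitude of the most negative (null) score.
        null_level = abs(imp.min()) if imp.min() < 0 else np.finfo(float).eps
        rel = imp / null_level
        selected_in_all &= rel >= factor

    print("variables selected in every run:", np.flatnonzero(selected_in_all))
    ```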

  7. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting

    Directory of Open Access Journals (Sweden)

    Robert Suchting

    2018-05-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression, specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for

  8. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    Science.gov (United States)

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression; specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for
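
    A compact sketch of the core fitting step, component-wise L2 gradient boosting, is given below (plain numpy, synthetic data, learning rate and iteration count chosen arbitrarily); the study's mboost-style implementation and the subsequent backward-elimination step are not reproduced.

        # Component-wise L2 boosting: at each iteration fit a univariate least-squares
        # base learner to the current residuals for every predictor, keep only the one
        # that reduces the residual sum of squares most, and take a small step toward it.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 47, 20
        X = rng.standard_normal((n, p))
        beta_true = np.zeros(p); beta_true[:3] = [1.5, -1.0, 0.8]
        y = X @ beta_true + rng.standard_normal(n)

        nu, n_iter = 0.1, 200              # illustrative learning rate and budget
        coef = np.zeros(p)                 # accumulated coefficients
        resid = y - y.mean()               # boosting starts from the mean

        for _ in range(n_iter):
            # Univariate OLS slope of the residuals on each predictor
            slopes = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
            rss = ((resid[:, None] - X * slopes) ** 2).sum(axis=0)
            j = rss.argmin()               # best-fitting component this iteration
            coef[j] += nu * slopes[j]
            resid -= nu * slopes[j] * X[:, j]

        selected = np.flatnonzero(coef)    # predictors ever chosen along the boosting path
        print("selected predictors:", selected)
        print("nonzero coefficients:", coef[selected].round(3))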

  9. Online Monitoring of Copper Damascene Electroplating Bath by Voltammetry: Selection of Variables for Multiblock and Hierarchical Chemometric Analysis of Voltammetric Data

    Directory of Open Access Journals (Sweden)

    Aleksander Jaworski

    2017-01-01

    Full Text Available The Real Time Analyzer (RTA) utilizing DC- and AC-voltammetric techniques is an in situ, online monitoring system that provides a complete chemical analysis of different electrochemical deposition solutions. The RTA employs multivariate calibration when predicting concentration parameters from a multivariate data set. Although the hierarchical and multiblock Principal Component Regression- (PCR-) and Partial Least Squares- (PLS-) based methods can handle data sets even when the number of variables significantly exceeds the number of samples, it can be advantageous to reduce the number of variables to obtain improvement of the model predictions and better interpretation. This presentation focuses on the introduction of a multistep, rigorous method of data-selection-based Least Squares Regression, Simple Modeling of Class Analogy modeling power, and, as a novel application in electroanalysis, Uninformative Variable Elimination by PLS and by PCR, Variable Importance in the Projection coupled with PLS, Interval PLS, Interval PCR, and Moving Window PLS. Selection criteria of the optimum decomposition technique for the specific data are also demonstrated. The chief goal of this paper is to introduce to the community of electroanalytical chemists numerous variable selection methods which are well established in spectroscopy and can be successfully applied to voltammetric data analysis.
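
    As an illustration of one of the listed techniques, the following sketch computes Variable Importance in the Projection (VIP) scores from a fitted PLS model using the common textbook formula; the data are synthetic and the paper's multiblock/hierarchical setting is not reproduced.

        # VIP from a PLS regression model: score each predictor by how much of the
        # y-variance its weights explain across the PLS components; VIP > 1 is a
        # common (heuristic) selection cut-off.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        X = rng.standard_normal((60, 30))                  # synthetic "voltammograms"
        y = X[:, :4] @ np.array([2.0, -1.0, 1.5, 0.5]) + 0.1 * rng.standard_normal(60)

        pls = PLSRegression(n_components=3).fit(X, y)
        T = pls.transform(X)               # X scores, shape (n_samples, n_components)
        W = pls.x_weights_                 # X weights, shape (n_features, n_components)
        Q = pls.y_loadings_                # y loadings, shape (1, n_components)

        # Variance of y explained by each component
        ss_y = (T ** 2).sum(axis=0) * (Q ** 2).sum(axis=0)
        w_norm = (W / np.linalg.norm(W, axis=0)) ** 2
        p = X.shape[1]
        vip = np.sqrt(p * (w_norm @ ss_y) / ss_y.sum())
        print("predictors with VIP > 1:", np.where(vip > 1)[0])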

  10. NUMBER OF SUCCESSIVE CYCLES NECESSARY TO ACHIEVE STABILITY OF SELECTED GROUND REACTION FORCE VARIABLES DURING CONTINUOUS JUMPING

    Directory of Open Access Journals (Sweden)

    James M.W. Brownjohn

    2009-12-01

    Full Text Available Because of inherent variability in all human cyclical movements, such as walking, running and jumping, data collected across a single cycle might be atypical and potentially unable to represent an individual's generalized performance. The study described here was designed to determine the number of successive cycles of continuous, repetitive countermovement jumping which a test subject should perform in a single experimental session to achieve stability of the mean of the corresponding continuously measured ground reaction force (GRF) variables. Seven vertical GRF variables (period of jumping cycle, duration of contact phase, peak force amplitude and its timing, average rate of force development, average rate of force relaxation and impulse) were extracted on a cycle-by-cycle basis from vertical jumping force time histories generated by twelve participants who were jumping in response to regular electronic metronome beats in the range 2-2.8 Hz. Stability of the selected GRF variables across successive jumping cycles was examined for three jumping rates (2, 2.4 and 2.8 Hz) using two statistical methods: intra-class correlation (ICC) analysis and the segmental averaging technique (SAT). Results of the ICC analysis indicated that an average of four successive cycles (mean 4.5 ± 2.7 for 2 Hz; 3.9 ± 2.6 for 2.4 Hz; 3.3 ± 2.7 for 2.8 Hz) were necessary to achieve maximum ICC values. Except for jumping period, maximum ICC values took values from 0.592 to 0.991 and all were significantly (p < 0.05) different from zero. Results of the SAT revealed that an average of ten successive cycles (mean 10.5 ± 3.5 for 2 Hz; 9.2 ± 3.8 for 2.4 Hz; 9.0 ± 3.9 for 2.8 Hz) were necessary to achieve stability of the selected parameters using criteria previously reported in the literature. Using 10 reference trials, the SAT required standard deviation criterion values of 0.49, 0.41 and 0.55 for 2 Hz, 2.4 Hz and 2.8 Hz jumping rates, respectively, in order to approximate

  11. An experiment on selecting most informative variables in socio-economic data

    Directory of Open Access Journals (Sweden)

    L. Jenkins

    2014-01-01

    Full Text Available In many studies where data are collected on several variables, there is a motivation to find if fewer variables would provide almost as much information. Variance of a variable about its mean is the common statistical measure of information content, and that is used here. We are interested in whether the variability in one variable is sufficiently correlated with that in one or more of the other variables that the first variable is redundant. We wish to find one or more ‘principal variables’ that sufficiently reflect the information content in all the original variables. The paper explains the method of principal variables and reports experiments using the technique to see if just a few variables are sufficient to reflect the information in 11 socioeconomic variables on 130 countries from a World Bank (WB) database. While the method of principal variables is highly successful in a statistical sense, the WB data varies greatly from year to year, demonstrating that fewer variables would be inadequate for this data.
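
    One simple greedy variant of the principal-variables idea can be sketched as follows: repeatedly add the variable whose inclusion best explains the total variance of all standardized variables. The data below are synthetic stand-ins for the World Bank indicators, and the greedy criterion and 90% stopping rule are illustrative rather than taken from the paper.

        # Greedy "principal variables" selection: at each step add the variable that
        # maximizes the fraction of total (standardized) variance of all variables
        # explained by least-squares regression on the variables selected so far.
        import numpy as np

        rng = np.random.default_rng(2)
        latent = rng.standard_normal((130, 3))
        loadings = rng.standard_normal((3, 11))
        Z = latent @ loadings + 0.3 * rng.standard_normal((130, 11))
        Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)          # standardize columns

        def explained_fraction(Z, subset):
            """Fraction of total variance of Z explained by OLS on Z[:, subset]."""
            Zs = Z[:, subset]
            coef, *_ = np.linalg.lstsq(Zs, Z, rcond=None)
            resid = Z - Zs @ coef
            return 1.0 - resid.var(axis=0).sum() / Z.var(axis=0).sum()

        selected, remaining = [], list(range(Z.shape[1]))
        while remaining and (not selected or explained_fraction(Z, selected) < 0.90):
            best = max(remaining, key=lambda j: explained_fraction(Z, selected + [j]))
            selected.append(best)
            remaining.remove(best)

        print("principal variables:", selected,
              "explained fraction:", round(explained_fraction(Z, selected), 3))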

  12. Time-variable gravity fields derived from GPS tracking of Swarm

    Czech Academy of Sciences Publication Activity Database

    Bezděk, Aleš; Sebera, Josef; da Encarnacao, J.T.; Klokočník, Jaroslav

    2016-01-01

    Vol. 205, No. 3 (2016), pp. 1665-1669 ISSN 0956-540X R&D Projects: GA MŠk LG14026; GA ČR GA13-36843S Institutional support: RVO:67985815 Keywords : satellite geodesy * time variable gravity * global change from geodesy Subject RIV: DD - Geochemistry Impact factor: 2.414, year: 2016

  13. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    Science.gov (United States)

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  14. Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales

    Science.gov (United States)

    Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.

    2009-01-01

    The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…

  15. Managing anthelmintic resistance-Variability in the dose of drug reaching the target worms influences selection for resistance?

    Science.gov (United States)

    Leathwick, Dave M; Luo, Dongwen

    2017-08-30

    The concentration profile of anthelmintic reaching the target worms in the host can vary between animals even when administered doses are tailored to individual liveweight at the manufacturer's recommended rate. Factors contributing to variation in drug concentration include weather, breed of animal, formulation and the route by which drugs are administered. The implications of this variability for the development of anthelmintic resistance were investigated using Monte-Carlo simulation. A model framework was established where 100 animals each received a single drug treatment. The 'dose' of drug allocated to each animal (i.e. the concentration-time profile of drug reaching the target worms) was sampled at random from a distribution of doses with mean m and standard deviation s. For each animal the dose of drug was used in conjunction with pre-determined dose-response relationships, representing single and poly-genetic inheritance, to calculate efficacy against susceptible and resistant genotypes. These data were then used to calculate the overall change in resistance gene frequency for the worm population as a result of the treatment. Values for m and s were varied to reflect differences in both mean dose and the variability in dose, and for each combination of these, 100,000 simulations were run. The resistance gene frequency in the population after treatment increased as m decreased and as s increased. This occurred for both single and poly-gene models and for different levels of dominance (survival under treatment) of the heterozygote genotype(s). The results indicate that factors which result in lower and/or more variable concentrations of active reaching the target worms are more likely to select for resistance. The potential of different routes of anthelmintic administration to play a role in the development of anthelmintic resistance is discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
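
    A toy Monte-Carlo sketch in the spirit of this framework is shown below: doses are drawn from a Normal(m, s) distribution, genotype-specific logistic dose-response curves give survival, and the post-treatment resistance-allele frequency is averaged over simulations. All dose-response parameters, allele frequencies and simulation counts are invented for illustration and do not come from the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        def survival(dose, ld50, slope=4.0):
            """Logistic dose-response: probability that a worm survives the given dose."""
            return 1.0 / (1.0 + np.exp(slope * (dose - ld50)))

        def post_treatment_r_frequency(m, s, p_r=0.01, n_animals=100, n_sim=2_000):
            ld50 = {"SS": 0.6, "RS": 0.9, "RR": 1.4}      # hypothetical sensitivities
            geno = {"SS": (1 - p_r) ** 2, "RS": 2 * p_r * (1 - p_r), "RR": p_r ** 2}
            freqs = np.empty(n_sim)
            for k in range(n_sim):
                # one random "dose" (concentration profile) per animal
                doses = rng.normal(m, s, size=n_animals).clip(min=0.0)
                surv = {g: geno[g] * survival(doses, ld50[g]) for g in ld50}
                total = surv["SS"] + surv["RS"] + surv["RR"]
                # resistance-allele frequency among survivors, averaged over animals
                freqs[k] = np.mean((surv["RR"] + 0.5 * surv["RS"]) / total)
            return freqs.mean()

        for m, s in [(1.0, 0.05), (1.0, 0.3), (0.8, 0.3)]:
            print(f"mean dose {m}, SD {s}: mean post-treatment R frequency "
                  f"{post_treatment_r_frequency(m, s):.4f}")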

  16. A selective review of the first 20 years of instrumental variables models in health-services research and medicine.

    Science.gov (United States)

    Cawley, John

    2015-01-01

    The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citation counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly-used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments have been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.
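
    The workhorse IV estimator, two-stage least squares (2SLS), can be illustrated in a few lines on simulated data with a confounded treatment and a valid instrument; the variable names are generic and not tied to any study in the review.

        # Two-stage least squares (2SLS): stage 1 regresses the endogenous treatment on
        # the instrument, stage 2 regresses the outcome on the predicted treatment.
        # With a confounder, naive OLS is biased while 2SLS recovers the causal effect.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 5_000
        confounder = rng.standard_normal(n)
        instrument = rng.standard_normal(n)                   # e.g. distance to provider
        treatment = 0.8 * instrument + 1.0 * confounder + rng.standard_normal(n)
        outcome = 2.0 * treatment - 1.5 * confounder + rng.standard_normal(n)

        def ols(x, y):
            """OLS coefficients with an intercept prepended."""
            design = np.column_stack([np.ones(len(x)), x])
            return np.linalg.lstsq(design, y, rcond=None)[0]

        naive = ols(treatment, outcome)[1]

        # Stage 1: predict treatment from the instrument
        stage1 = ols(instrument, treatment)
        treatment_hat = stage1[0] + stage1[1] * instrument
        # Stage 2: regress outcome on the predicted treatment
        iv_estimate = ols(treatment_hat, outcome)[1]

        print(f"naive OLS estimate: {naive:.3f} (biased by confounding)")
        print(f"2SLS IV estimate:   {iv_estimate:.3f} (close to the true effect 2.0)")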

  17. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, marked as Tc) when applying a double precision computation of a variable parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
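
    The underlying idea can be sketched by comparing a double-precision trajectory against a high-precision reference, assuming mpmath is available for the reference orbit; the parameter schedule, tolerance and iteration budget below are illustrative and not those of the paper.

        # Estimate a "reliable computation time" for the logistic map: iterate in
        # ordinary double precision and in 50-digit precision from the same initial
        # value, and report the first step where the two trajectories differ by more
        # than a tolerance.
        from mpmath import mp, mpf

        mp.dps = 50                     # 50 significant digits for the reference orbit

        def reliable_steps(x0, r0=3.99, drift=1e-6, tol=1e-3, max_steps=10_000):
            x_double, x_ref = float(x0), mpf(str(x0))
            for n in range(1, max_steps + 1):
                r = r0 - drift * n                       # slowly varying parameter
                x_double = r * x_double * (1.0 - x_double)
                x_ref = mpf(r) * x_ref * (1 - x_ref)
                if abs(x_double - float(x_ref)) > tol:
                    return n                             # double precision no longer reliable
            return max_steps

        for x0 in (0.1, 0.2, 0.3, 0.4):
            print(f"x0 = {x0}: reliable for about {reliable_steps(x0)} iterations")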

  18. Predictor variables for half marathon race time in recreational female runners

    OpenAIRE

    Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas; Lepers, Romuald

    2011-01-01

    INTRODUCTION: The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except the half marathon distance. OBJECTIVE: To investigate whether anthropometry characteristics or training practices were related to race time in 42 recreational female half marathoners to determine the predictor variables of half-marathon race time and to inform future novice female half marathoners. METHODS: Observational field study at the ‘Half ...

  19. A study of applying variable valve timing to highly rated diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Stone, C R; Leonard, H J [comps.; Brunel Univ., Uxbridge (United Kingdom); Charlton, S J [comp.; Bath Univ. (United Kingdom)

    1992-10-01

    The main objective of the research was to use Simulation Program for Internal Combustion Engines (SPICE) to quantify the potential offered by Variable Valve Timing (VVT) in improving engine performance. A model has been constructed of a particular engine using SPICE. The model has been validated with experimental data, and it has been shown that accurate predictions are made when the valve timing is changed. (author)

  20. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment

    OpenAIRE

    Bunce, D; Haynes, BI; Lord, SR; Gschwind, YJ; Kochan, NA; Reppermund, S; Brodaty, H; Sachdev, PS; Delbaere, K

    2017-01-01

    Background: Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI)...

  1. Error Analysis of a Fractional Time-Stepping Technique for Incompressible Flows with Variable Density

    KAUST Repository

    Guermond, J.-L.; Salgado, Abner J.

    2011-01-01

    In this paper we analyze the convergence properties of a new fractional time-stepping technique for the solution of the variable density incompressible Navier-Stokes equations. The main feature of this method is that, contrary to other existing algorithms, the pressure is determined by just solving one Poisson equation per time step. First-order error estimates are proved, and stability of a formally second-order variant of the method is established. © 2011 Society for Industrial and Applied Mathematics.

  2. Bounds of Double Integral Dynamic Inequalities in Two Independent Variables on Time Scales

    Directory of Open Access Journals (Sweden)

    S. H. Saker

    2011-01-01

    Full Text Available Our aim in this paper is to establish some explicit bounds of the unknown function in a certain class of nonlinear dynamic inequalities in two independent variables on time scales which are unbounded above. These on the one hand generalize and on the other hand furnish a handy tool for the study of qualitative as well as quantitative properties of solutions of partial dynamic equations on time scales. Some examples are considered to demonstrate the applications of the results.

  3. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    Science.gov (United States)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal

  4. Essential Oil Variability and Biological Activities of Tetraclinis articulata (Vahl) Mast. Wood According to the Extraction Time.

    Science.gov (United States)

    Djouahri, Abderrahmane; Saka, Boualem; Boudarene, Lynda; Baaliouamer, Aoumeur

    2016-12-01

    In the present work, a study of the hydrodistillation (HD) and microwave-assisted hydrodistillation (MAHD) kinetics of essential oil (EO) extracted from Tetraclinis articulata (Vahl) Mast. wood was conducted, in order to assess the impact of extraction time and technique on chemical composition and biological activities. Gas chromatography (GC) and GC/mass spectrometry analyses showed significant differences between the extracted EOs, where each family class or component presents a specific kinetic according to extraction time and technique, especially for the major components: camphene, linalool, cedrol, carvacrol and α-acorenol. Furthermore, our findings showed high variability for both antioxidant and anti-inflammatory activities, where each activity has a specific effect according to extraction time and technique. The highlighted variability reflects the high impact of extraction time and technique on chemical composition and biological activities, leading to the conclusion that the EOs to be investigated should be selected carefully depending on extraction time and technique, in order to isolate the bioactive components or to obtain the best quality of EO in terms of biological activities and preventive effects in food. © 2016 Wiley-VHCA AG, Zurich, Switzerland.

  5. Spontaneous variability of pre-dialysis concentrations of uremic toxins over time in stable hemodialysis patients.

    Directory of Open Access Journals (Sweden)

    Sunny Eloot

    Full Text Available Numerous outcome studies and interventional trials in hemodialysis (HD) patients are based on uremic toxin concentrations determined at one single or a limited number of time points. The reliability of these studies, however, entirely depends on how representative these cross-sectional concentrations are. We therefore investigated the variability of predialysis concentrations of uremic toxins over time. Prospectively collected predialysis serum samples of the midweek session of week 0, 1, 2, 3, 4, 8, 12, and 16 were analyzed for a panel of uremic toxins in stable chronic HD patients (N = 18), while maintaining dialyzer type and dialysis mode during the study period. Concentrations of the analyzed uremic toxins varied substantially between individuals, but also within stable HD patients (intra-patient variability). For urea, creatinine, beta-2-microglobulin, and some protein-bound uremic toxins, the Intra-class Correlation Coefficient (ICC) was higher than 0.7. However, for phosphorus, uric acid, symmetric and asymmetric dimethylarginine, and the protein-bound toxins hippuric acid and indoxyl sulfate, ICC values were below 0.7, implying a concentration variability within the individual patient even exceeding 65% of the observed inter-patient variability. Intra-patient variability may affect the interpretation of the association between a single concentration of certain uremic toxins and outcomes. When performing future outcome and interventional studies with uremic toxins other than those described here, one should quantify their intra-patient variability and take into account that for solutes with a large intra-patient variability, associations could be missed.

  6. Evaluating Varied Label Designs for Use with Medical Devices: Optimized Labels Outperform Existing Labels in the Correct Selection of Devices and Time to Select.

    Directory of Open Access Journals (Sweden)

    Laura Bix

    Full Text Available Effective standardization of medical device labels requires objective study of varied designs. Insufficient empirical evidence exists regarding how practitioners utilize and view labeling. Objective: Measure the effect of graphic elements (boxing information, grouping information, symbol use and color-coding) to optimize a label for comparison with those typical of commercial medical devices. Participants viewed 54 trials on a computer screen. Trials were comprised of two labels that were identical with regard to graphics, but differed in one aspect of information (e.g., one had latex, the other did not). Participants were instructed to select the label along a given criterion (e.g., latex containing) as quickly as possible. Dependent variables were binary (correct selection) and continuous (time to correct selection). Eighty-nine healthcare professionals were recruited at Association of Surgical Technologists (AST) conferences, and using a targeted e-mail of AST members. Symbol presence, color coding and grouping critical pieces of information all significantly improved selection rates and sped time to correct selection (α = 0.05). Conversely, when critical information was graphically boxed, probability of correct selection and time to selection were impaired (α = 0.05). Subsequently, responses from trials containing optimal treatments (color coded, critical information grouped with symbols) were compared to two labels created based on a review of those commercially available. Optimal labels yielded a significant positive benefit regarding the probability of correct choice (P < 0.0001; LSM; UCL, LCL: 97.3%; 98.4%, 95.5%), as compared to the two labels we created based on commercial designs (92.0%; 94.7%, 87.9% and 89.8%; 93.0%, 85.3%), and time to selection. Our study provides data regarding design factors, namely color coding, symbol use and grouping of critical information, that can be used to significantly enhance the performance of medical device labels.

  7. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use as water demands related to the climate are variable in nature making it difficult to optimize the operation of the water supply system. Urban water demand forecasts (UWD) failing to include meteorological conditions as inputs to the forecast model may produce poor forecasts as they cannot account for the increase/decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions and accounts for relevancy, conditional relevancy, and redundancy from a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
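
    A simplified sketch of this kind of input selection is shown below, using scikit-learn's mutual-information estimator in a greedy relevance-minus-redundancy loop; the paper's criterion is based on conditional mutual information estimated from multivariate densities, which this mRMR-style proxy only approximates, and the data are synthetic.

        # Greedy input-variable selection: relevance = MI(candidate; target),
        # redundancy = mean MI(candidate; already selected inputs). A simplified
        # proxy for conditional-mutual-information selection.
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(5)
        n, p = 1_000, 8
        X = rng.standard_normal((n, p))
        X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)       # redundant copy of X0
        y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.standard_normal(n)

        relevance = mutual_info_regression(X, y, random_state=0)
        selected, remaining = [], list(range(p))

        for _ in range(3):                                      # pick three inputs
            def score(j):
                if not selected:
                    return relevance[j]
                redundancy = mutual_info_regression(X[:, selected], X[:, j],
                                                    random_state=0).mean()
                return relevance[j] - redundancy
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)

        print("selected inputs:", selected)   # expected to prefer X0 and X2 over the copy X1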

  8. Variations in Carabidae assemblages across the farmland habitats in relation to selected environmental variables including soil properties

    Directory of Open Access Journals (Sweden)

    Beáta Baranová

    2018-03-01

    Full Text Available The variations in ground beetle (Coleoptera: Carabidae) assemblages across the three types of farmland habitats, arable land, meadows and woody vegetation, were studied in relation to vegetation cover structure, intensity of agrotechnical interventions and selected soil properties. Material was pitfall trapped in 2010 and 2011 on twelve sites of the agricultural landscape in the Prešov town and its near vicinity, Eastern Slovakia. A total of 14,763 ground beetle individuals were entrapped. Material collection resulted in 92 Carabidae species, with the following six species dominating: Poecilus cupreus, Pterostichus melanarius, Pseudoophonus rufipes, Brachinus crepitans, Anchomenus dorsalis and Poecilus versicolor. The studied habitats differed significantly in the number of entrapped individuals and activity abundance, as well as in the representation of carabids according to their habitat preferences and ability to fly. However, no significant distinction was observed in diversity, evenness or dominance. The environmental variables most significantly affecting species variability of the Carabidae assemblages were soil moisture and herb layer 0-20 cm. Other variables selected by forward selection were intensity of agrotechnical interventions, humus content and shrub vegetation. The remaining soil properties appear to be of only secondary importance for the adult carabids. Environmental variables had the strongest effect on habitat specialists, whereas ground beetles without special requirements on habitat quality seemed to be only slightly affected by the studied environmental variables.

  9. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Full Text Available Abstract Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
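
    The bootstrap stage of such a staged procedure can be sketched as follows, assuming statsmodels for the logistic fits: forward selection by AIC is run on many bootstrap resamples and candidates are ranked by how often they are selected. The data, resample count and candidate pool are illustrative only.

        # Rank candidate predictors by bootstrap selection frequency: on each bootstrap
        # resample run a forward stepwise logistic regression guided by AIC, then count
        # how often every candidate ends up in the chosen model.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n, p = 400, 6
        X = rng.standard_normal((n, p))
        logit_p = 1 / (1 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 3])))
        y = rng.binomial(1, logit_p)

        def forward_aic(X, y):
            """Forward selection of columns of X by AIC for a logistic model."""
            selected, remaining = [], list(range(X.shape[1]))
            best_aic = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).aic
            improved = True
            while improved and remaining:
                improved = False
                aics = {j: sm.Logit(y, sm.add_constant(X[:, selected + [j]])).fit(disp=0).aic
                        for j in remaining}
                j_best = min(aics, key=aics.get)
                if aics[j_best] < best_aic:
                    best_aic = aics[j_best]
                    selected.append(j_best)
                    remaining.remove(j_best)
                    improved = True
            return selected

        counts = np.zeros(p)
        n_boot = 100
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)
            counts[forward_aic(X[idx], y[idx])] += 1

        print("selection frequency per variable:", (counts / n_boot).round(2))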

  10. Numerical Solution of the Time-Dependent Navier–Stokes Equation for Variable Density–Variable Viscosity. Part I

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Xin, H.; Neytcheva, M.

    2015-01-01

    Vol. 20, No. 2 (2015), pp. 232-260 ISSN 1392-6292 Institutional support: RVO:68145535 Keywords : variable density * phase-field model * Navier-Stokes equations * preconditioning * variable viscosity Subject RIV: BA - General Mathematics Impact factor: 0.468, year: 2015 http://www.tandfonline.com/doi/abs/10.3846/13926292.2015.1021395

  11. Awareness of the Faculty Members at Al-Balqa' Applied University to the Concept of Time Management and Its Relation to Some Variables

    Science.gov (United States)

    Sabha, Raed Adel; Al-Assaf, Jamal Abdel-Fattah

    2012-01-01

    The study aims to investigate the extent of time management awareness among the faculty members of Al-Balqa' Applied University, and its relation to some variables. The study was conducted on (150) teachers selected randomly. For achieving the study goals an appropriate instrument has been built up based on the educational literature and…

  12. Application of several variable-valve-timing concepts to an LHR engine

    Science.gov (United States)

    Morel, T.; Keribar, R.; Sawlivala, M.; Hakim, N.

    1987-01-01

    The paper discusses advantages provided by electronically controlled hydraulically activated valves (ECVs) when applied to low heat rejection (LHR) engines. The ECV concept provides additional engine control flexibility by allowing for a variable valve timing as a function of speed and load, or for a given transient condition. The results of a study carried out to assess the benefits that this flexibility can offer to an LHR engine indicated that, when judged on the benefits to BSFC, volumetric efficiency, and peak firing pressure, ECVs would provide only modest benefits in comparison to conventional valve profiles. It is noted, however, that once installed on the engine, the ECVs would permit a whole range of certain more sophisticated variable valve timing strategies not otherwise possible, such as high compression cranking, engine braking, cylinder cutouts, and volumetric efficiency timing with engine speed.

  13. Synthesis of Biochemical Applications on Digital Microfluidic Biochips with Operation Execution Time Variability

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul

    2015-01-01

    that each biochemical operation in an application is characterized by a worst-case execution time (wcet). However, during the execution of the application, due to variability and randomness in biochemical reactions, operations may finish earlier than their wcets, resulting in unexploited slack...... in the schedule. In this paper, we first propose an online synthesis strategy that re-synthesizes the application at runtime when operations experience variability in their execution time, thus exploiting the slack to obtain shorter application completion times. We also propose a quasi-static synthesis strategy...... approaches have been proposed for the synthesis of digital microfluidic biochips, which, starting from a biochemical application and a given biochip architecture, determine the allocation, resource binding, scheduling, placement and routing of the operations in the application. Researchers have assumed

  14. Selective nature and inherent variability of interrill erosion across prolonged rainfall simulation

    Science.gov (United States)

    Hu, Y.; Kuhn, N. J.; Fister, W.

    2012-04-01

    Sediment of interrill erosion has been generally recognized to be selectively enriched with soil organic carbon (SOC) and fine fractions (clay/silt-sized particles or aggregates) in comparison to source area soil. Limited kinetic energy and lack of concentrated runoff are the dominant factors causing selective detachment and transportation. Although enrichment ratios of SOC (ERsoc) in eroded sediment were generally reported > 1, the values varied widely. Causal factors of variation, such as initial soil properties, rainfall properties and experimental conditions, have been extensively discussed. But less attention was directed to the potential influence of prolonged rainfall time on the temporal pattern of ERsoc. Conservation of mass dictates that ERsoc must be balanced by a decline in the source material which should also lead to a reduced or even negative ERsoc in sediment over time. Besides, the stabilizing effects of structural crust on reducing erosional variation, and the unavoidable variations of erosional response induced by the inherent complexity of interrill erosion, have scarcely been integrated. Moreover, during a prolonged rainfall event surface roughness evolves and affects the movement of eroded aggregates and mineral particles. In this study, two silt loams from Möhlin, Switzerland, organically (OS) and conventionally farmed (CS), were exposed to simulated rainfall of 30 mm h-1 for up to 6 hours. Round donut-flumes with a confined eroding area (1845 cm2) and limited transporting distance (20 cm) were used. Sediments, runoff and subsurface flow were collected in intervals of 30 min. Loose aggregates left on the eroded soil surface, crusts and the soil underneath the crusts were collected after the experiment. All the samples were analyzed for total organic carbon (TOC) content, and texture. Laser scanning of soil surface was applied before and after the rainfall event. The whole experiment was repeated 10 times. Results from this study showed

  15. Histogram bin width selection for time-dependent Poisson processes

    International Nuclear Information System (INIS)

    Koyama, Shinsuke; Shinomoto, Shigeru

    2004-01-01

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method

  16. Histogram bin width selection for time-dependent Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Shinsuke; Shinomoto, Shigeru [Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)

    2004-07-23

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
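
    A closely related, easily implemented rule is the Shimazaki-Shinomoto cost function for time-histogram bin width, which minimizes C(delta) = (2*mean - variance)/delta^2 over the bin counts; the sketch below applies it to a simulated sinusoidally modulated Poisson process and is a companion illustration rather than a reproduction of the MISE analysis in these records.

        import numpy as np

        rng = np.random.default_rng(7)
        T, lam_max = 20.0, 30.0                       # observation window [0, T], peak rate

        # Thinning: simulate an inhomogeneous Poisson process with rate
        # lambda(t) = lam_max/2 * (1 + sin(2*pi*t)).
        candidates = rng.uniform(0, T, size=rng.poisson(lam_max * T))
        rate = 0.5 * lam_max * (1 + np.sin(2 * np.pi * candidates))
        events = np.sort(candidates[rng.uniform(0, lam_max, size=candidates.size) < rate])

        widths = np.linspace(0.02, 2.0, 200)
        costs = []
        for delta in widths:
            edges = np.arange(0, T + delta, delta)
            counts, _ = np.histogram(events, bins=edges)
            # Shimazaki-Shinomoto cost: (2 * mean - biased variance) / delta^2
            costs.append((2 * counts.mean() - counts.var()) / delta ** 2)

        best = widths[int(np.argmin(costs))]
        print(f"optimal bin width ~ {best:.3f} s for {events.size} events")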

  17. Selection and scheduling of jobs with time-dependent duration

    African Journals Online (AJOL)

    †Department of Logistics, University of Stellenbosch, Private Bag X1, ... station must apply to occupy the test station and sometimes may even choose ... are considered where the job duration and cost are dependent on the time and sequence.

  18. Continuous Time Portfolio Selection under Conditional Capital at Risk

    Directory of Open Access Journals (Sweden)

    Gordana Dmitrasinovic-Vidovic

    2010-01-01

    Full Text Available Portfolio optimization with respect to different risk measures is of interest to both practitioners and academics. For there to be a well-defined optimal portfolio, it is important that the risk measure be coherent and quasiconvex with respect to the proportion invested in risky assets. In this paper we investigate one such measure—conditional capital at risk—and find the optimal strategies under this measure, in the Black-Scholes continuous time setting, with time dependent coefficients.

  19. Selection of the initial conditions in the tunneling time definition

    International Nuclear Information System (INIS)

    Zajchenko, A.K.

    2004-01-01

    The necessity of changing the initial conditions in the Olkhovsky-Recami definition of the tunneling time is justified. New initial conditions are proposed which adequately take into account the irreversibility of wave-packet spreading. The expression for the tunneling time with the new initial conditions is reduced to a form convenient for performing the calculations and controlling their accuracy.

  20. Network-based group variable selection for detecting expression quantitative trait loci (eQTL)

    Directory of Open Access Journals (Sweden)

    Zhang Xuegong

    2011-06-01

    Full Text Available Abstract Background Analysis of expression quantitative trait loci (eQTL) aims to identify the genetic loci associated with the expression level of genes. Penalized regression with a proper penalty is suitable for the high-dimensional biological data. Its performance should be enhanced when we incorporate biological knowledge of gene expression network and linkage disequilibrium (LD) structure between loci in high-noise background. Results We propose a network-based group variable selection (NGVS) method for QTL detection. Our method simultaneously maps highly correlated expression traits sharing the same biological function to marker sets formed by LD. By grouping markers, complex joint activity of multiple SNPs can be considered and the dimensionality of eQTL problem is reduced dramatically. In order to demonstrate the power and flexibility of our method, we used it to analyze two simulations and a mouse obesity and diabetes dataset. We considered the gene co-expression network, grouped markers into marker sets and treated the additive and dominant effect of each locus as a group: as a consequence, we were able to replicate results previously obtained on the mouse linkage dataset. Furthermore, we observed several possible sex-dependent loci and interactions of multiple SNPs. Conclusions The proposed NGVS method is appropriate for problems with high-dimensional data and high-noise background. On eQTL problem it outperforms the classical Lasso method, which does not consider biological knowledge. Introduction of proper gene expression and loci correlation information makes detecting causal markers more accurate. With reasonable model settings, NGVS can lead to novel biological findings.

  1. Bayesian nonparametric variable selection as an exploratory tool for discovering differentially expressed genes.

    Science.gov (United States)

    Shahbaba, Babak; Johnson, Wesley O

    2013-05-30

    High-throughput scientific studies involving no clear a priori hypothesis are common. For example, a large-scale genomic study of a disease may examine thousands of genes without hypothesizing that any specific gene is responsible for the disease. In these studies, the objective is to explore a large number of possible factors (e.g., genes) in order to identify a small number that will be considered in follow-up studies that tend to be more thorough and on smaller scales. A simple, hierarchical, linear regression model with random coefficients is assumed for case-control data that correspond to each gene. The specific model used will be seen to be related to a standard Bayesian variable selection model. Relatively large regression coefficients correspond to potential differences in responses for cases versus controls and thus to genes that might 'matter'. For large-scale studies, and using a Dirichlet process mixture model for the regression coefficients, we are able to find clusters of regression effects of genes with increasing potential effect or 'relevance', in relation to the outcome of interest. One cluster will always correspond to genes whose coefficients are in a neighborhood that is relatively close to zero and will be deemed least relevant. Other clusters will correspond to increasing magnitudes of the random/latent regression coefficients. Using simulated data, we demonstrate that our approach could be quite effective in finding relevant genes compared with several alternative methods. We apply our model to two large-scale studies. The first study involves transcriptome analysis of infection by human cytomegalovirus. The second study's objective is to identify differentially expressed genes between two types of leukemia. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
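
    A minimal sketch of the time-invariant ancestor of this procedure is shown below: fit an AR model to the series, then rebuild surrogates by driving the fitted model with randomly resampled residuals. Extending the coefficients to be time-varying, as the paper does, is not shown, and the example series is synthetic rather than an HRV recording.

        # AR-model surrogate generation: fit an AR(p) model by least squares, then feed
        # shuffled residuals back through the fitted model to produce surrogate series
        # that preserve the linear structure but destroy any nonlinear dynamics.
        import numpy as np

        rng = np.random.default_rng(8)

        def fit_ar(x, order):
            """Least-squares AR(order) fit; returns coefficients and residuals."""
            rows = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
            target = x[order:]
            coeffs, *_ = np.linalg.lstsq(rows, target, rcond=None)
            return coeffs, target - rows @ coeffs

        def ar_surrogate(x, order=8):
            coeffs, residuals = fit_ar(x, order)
            shuffled = rng.permutation(residuals)
            surrogate = list(x[:order])                  # seed with the original start
            for e in shuffled:
                past = surrogate[-1:-order - 1:-1]       # most recent value first
                surrogate.append(float(np.dot(coeffs, past)) + e)
            return np.array(surrogate)

        # Example: an AR(2) series standing in for a stationary beat-to-beat series
        x = np.zeros(500)
        for t in range(2, 500):
            x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

        surrogates = [ar_surrogate(x) for _ in range(19)]   # e.g. 19 surrogates for a 5% test
        print("original sd:", x.std().round(3),
              "surrogate sd (mean):", np.mean([s.std() for s in surrogates]).round(3))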

  3. Selection and scheduling of jobs with time-dependent duration

    Directory of Open Access Journals (Sweden)

    DM Seegmuller

    2007-06-01

    Full Text Available In this paper two mathematical programming models, both with multiple objective functions, are proposed to solve four related categories of job scheduling problems. All four of these categories have the property that the duration of the jobs is dependent on the time of implementation and in some cases the preceding job. Furthermore, some jobs (restricted to subsets of the total pool of jobs can, to different extents, run in parallel. In addition, not all the jobs need necessarily be implemented during the given time period.

  4. Intraindividual variability in reaction time before and after neoadjuvant chemotherapy in women diagnosed with breast cancer.

    Science.gov (United States)

    Yao, Christie; Rich, Jill B; Tirona, Kattleya; Bernstein, Lori J

    2017-12-01

    Women treated with chemotherapy for breast cancer experience subtle cognitive deficits. Research has focused on mean performance level, yet recent work suggests that within-person variability in reaction time performance may underlie cognitive symptoms. We examined intraindividual variability (IIV) in women diagnosed with breast cancer and treated with neoadjuvant chemotherapy. Patients (n = 28) were assessed at baseline before chemotherapy (T1), approximately 1 month after chemotherapy but prior to surgery (T2), and after surgery about 9 months post chemotherapy (T3). Healthy women of similar age and education (n = 20) were assessed at comparable time intervals. Using a standardized regression-based approach, we examined changes in mean performance level and IIV (eg, intraindividual standard deviation) on a Stroop task and self-report measures of cognitive function from T1 to T2 and T1 to T3. At T1, women with breast cancer were more variable than controls as task complexity increased. Change scores from T1 to T2 were similar between groups on all Stroop performance measures. From T1 to T3, controls improved more than women with breast cancer. IIV was more sensitive than mean reaction time in capturing group differences. Additional analyses showed increased cognitive symptoms reported by women with breast cancer from T1 to T3. Specifically, change in language symptoms was positively correlated with change in variability. Women with breast cancer declined in attention and inhibitory control relative to pretreatment performance. Future studies should include measures of variability, because they are an important sensitive indicator of change in cognitive function. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    fine-spatial scales (sub-meter to 10-meter) shows greater temperature variability with warmer mean temperatures. This is inconsistent with the inherent assumption made in current species distribution models that fine-scale variability is static, implying that current projections of future species ranges may be biased -- the direction and magnitude requiring further study. While we focus our findings on the cross-scaling characteristics of temporal and spatial variability, we also compare the mean-variance relationship between 1) experimental climate manipulations and observed conditions and 2) temporal versus spatial variance, i.e., variability in a time-series at one location vs. variability across a landscape at a single time. The former informs the rich debate concerning the ability to experimentally mimic a warmer future. The latter informs space-for-time study design and analyses, as well as species persistence via a combined spatiotemporal probability of suitable future habitat.

  6. DRAM selection and configuration for real-time mobile systems

    NARCIS (Netherlands)

    Gomony, M.D.; Weis, C.; Akesson, K.B.; Wehn, N.; Goossens, K.G.W.

    2012-01-01

    The performance and power consumption of mobile DRAMs (LPDDRs) depend on the configuration of system-level parameters, such as operating frequency, interface width, request size, and memory map. In mobile systems running both realtime and non-real-time applications, the memory configuration must

  7. multivariate time series modeling of selected childhood diseases

    African Journals Online (AJOL)

    2016-06-17

    Jun 17, 2016 ... KEYWORDS: Multivariate Approach, Pre-whitening, Vector Time Series, .... Alternatively, the process may be written in mean adjusted form as .... The AIC criterion asymptotically overestimates the order with positive probability, whereas the BIC and HQC criteria ... has the same asymptotic distribution as Q.

  8. Inter and intra-observer variability of time-lapse annotations

    DEFF Research Database (Denmark)

    Sundvall Germeys, Linda Karin M; Ingerslev, Hans Jakob; Knudsen, Ulla Breth

    . This provides the basis for further investigation of embryo assessment and selection by time-lapse imaging in prospective trials. Study funding/competing interest(s): Research at the Fertility Clinic was funded by an unrestricted grant from Ferring and MSD. The authors have no competing interests to declare....

  9. Bayesian inference for the genetic control of water deficit tolerance in spring wheat by stochastic search variable selection.

    Science.gov (United States)

    Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi

    2018-06-02

    Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy to develop tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance in water deficit and normal conditions was considered as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best fitted models using different generations obtained from two crosses applying two water regimes in two growing seasons. The SSVS is used to evaluate the effect of each variable on the dependent variable via posterior variable inclusion probabilities. The model with the highest posterior probability is selected as the best model. In this study, the grain yield was controlled by the main effects (additive and non-additive effects) and epistatic effects. The results demonstrate that breeding methods such as recurrent selection and subsequent pedigree method and hybrid production can be useful to improve grain yield.
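
    A compact Gibbs-sampler sketch of SSVS in a plain linear model is given below (spike-and-slab normal priors on the coefficients, posterior inclusion probabilities from the sampled indicators); the generation-means design and genetic-effect parameterization of the study are not reproduced, and all prior settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(9)
        n, p = 120, 8
        X = rng.standard_normal((n, p))
        beta_true = np.array([1.0, 0.0, -0.8, 0.0, 0.0, 0.5, 0.0, 0.0])
        y = X @ beta_true + rng.standard_normal(n)

        tau, c, prior_pi = 0.05, 10.0, 0.5        # spike sd, slab multiplier, P(include)
        a0, b0 = 2.0, 2.0                         # inverse-gamma prior on sigma^2
        n_iter, burn = 4_000, 1_000

        gamma = np.ones(p, dtype=int)
        sigma2 = 1.0
        inclusion = np.zeros(p)
        XtX, Xty = X.T @ X, X.T @ y

        for it in range(n_iter):
            # beta | gamma, sigma2  ~  N(A^{-1} X'y / sigma2, A^{-1})
            prior_sd = np.where(gamma == 1, c * tau, tau)
            A = XtX / sigma2 + np.diag(1.0 / prior_sd ** 2)
            L = np.linalg.cholesky(np.linalg.inv(A))
            beta = np.linalg.solve(A, Xty / sigma2) + L @ rng.standard_normal(p)

            # gamma_j | beta_j : compare spike and slab densities at the sampled beta_j
            log_slab = -0.5 * (beta / (c * tau)) ** 2 - np.log(c * tau)
            log_spike = -0.5 * (beta / tau) ** 2 - np.log(tau)
            log_odds = np.log(prior_pi / (1 - prior_pi)) + log_slab - log_spike
            prob_include = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -700, 700)))
            gamma = (rng.uniform(size=p) < prob_include).astype(int)

            # sigma2 | beta  ~  Inv-Gamma(a0 + n/2, b0 + ||y - X beta||^2 / 2)
            resid = y - X @ beta
            sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))

            if it >= burn:
                inclusion += gamma

        print("posterior inclusion probabilities:", (inclusion / (n_iter - burn)).round(2))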

  10. Angular scanning and variable wavelength surface plasmon resonance allowing free sensor surface selection for optimum material- and bio-sensing

    NARCIS (Netherlands)

    Lakayan, Dina; Tuppurainen, Jussipekka; Albers, Martin; van Lint, Matthijs J.; van Iperen, Dick J.; Weda, Jelmer J.A.; Kuncova-Kallio, Johana; Somsen, Govert W.; Kool, Jeroen

    2018-01-01

    A variable-wavelength Kretschmann configuration surface plasmon resonance (SPR) apparatus with angle scanning is presented. The setup provides the possibility of selecting the optimum wavelength with respect to the properties of the metal layer of the sensorchip, sample matrix, and biomolecular

  11. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data driven variable selection. This study quantifies the risk of overfitting in a data driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference of the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predicative power lower than that of predicting 50% probability) in a number of small

  12. Selection and scheduling of jobs with time-dependent duration

    OpenAIRE

    DM Seegmuller; SE Visagie; HC de Kock; WJ Pienaar

    2007-01-01

    In this paper two mathematical programming models, both with multiple objective functions, are proposed to solve four related categories of job scheduling problems. All four of these categories have the property that the duration of the jobs is dependent on the time of implementation and in some cases the preceding job. Furthermore, some jobs (restricted to subsets of the total pool of jobs) can, to different extents, run in parallel. In addition, not all the jobs need necessarily be implemen...

  13. The first-passage time distribution for the diffusion model with variable drift

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, Miriam; Gondan, Matthias

    2017-01-01

    The Ratcliff diffusion model is now arguably the most widely applied model for response time data. Its major advantage is its description of both response times and the probabilities for correct as well as incorrect responses. The model assumes a Wiener process with drift between two constant ... across trials. This extra flexibility allows accounting for slow errors that often occur in response time experiments. So far, the predicted response time distributions were obtained by numerical evaluation as analytical solutions were not available. Here, we present an analytical expression for the cumulative first-passage time distribution in the diffusion model with normally distributed trial-to-trial variability in the drift. The solution is obtained with predefined precision, and its evaluation turns out to be extremely fast.

  14. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    A measure of the Earth's gravity contains contributions from solid Earth as well as climate-related phenomena that cannot be easily distinguished in either time or space. After more than 7 years, the GRACE gravity data available now support more elaborate analysis of the time series. We propose an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. The comparison and validation against the fitting approach commonly used in the GRACE literature shows a very good agreement for what concerns trends and periodic signals. The approach is also used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant, and so time-variable gravity data in Africa can be directly used to calibrate hydrological models.

  15. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Directory of Open Access Journals (Sweden)

    Yeqing Zhang

    2018-02-01

    Full Text Available For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully.
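
    The peak-to-second-peak acquisition metric defined above can be illustrated with a short sketch; the sampling rate, stand-in PRN code, Doppler grid and noise level below are arbitrary assumptions, while the FFT-based circular correlation and the ratio test are the generic ingredients.

      # Peak-to-second-peak acquisition metric over a Doppler / code-phase search grid (sketch).
      import numpy as np

      fs, code_len = 4.092e6, 4092                     # sampling rate (Hz), samples per code period
      rng = np.random.default_rng(1)
      local_code = np.sign(rng.standard_normal(code_len))              # stand-in +/-1 PRN code
      true_shift, true_dopp = 1234, 1500.0                             # simulated code phase / Doppler
      t = np.arange(code_len) / fs
      received = (np.roll(local_code, true_shift) * np.exp(2j * np.pi * true_dopp * t)
                  + 0.5 * (rng.standard_normal(code_len) + 1j * rng.standard_normal(code_len)))

      code_fft = np.conj(np.fft.fft(local_code))
      grid = []
      for dopp in np.arange(-5000, 5001, 500):                         # Doppler search bins
          wiped = received * np.exp(-2j * np.pi * dopp * t)            # carrier wipe-off
          grid.append(np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft)))   # circular correlation
      grid = np.array(grid)

      peak = grid.max()
      d_idx, c_idx = np.unravel_index(grid.argmax(), grid.shape)
      masked = grid.copy()
      masked[max(0, d_idx - 1):d_idx + 2, max(0, c_idx - 2):c_idx + 3] = 0   # blank out the main peak
      ratio = peak / masked.max()
      print(f"code phase {c_idx}, Doppler bin {d_idx}, peak ratio {ratio:.2f}")
      # Acquisition is declared when the ratio exceeds a chosen threshold.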

  16. Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology

    Science.gov (United States)

    Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus

    2013-01-01

    Changing trends in ecosystem productivity can be quantified using satellite observations of Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performances than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. Particularly, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be improved against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
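
    One of the practical recommendations above, estimating the trend on the annually aggregated series, can be sketched in a few lines; the synthetic NDVI series (trend, seasonal cycle and noise levels) is made up for illustration.

      # Trend estimate on an annually aggregated synthetic NDVI series (sketch).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      years, months = np.arange(1982, 2012), 12
      t = np.arange(len(years) * months)
      ndvi = (0.5 + 0.002 * t / months                 # weak greening trend per year
              + 0.15 * np.sin(2 * np.pi * t / months)  # seasonal cycle
              + 0.05 * rng.standard_normal(t.size))    # inter-annual / observation noise

      annual = ndvi.reshape(len(years), months).mean(axis=1)   # annual aggregation
      res = stats.linregress(years, annual)
      print(f"trend: {res.slope:.4f} NDVI/yr, p = {res.pvalue:.3f}")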

  17. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    Science.gov (United States)

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  18. A new variable interval schedule with constant hazard rate and finite time range.

    Science.gov (United States)

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2 T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2 T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.

  19. Antipersistent dynamics in short time scale variability of self-potential signals

    OpenAIRE

    Cuomo, V.; Lanfredi, M.; Lapenna, V.; Macchiato, M.; Ragosta, M.; Telesca, L.

    2000-01-01

    Time scale properties of self-potential signals are investigated through the analysis of the second order structure function (variogram), a powerful tool to investigate the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by means of a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal...

  20. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System.

    Science.gov (United States)

    Gruppen, Larry D; Ten Cate, Olle; Lingard, Lorelei A; Teunissen, Pim W; Kogan, Jennifer R

    2018-03-01

    Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.

  1. The role of protozoa-driven selection in shaping human genetic variability.

    Science.gov (United States)

    Pozzoli, Uberto; Fumagalli, Matteo; Cagliani, Rachele; Comi, Giacomo P; Bresolin, Nereo; Clerici, Mario; Sironi, Manuela

    2010-03-01

    Protozoa exert a strong selective pressure in humans. The selection signatures left by these pathogens can be exploited to identify genetic modulators of infection susceptibility. We show that protozoa diversity in different geographic locations is a good measure of protozoa-driven selective pressure; protozoa diversity captured selection signatures at known malaria resistance loci and identified several selected single nucleotide polymorphisms in immune and hemolytic anemia genes. A genome-wide search enabled us to identify 5180 variants mapping to 1145 genes that are subjected to protozoa-driven selective pressure. We provide a genome-wide estimate of protozoa-driven selective pressure and identify candidate susceptibility genes for protozoa-borne diseases. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Inter- and intra-observer variability of time-lapse annotations

    DEFF Research Database (Denmark)

    Sundvall, Linda; Ingerslev, Hans Jakob; Breth Knudsen, Ulla

    2013-01-01

    Study question: How consistent is the time-lapse annotation of dynamic and static morphologic parameters of embryo development, within and between observers? Summary answer: The assessment of dynamic parameters is characterized by almost perfect agreement within and between observers. What is known already: The commonly employed method used to assess embryos in IVF treatments is based on static evaluation of morphology in a microscope, but this is limited by substantial intra- and inter-observer variation. Time-lapse imaging has been proposed as a method to refine embryo selection by adding new ... This provides the basis for further investigation of embryo assessment and selection by time-lapse imaging in prospective trials. Study funding/competing interest(s): Research at the Fertility Clinic was funded by an unrestricted grant from Ferring and MSD. The authors have no competing interests to declare.

  3. Study of selected phenotype switching strategies in time varying environment

    Energy Technology Data Exchange (ETDEWEB)

    Horvath, Denis, E-mail: horvath.denis@gmail.com [Centre of Interdisciplinary Biosciences, Institute of Physics, Faculty of Science, P.J. Šafárik University in Košice, Jesenná 5, 040 01 Košice (Slovakia); Brutovsky, Branislav, E-mail: branislav.brutovsky@upjs.sk [Department of Biophysics, Institute of Physics, P.J. Šafárik University in Košice, Jesenná 5, 040 01 Košice (Slovakia)

    2016-03-22

    Population heterogeneity plays an important role in many research problems, as well as in the real world. Population heterogeneity relates to the ability of a population to cope with an environment change (or uncertainty) preventing its extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causation of population heterogeneity is therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while the deterministic patterns become relevant as the environmental variations are less frequent. Statistical characterization of the steady state regimes of the populations is done using the Hellinger and Kullback–Leibler functional distances and the Hamming distance. - Highlights: • Relation between phenotype switching and environment is studied. • The Markov chain Monte Carlo based model is developed. • Stochastic and deterministic strategies of phenotype switching are utilized. • Statistical measures of the dynamic heterogeneity reveal universal properties. • The results extend to higher lattice dimensions.

  4. Study of selected phenotype switching strategies in time varying environment

    International Nuclear Information System (INIS)

    Horvath, Denis; Brutovsky, Branislav

    2016-01-01

    Population heterogeneity plays an important role in many research problems, as well as in the real world. Population heterogeneity relates to the ability of a population to cope with an environment change (or uncertainty) preventing its extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causation of population heterogeneity is therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while the deterministic patterns become relevant as the environmental variations are less frequent. Statistical characterization of the steady state regimes of the populations is done using the Hellinger and Kullback–Leibler functional distances and the Hamming distance. - Highlights: • Relation between phenotype switching and environment is studied. • The Markov chain Monte Carlo based model is developed. • Stochastic and deterministic strategies of phenotype switching are utilized. • Statistical measures of the dynamic heterogeneity reveal universal properties. • The results extend to higher lattice dimensions.

  5. Discrete-time bidirectional associative memory neural networks with variable delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde; Ho, Daniel W.C.

    2005-01-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent, and some of them are delay-independent, they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks

  6. Discrete-time bidirectional associative memory neural networks with variable delays

    Science.gov (United States)

    Liang, J.; Cao, J.; Ho, D. W. C.

    2005-02-01

    Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent, and some of them are delay-independent, they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.

  7. Time-variable gravity potential components for optical clock comparisons and the definition of international time scales

    International Nuclear Information System (INIS)

    Voigt, C.; Denker, H.; Timmen, L.

    2016-01-01

    The latest generation of optical atomic clocks is approaching the level of one part in 10^18 in terms of frequency stability and uncertainty. For clock comparisons and the definition of international time scales, a relativistic redshift effect of the clock frequencies has to be taken into account at a corresponding uncertainty level of about 0.1 m^2 s^-2 and 0.01 m in terms of gravity potential and height, respectively. Besides the predominant static part of the gravity potential, temporal variations must be considered in order to avoid systematic frequency shifts. Time-variable gravity potential components induced by tides and non-tidal mass redistributions are investigated with regard to the level of one part in 10^18. The magnitudes and dominant time periods of the individual gravity potential contributions are investigated globally and for specific laboratory sites together with the related uncertainty estimates. The basics of the computation methods are presented along with the applied models, data sets and software. Solid Earth tides contribute by far the most dominant signal with a global maximum amplitude of 4.2 m^2 s^-2 for the potential and a range (maximum-to-minimum) of up to 1.3 and 10.0 m^2 s^-2 in terms of potential differences between specific laboratories over continental and intercontinental scales, respectively. Amplitudes of the ocean tidal loading potential can amount up to 1.25 m^2 s^-2, while the range of the potential between specific laboratories is 0.3 and 1.1 m^2 s^-2 over continental and intercontinental scales, respectively. These are the only two contributors being relevant at a 10^-17 level. However, several other time-variable potential effects can particularly affect clock comparisons at the 10^-18 level. Besides solid Earth pole tides, these are non-tidal mass redistributions in the atmosphere, the oceans and the continental water storage. (authors)
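
    The quoted equivalence between potential differences and clock frequency shifts follows from the first-order relation df/f ≈ ΔW/c² (with Δh ≈ ΔW/g for the height equivalent); the short sketch below simply evaluates it for the values cited in the abstract.

      # Relativistic redshift corresponding to the gravity potential values quoted above.
      c = 299_792_458.0          # speed of light (m/s)
      g = 9.81                   # nominal surface gravity (m/s^2)

      for dW in (0.1, 4.2, 10.0):                    # potential differences in m^2 s^-2
          print(f"dW = {dW:5.1f} m^2/s^2  ->  df/f = {dW / c**2:.1e}  (height equivalent ~{dW / g:.3f} m)")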

  8. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  9. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
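
    A minimal sketch of the plain IPTW estimator discussed above (not C-TMLE) is given below; the simulated confounder, the "instrument" that is a pure cause of exposure, and the logistic propensity model are assumptions used only to show where such variables enter the weights.

      # IPTW estimate of the mean outcome under treatment with two propensity score specifications (sketch).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 5000
      confounder = rng.normal(size=n)
      instrument = rng.normal(size=n)                       # a pure cause of exposure only
      p_treat = 1 / (1 + np.exp(-(0.8 * confounder + 0.8 * instrument)))
      A = rng.binomial(1, p_treat)                          # treatment indicator
      Y = 1.0 * A + 1.5 * confounder + rng.normal(size=n)   # outcome; true E[Y(1)] = 1.0

      def iptw_mean_treated(covariates):
          ps = LogisticRegression(max_iter=1000).fit(covariates, A).predict_proba(covariates)[:, 1]
          w = A / ps                                        # inverse probability of treatment weights
          return np.sum(w * Y) / np.sum(w)                  # normalized (Hajek) IPTW estimate

      print("adjusting for the confounder only :", iptw_mean_treated(confounder[:, None]))
      print("also adjusting for the instrument :", iptw_mean_treated(np.c_[confounder, instrument]))
      # Both are consistent here, but including the instrument typically inflates the weight variability.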

  10. An Epidemic Model of Computer Worms with Time Delay and Variable Infection Rate

    Directory of Open Access Journals (Sweden)

    Yu Yao

    2018-01-01

    Full Text Available With the rapid development of the Internet, network security issues become increasingly serious. Temporary patches have been put on the infectious hosts, which may lose efficacy on occasions. This leads to a time delay when vaccinated hosts change to susceptible hosts. On the other hand, the worm infection is usually a nonlinear process. Considering the actual situation, a variable infection rate is introduced to describe the spread process of worms. According to the above aspects, we propose a time-delayed worm propagation model with variable infection rate. Then the existence condition and the stability of the positive equilibrium are derived. Due to the existence of time delay, the worm propagation system may be unstable and out of control. Moreover, the threshold τ0 of Hopf bifurcation is obtained. The worm propagation system is stable if time delay is less than τ0. When time delay is over τ0, the system will be unstable. In addition, numerical experiments have been performed, which can match the conclusions we deduce. The numerical experiments also show that there exists a threshold in the parameter a, which implies that we should choose appropriate infection rate β(t) to constrain worm prevalence. Finally, simulation experiments are carried out to prove the validity of our conclusions.

  11. Intraindividual variability in reaction time predicts cognitive outcomes 5 years later.

    Science.gov (United States)

    Bielak, Allison A M; Hultsch, David F; Strauss, Esther; Macdonald, Stuart W S; Hunter, Michael A

    2010-11-01

    Building on results suggesting that intraindividual variability in reaction time (inconsistency) is highly sensitive to even subtle changes in cognitive ability, this study addressed the capacity of inconsistency to predict change in cognitive status (i.e., cognitive impairment, no dementia [CIND] classification) and attrition 5 years later. Two hundred twelve community-dwelling older adults, initially aged 64-92 years, remained in the study after 5 years. Inconsistency was calculated from baseline reaction time performance. Participants were assigned to groups on the basis of their fluctuations in CIND classification over time. Logistic and Cox regressions were used. Baseline inconsistency significantly distinguished among those who remained or transitioned into CIND over the 5 years and those who were consistently intact (e.g., stable intact vs. stable CIND, Wald (1) = 7.91, p < .01, Exp(β) = 1.49). Average level of inconsistency over time was also predictive of study attrition, for example, Wald (1) = 11.31, p < .01, Exp(β) = 1.24. For both outcomes, greater inconsistency was associated with a greater likelihood of being in a maladaptive group 5 years later. Variability based on moderately cognitively challenging tasks appeared to be particularly sensitive to longitudinal changes in cognitive ability. Mean rate of responding was a comparable predictor of change in most instances, but individuals were at greater relative risk of being in a maladaptive outcome group if they were more inconsistent rather than if they were slower in responding. Implications for the potential utility of intraindividual variability in reaction time as an early marker of cognitive decline are discussed. (c) 2010 APA, all rights reserved
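
    As a small illustration of the inconsistency measure itself, the sketch below computes the intraindividual standard deviation (and coefficient of variation) of simulated reaction times per participant; real analyses typically residualize practice and age effects first, which is omitted here.

      # Intraindividual reaction-time inconsistency (ISD) per participant (sketch).
      import numpy as np

      rng = np.random.default_rng(4)
      n_participants, n_trials = 8, 120
      means = rng.uniform(450, 650, n_participants)          # ms
      sds = rng.uniform(40, 140, n_participants)             # ms
      rts = (means[:, None] + sds[:, None] * rng.standard_normal((n_participants, n_trials))
             + rng.exponential(60, (n_participants, n_trials)))   # right-skewed latencies

      isd = rts.std(axis=1, ddof=1)                          # inconsistency per participant
      cv = isd / rts.mean(axis=1)                            # scale-free alternative
      for i, (m, s, c) in enumerate(zip(rts.mean(axis=1), isd, cv)):
          print(f"participant {i}: mean RT {m:6.1f} ms  ISD {s:5.1f} ms  CV {c:.3f}")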

  12. Earth System Data Records of Mass Transport from Time-Variable Gravity Data

    Science.gov (United States)

    Zlotnicki, V.; Talpe, M.; Nerem, R. S.; Landerer, F. W.; Watkins, M. M.

    2014-12-01

    Satellite measurements of time variable gravity have revolutionized the study of Earth, by measuring the ice losses of Greenland, Antarctica and land glaciers, changes in groundwater including unsustainable losses due to extraction of groundwater, the mass and currents of the oceans and their redistribution during El Niño events, among other findings. Satellite measurements of gravity have been made primarily by four techniques: satellite tracking from land stations using either lasers or Doppler radio systems, satellite positioning by GNSS/GPS, satellite to satellite tracking over distances of a few hundred km using microwaves, and through a gravity gradiometer (radar altimeters also measure the gravity field, but over the oceans only). We discuss the challenges in the measurement of gravity by different instruments, especially time-variable gravity. A special concern is how to bridge a possible gap in time between the end of life of the current GRACE satellite pair, launched in 2002, and a future GRACE Follow-On pair to be launched in 2017. One challenge in combining data from different measurement systems consists of their different spatial and temporal resolutions and the different ways in which they alias short time scale signals. Typically satellite measurements of gravity are expressed in spherical harmonic coefficients (although expansions in terms of 'mascons', the masses of small spherical caps, has certain advantages). Taking advantage of correlations among spherical harmonic coefficients described by empirical orthogonal functions and derived from GRACE data it is possible to localize the otherwise coarse spatial resolution of the laser and Doppler derived gravity models. This presentation discusses the issues facing a climate data record of time variable mass flux using these different data sources, including its validation.

  13. The selection of a mode of urban transportation: Integrating psychological variables to discrete choice models

    International Nuclear Information System (INIS)

    Cordoba Maquilon, Jorge E; Gonzalez Calderon, Carlos A; Posada Henao, John J

    2011-01-01

    A study using revealed preference surveys and psychological tests was conducted. Key psychological variables of behavior involved in the choice of transportation mode in a population sample of the Metropolitan Area of the Valle de Aburra were detected. The experiment used the random utility theory for discrete choice models and reasoned action in order to assess beliefs. This was used as a tool for analysis of the psychological variables using the sixteen personality factor questionnaire (16PF test). In addition to the revealed preference surveys, two other surveys were carried out: one with socio-economic characteristics and the other with latent indicators. This methodology allows for an integration of discrete choice models and latent variables. The integration makes the model operational and quantifies the unobservable psychological variables. The most relevant result obtained was that anxiety affects the choice of urban transportation mode and shows that physiological alterations, as well as problems in perception and beliefs, can affect the decision-making process.

  14. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant and irrelevant variables and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub Gaussianity of the error terms thereby generalizing the results...

  15. Variable Neighbourhood Search and Mathematical Programming for Just-in-Time Job-Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sunxin Wang

    2014-01-01

    Full Text Available This paper presents a combination of variable neighbourhood search and mathematical programming to minimize the sum of earliness and tardiness penalty costs of all operations for the just-in-time job-shop scheduling problem (JITJSSP). Unlike the classical E/T scheduling problem with each job having its earliness or tardiness penalty cost, each operation in this paper has its earliness and tardiness penalties, which are paid if the operation is completed before or after its due date. Our hybrid algorithm combines (i) a variable neighbourhood search procedure to explore the huge feasible solution spaces efficiently by alternating the swap and insertion neighbourhood structures and (ii) a mathematical programming model to optimize the completion times of the operations for a given solution in each iteration procedure. Additionally, a threshold accepting mechanism is proposed to diversify the local search of variable neighbourhood search. Computational results on the 72 benchmark instances show that our algorithm can obtain the best known solution for 40 problems, and the best known solutions for 33 problems are updated.

  16. Dissecting Time- from Tumor-Related Gene Expression Variability in Bilateral Breast Cancer

    Directory of Open Access Journals (Sweden)

    Maurizio Callari

    2018-01-01

    Full Text Available Metachronous (MBC) and synchronous bilateral breast tumors (SBC) are mostly distinct primaries, whereas paired primaries and their local recurrences (LRC) share a common origin. Intra-pair gene expression variability in MBC, SBC, and LRC derives from time/tumor microenvironment-related and tumor genetic background-related factors, and pairs represent an ideal model for trying to dissect tumor-related from microenvironment-related variability. Pairs of tumors derived from women with SBC (n = 18), MBC (n = 11), and LRC (n = 10) undergoing local-regional treatment were profiled for gene expression; similarity between pairs was measured using an intraclass correlation coefficient (ICC) computed for each gene and compared using analysis of variance (ANOVA). When considering biologically unselected genes, the highest correlations were found for primaries and paired LRC, and the lowest for MBC pairs. By instead limiting the analysis to the breast cancer intrinsic genes, correlations between primaries and paired LRC were enhanced, while lower similarities were observed for SBC and MBC. Focusing on stromal-related genes, the ICC values decreased for MBC and were significantly different from SBC. These findings indicate that it is possible to dissect intra-pair gene expression variability into components that are associated with genetic origin or with time and microenvironment by using specific gene subsets.

  17. Effects of spring temperatures on the strength of selection on timing of reproduction in a long-distance migratory bird.

    Directory of Open Access Journals (Sweden)

    Marcel E Visser

    2015-04-01

    Full Text Available Climate change has differentially affected the timing of seasonal events for interacting trophic levels, and this has often led to increased selection on seasonal timing. Yet, the environmental variables driving this selection have rarely been identified, limiting our ability to predict future ecological impacts of climate change. Using a dataset spanning 31 years from a natural population of pied flycatchers (Ficedula hypoleuca), we show that directional selection on timing of reproduction intensified in the first two decades (1980-2000) but weakened during the last decade (2001-2010). Against expectation, this pattern could not be explained by the temporal variation in the phenological mismatch with food abundance. We therefore explored an alternative hypothesis that selection on timing was affected by conditions individuals experience when arriving in spring at the breeding grounds: arriving early in cold conditions may reduce survival. First, we show that in female recruits, spring arrival date in the first breeding year correlates positively with hatch date; hence, early-hatched individuals experience colder conditions at arrival than late-hatched individuals. Second, we show that when temperatures at arrival in the recruitment year were high, early-hatched young had a higher recruitment probability than when temperatures were low. We interpret this as a potential cost of arriving early in colder years, and climate warming may have reduced this cost. We thus show that higher temperatures in the arrival year of recruits were associated with stronger selection for early reproduction in the years these birds were born. As arrival temperatures in the beginning of the study increased, but recently declined again, directional selection on timing of reproduction showed a nonlinear change. We demonstrate that environmental conditions with a lag of up to two years can alter selection on phenological traits in natural populations, something that has

  18. Elevated temperature inelastic analysis of metallic media under time varying loads using state variable theories

    International Nuclear Information System (INIS)

    Kumar, V.; Mukherjee, S.

    1977-01-01

    In the present paper a general time-dependent inelastic analysis procedure for three-dimensional bodies subjected to arbitrary time varying mechanical and thermal loads using these state variable theories is presented. For the purpose of illustrations, the problems of hollow spheres, cylinders and solid circular shafts subjected to various combinations of internal and external pressures, axial force (or constraint) and torque are analyzed using the proposed solution procedure. Various cyclic thermal and mechanical loading histories with rectangular or sawtooth type waves with or without hold-time are considered. Numerical results for these geometrical shapes for various such loading histories are presented using Hart's theory (Journal of Engineering Materials and Technology 1976). The calculations are performed for nickel in the temperature range of 25 °C to 400 °C. For integrating forward in time, a method of solving a stiff system of ordinary differential equations is employed which corrects the step size and order of the method automatically. The limit loads for hollow spheres and cylinders are calculated using the proposed method and Hart's theory, and comparisons are made against the known theoretical results. The numerical results for other loading histories are discussed in the context of Hart's state variable type constitutive relations. The significance of phenomena such as strain rate sensitivity, Bauschinger's effect, creep recovery, history dependence and material softening with regard to these multiaxial problems are discussed in the context of Hart's theory
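
    The time integration described above relies on a stiff ODE solver that adjusts its step size and order automatically; the sketch below shows the same idea with SciPy's BDF integrator applied to a generic stiff system (the Van der Pol oscillator), which only stands in for, and is not, Hart's constitutive equations.

      # Stiff integration with automatic step-size/order control (BDF), on a stand-in stiff system.
      from scipy.integrate import solve_ivp

      mu = 1000.0                                    # large mu makes the oscillator stiff

      def van_der_pol(t, y):
          return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

      sol = solve_ivp(van_der_pol, (0.0, 3000.0), [2.0, 0.0], method="BDF", rtol=1e-6, atol=1e-9)
      print("accepted steps:", sol.t.size, " final state:", sol.y[:, -1])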

  19. Retention time variability as a mechanism for animal mediated long-distance dispersal.

    Directory of Open Access Journals (Sweden)

    Vishwesha Guttal

    Full Text Available Long-distance dispersal (LDD) events, although rare for most plant species, can strongly influence population and community dynamics. Animals function as a key biotic vector of seeds and thus, a mechanistic and quantitative understanding of how individual animal behaviors scale to dispersal patterns at different spatial scales is a question of critical importance from both basic and applied perspectives. Using a diffusion-theory based analytical approach for a wide range of animal movement and seed transportation patterns, we show that the scale (a measure of local dispersal) of the seed dispersal kernel increases with the organisms' rate of movement and mean seed retention time. We reveal that variation in seed retention time is a key determinant of various measures of LDD such as kurtosis (or shape) of the kernel, thickness of tails, and the absolute number of seeds falling beyond a threshold distance. Using empirical data sets of frugivores, we illustrate the importance of variability in retention times for predicting the key disperser species that influence LDD. Our study makes testable predictions linking animal movement behaviors and gut retention times to dispersal patterns and, more generally, highlights the potential importance of animal behavioral variability for the LDD of seeds.

  20. Multiscale time irreversibility of heart rate and blood pressure variability during orthostasis

    International Nuclear Information System (INIS)

    Chladekova, L; Czippelova, B; Turianikova, Z; Tonhajzerova, I; Calkovska, A; Javorka, M; Baumert, M

    2012-01-01

    Time irreversibility is a characteristic feature of non-equilibrium, complex systems such as the cardiovascular control mediated by the autonomic nervous system (ANS). Time irreversibility analysis of heart rate variability (HRV) and blood pressure variability (BPV) represents a new approach to assess cardiovascular regulatory mechanisms. The aim of this paper was to assess the changes in HRV and BPV irreversibility during the active orthostatic test (a balance of ANS shifted towards sympathetic predominance) in 28 healthy young subjects. We used three different time irreversibility indices—Porta’s, Guzik's and Ehler's indices (P%, G% and E, respectively) derived from data segments containing 1000 beat-to-beat intervals on four timescales. We observed an increase in the HRV and a decrease in the BPV irreversibility during standing compared to the supine position. The postural change in irreversibility was confirmed by surrogate data analysis. The differences were more evident in G% and E than P% and for higher scale factors. Statistical analysis showed a close relationship between G% and E. Contrary to this, the association between P% and G% and P% and E was not proven. We conclude that time irreversibility of beat-to-beat HRV and BPV is significantly altered during orthostasis, implicating involvement of the autonomous nervous system in its generation. (paper)
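
    For reference, the sketch below evaluates two of the indices named above using their commonly cited definitions: Porta's P% as the percentage of negative beat-to-beat increments among the nonzero increments, and Guzik's G% as the share of the positive increments in the total squared increments. Ehlers' index and the multiscale coarse-graining are omitted, and the RR series is synthetic.

      # Porta's and Guzik's time irreversibility indices on a synthetic RR series (sketch).
      import numpy as np

      rng = np.random.default_rng(5)
      rr = 800 + np.cumsum(rng.normal(0, 5, 1000))           # stand-in beat-to-beat intervals (ms)

      d = np.diff(rr)
      d = d[d != 0]                                          # ignore zero increments
      porta = 100.0 * np.sum(d < 0) / d.size                 # P%: share of negative increments
      guzik = 100.0 * np.sum(d[d > 0] ** 2) / np.sum(d ** 2) # G%: weight of positive increments
      print(f"P% = {porta:.1f}   G% = {guzik:.1f}   (values near 50% indicate a time-reversible series)")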

  1. Bidecadal North Atlantic ocean circulation variability controlled by timing of volcanic eruptions.

    Science.gov (United States)

    Swingedouw, Didier; Ortega, Pablo; Mignot, Juliette; Guilyardi, Eric; Masson-Delmotte, Valérie; Butler, Paul G; Khodri, Myriam; Séférian, Roland

    2015-03-30

    While bidecadal climate variability has been evidenced in several North Atlantic paleoclimate records, its drivers remain poorly understood. Here we show that the subset of CMIP5 historical climate simulations that produce such bidecadal variability exhibits a robust synchronization, with a maximum in Atlantic Meridional Overturning Circulation (AMOC) 15 years after the 1963 Agung eruption. The mechanisms at play involve salinity advection from the Arctic and explain the timing of Great Salinity Anomalies observed in the 1970s and the 1990s. Simulations, as well as Greenland and Iceland paleoclimate records, indicate that coherent bidecadal cycles were excited following five Agung-like volcanic eruptions of the last millennium. Climate simulations and a conceptual model reveal that destructive interference caused by the Pinatubo 1991 eruption may have damped the observed decreasing trend of the AMOC in the 2000s. Our results imply a long-lasting climatic impact and predictability following the next Agung-like eruption.

  2. Diagnostic Value of Selected Echocardiographic Variables to Identify Pulmonary Hypertension in Dogs with Myxomatous Mitral Valve Disease.

    Science.gov (United States)

    Tidholm, A; Höglund, K; Häggström, J; Ljungvall, I

    2015-01-01

    Pulmonary hypertension (PH) is commonly associated with myxomatous mitral valve disease (MMVD). Because dogs with PH present without measurable tricuspid regurgitation (TR), it would be useful to investigate echocardiographic variables that can identify PH. To investigate associations between estimated systolic TR pressure gradient (TRPG) and dog characteristics and selected echocardiographic variables. 156 privately owned dogs. Prospective observational study comparing the estimations of TRPG with dog characteristics and selected echocardiographic variables in dogs with MMVD and measurable TR. Tricuspid regurgitation pressure gradient was significantly (P modeled as linear variables LA/Ao (P modeled as second order polynomial variables: AT/DT (P = .0039) and LVIDDn (P value for the final model was 0.45 and receiver operating characteristic curve analysis suggested the model's performance to predict PH, defined as 36, 45, and 55 mmHg as fair (area under the curve [AUC] = 0.80), good (AUC = 0.86), and excellent (AUC = 0.92), respectively. In dogs with MMVD, the presence of PH might be suspected with the combination of decreased PA AT/DT, increased RVIDDn and LA/Ao, and a small or great LVIDDn. Copyright © 2015 The Authors Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  3. Effects of musical tempo on physiological, affective, and perceptual variables and performance of self-selected walking pace.

    Science.gov (United States)

    Almeida, Flávia Angélica Martins; Nunes, Renan Felipe Hartmann; Ferreira, Sandro Dos Santos; Krinski, Kleverton; Elsangedy, Hassan Mohamed; Buzzachera, Cosme Franklin; Alves, Ragami Chaves; Gregorio da Silva, Sergio

    2015-06-01

    [Purpose] This study investigated the effects of musical tempo on physiological, affective, and perceptual responses as well as the performance of self-selected walking pace. [Subjects] The study included 28 adult women between 29 and 51 years old. [Methods] The subjects were divided into three groups: no musical stimulation group (control), and 90 and 140 beats per minute musical tempo groups. Each subject underwent three experimental sessions: involved familiarization with the equipment, an incremental test to exhaustion, and a 30-min walk on a treadmill at a self-selected pace, respectively. During the self-selected walking session, physiological, perceptual, and affective variables were evaluated, and walking performance was evaluated at the end. [Results] There were no significant differences in physiological variables or affective response among groups. However, there were significant differences in perceptual response and walking performance among groups. [Conclusion] Fast music (140 beats per minute) promotes a higher rating of perceived exertion and greater performance in self-selected walking pace without significantly altering physiological variables or affective response.

  4. Determination of main fruits in adulterated nectars by ATR-FTIR spectroscopy combined with multivariate calibration and variable selection methods.

    Science.gov (United States)

    Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho

    2018-07-15

    Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Genotype-by-environment interactions leads to variable selection on life-history strategy in Common Evening Primrose (Oenothera biennis).

    Science.gov (United States)

    Johnson, M T J

    2007-01-01

    Monocarpic plant species, where reproduction is fatal, frequently exhibit variation in the length of their prereproductive period prior to flowering. If this life-history variation in flowering strategy has a genetic basis, genotype-by-environment interactions (G x E) may maintain phenotypic diversity in flowering strategy. The native monocarpic plant Common Evening Primrose (Oenothera biennis L., Onagraceae) exhibits phenotypic variation for annual vs. biennial flowering strategies. I tested whether there was a genetic basis to variation in flowering strategy in O. biennis, and whether environmental variation causes G x E that imposes variable selection on flowering strategy. In a field experiment, I randomized more than 900 plants from 14 clonal families (genotypes) into five distinct habitats that represented a natural productivity gradient. G x E strongly affected the lifetime fruit production of O. biennis, with the rank-order in relative fitness of genotypes changing substantially between habitats. I detected genetic variation in annual vs. biennial strategies in most habitats, as well as a G x E effect on flowering strategy. This variation in flowering strategy was correlated with genetic variation in relative fitness, and phenotypic and genotypic selection analyses revealed that environmental variation resulted in variable directional selection on annual vs. biennial strategies. Specifically, a biennial strategy was favoured in moderately productive environments, whereas an annual strategy was favoured in low-productivity environments. These results highlight the importance of variable selection for the maintenance of genetic variation in the life-history strategy of a monocarpic plant.

  6. Impact of menstruation on select hematology and clinical chemistry variables in cynomolgus macaques.

    Science.gov (United States)

    Perigard, Christopher J; Parrula, M Cecilia M; Larkin, Matthew H; Gleason, Carol R

    2016-06-01

    In preclinical studies with cynomolgus macaques, it is common to have one or more females presenting with menses. Published literature indicates that the blood lost during menses causes decreases in red blood cell mass variables (RBC, HGB, and HCT), which would be a confounding factor in the interpretation of drug-related effects on clinical pathology data, but no scientific data have been published to support this claim. This investigation was conducted to determine if the amount of blood lost during menses in cynomolgus macaques has an effect on routine hematology and serum chemistry variables. Ten female cynomolgus macaques (Macaca fascicularis), 5 to 6.5 years old, were observed daily during approximately 3 months (97 days) for the presence of menses. Hematology and serum chemistry variables were evaluated twice weekly. The results indicated that menstruation affects the erythrogram including RBC, HGB, HCT, MCHC, MCV, reticulocyte count, RDW, the leukogram including neutrophil, lymphocyte, and monocyte counts, and chemistry variables, including GGT activity, and the concentrations of total proteins, albumin, globulins, and calcium. The magnitude of the effect of menstruation on susceptible variables is dependent on the duration of the menstrual phase. Macaques with menstrual phases lasting ≥ 7 days are more likely to develop changes in variables related to chronic blood loss. In preclinical toxicology studies with cynomolgus macaques, interpretation of changes in several commonly evaluated hematology and serum chemistry variables requires adequate clinical observation and documentation concerning presence and duration of menses. There is a concern that macaques with long menstrual cycles can develop iron deficiency anemia due to chronic menstrual blood loss. © 2016 American Society for Veterinary Clinical Pathology.

  7. An Examination of Program Selection Criteria for Part-Time MBA Students

    Science.gov (United States)

    Colburn, Michael; Fox, Daniel E.; Westerfelt, Debra Kay

    2011-01-01

    Prospective graduate students select a graduate program as a result of a multifaceted decision-making process. This study examines the selection criteria that part-time MBA students used in selecting a program at a private university. Further, it analyzes the methods by which the students first learned of the MBA program. The authors posed the…

  8. Separating different scales of motion in time series of meteorological variables

    International Nuclear Information System (INIS)

    Eskridge, R.E.; Rao, S.T.; Porter, P.S.

    1997-01-01

    In this study, four methods are evaluated for detecting and tracking changes in time series of climate variables. The PEST algorithm and the monthly anomaly technique are shown to have shortcomings, while the wavelet transform and Kolmogorov-Zurbenko (KZ) filter methods are shown to be capable of separating time scales with minimal errors. The behavior of the filters is examined by transfer functions. The KZ filter, anomaly technique, and PEST were also applied to temperature data to estimate long-term trends. The KZ filter provides estimates with about 10 times higher confidence than the other methods. Advantages of the KZ filter over the wavelet transform method are that it may be applied to datasets containing missing observations and is very easy to use. 10 refs., 8 figs., 1 tab
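
    The KZ(m, k) filter referred to above is simply k passes of an m-point centered moving average; a minimal sketch (with naive edge handling and a synthetic temperature series) is shown below.

      # Kolmogorov-Zurbenko filter: k iterations of an m-point moving average (sketch).
      import numpy as np

      def kz_filter(x, m, k):
          """KZ(m, k) filter; m should be odd."""
          y = np.asarray(x, dtype=float)
          kernel = np.ones(m) / m
          for _ in range(k):
              y = np.convolve(y, kernel, mode="same")   # zero-padded edges, fine for a sketch
          return y

      rng = np.random.default_rng(6)
      days = np.arange(3 * 365)
      temp = 10 + 8 * np.sin(2 * np.pi * days / 365) + 3 * rng.standard_normal(days.size)

      baseline = kz_filter(temp, m=29, k=3)   # suppresses fluctuations shorter than ~m*sqrt(k) days
      print(f"raw std: {temp.std():.2f}  filtered std: {baseline.std():.2f}")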

  9. Use of a prototype pulse oximeter for time series analysis of heart rate variability

    Science.gov (United States)

    González, Erika; López, Jehú; Hautefeuille, Mathieu; Velázquez, Víctor; Del Moral, Jésica

    2015-05-01

    This work presents the development of a low cost pulse oximeter prototype consisting of pulsed red and infrared commercial LEDs and a broad spectral photodetector used to register time series of heart rate and oxygen saturation of blood. This platform, besides providing these values, like any other pulse oximeter, processes the signals to compute a power spectrum analysis of the patient heart rate variability in real time and, additionally, the device allows access to all raw and analyzed data if databases construction is required or another kind of further analysis is desired. Since the prototype is capable of acquiring data for long periods of time, it is suitable for collecting data in real life activities, enabling the development of future wearable applications.
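
    The real-time spectral analysis step can be illustrated with a short sketch: resample a beat-to-beat interval series onto an even time grid and take a Welch power spectrum with the usual LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands. The synthetic series and the 4 Hz resampling rate are assumptions, not the prototype's actual processing chain.

      # Welch power spectrum of a (synthetic) beat-to-beat interval series with LF/HF band powers (sketch).
      import numpy as np
      from scipy.signal import welch
      from scipy.interpolate import interp1d

      rng = np.random.default_rng(7)
      rr = (0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * np.cumsum(np.full(300, 0.8)))   # 0.25 Hz modulation
            + 0.02 * rng.standard_normal(300))                                     # beat-to-beat noise (s)

      beat_times = np.cumsum(rr)
      fs = 4.0                                                  # resampling rate (Hz)
      t_even = np.arange(beat_times[0], beat_times[-1], 1 / fs)
      rr_even = interp1d(beat_times, rr, kind="cubic")(t_even)  # evenly sampled tachogram

      f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
      df = f[1] - f[0]
      lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
      hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
      print(f"LF = {lf:.2e} s^2, HF = {hf:.2e} s^2, LF/HF = {lf / hf:.2f}")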

  10. Knowledge acquisition with domain experts on the aspects of use of visual variables in the Space Time Cube

    DEFF Research Database (Denmark)

    Kveladze, Irma; Kraak, Menno-Jan

    2013-01-01

    The Space – Time Cube (STC) is a visual representation developed at the end of the 20th century for understanding the spatio-temporal aspects in human's everyday life (Hägerstrand, 1970). Since its introduction, it has been widely used in various disciplines (Kraak, 2003; Demšar and Virrantaus ...) ... to other visual representations. However, the usability metrics of the cartographic design theory for the STC content still remain unexplored. Therefore, this study particularly focused on the evaluation of the cartographic design aspects of the STC. This study was conducted in two different ... Participants are selected purposefully based on the specific criteria in order to say something on the topic that has to be discussed (Nielsen, 1993). Accordingly, the main objective of the focus group interview was to discuss the use of the visual variables based on the cartographic design theory (Bertin, 1983).

  11. Antipersistent dynamics in short time scale variability of self-potential signals

    Directory of Open Access Journals (Sweden)

    M. Ragosta

    2000-06-01

    Full Text Available Time scale properties of self-potential signals are investigated through the analysis of the second order structure function (variogram, a powerful tool to investigate the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by means of a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal fluctuations are characterised by two time scale ranges in which self-potential variability appears to follow slightly different dynamical behaviours. Results point to the presence of fractal, non stationary features expressing a long term correlation with scaling coefficients which are the clue of stabilising mechanisms. In the scale ranges in which the series show scale invariant behaviour, self-potentials evolve like fractional Brownian motions with anticorrelated increments typical of processes regulated by negative feedback mechanisms (antipersistence. On scales below about 6 h the strength of such an antipersistence appears to be slightly greater than that observed on larger time scales where the fluctuations are less efficiently stabilised.
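
    The second-order structure function used above is straightforward to compute: S2(tau) is the mean squared increment at lag tau, and on a log-log plot its slope relates to the Hurst exponent (slope ≈ 2H for fractional Brownian motion, with H < 0.5 indicating antipersistence). The sketch below applies it to a synthetic antipersistent signal.

      # Second-order structure function (variogram) and a Hurst-like exponent (sketch).
      import numpy as np

      rng = np.random.default_rng(8)
      steps = rng.standard_normal(20000)
      signal = np.cumsum(steps - 0.7 * np.roll(steps, 1))   # increments are negatively correlated

      lags = np.unique(np.logspace(0, 3, 20).astype(int))
      s2 = np.array([np.mean((signal[lag:] - signal[:-lag]) ** 2) for lag in lags])

      slope = np.polyfit(np.log(lags), np.log(s2), 1)[0]
      print(f"structure-function slope = {slope:.2f}  (H ~ {slope / 2:.2f}; H < 0.5 suggests antipersistence)")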

  12. Investigation of a rotary valving system with variable valve timing for internal combustion engines

    Science.gov (United States)

    Cross, Paul C.; Hansen, Craig N.

    1994-11-01

    The objective of the program was to provide a functional demonstration of the Hansen Rotary Valving System with Variable Valve Timing (HRVS/VVT), capable of throttleless inlet charge control, as an alternative to conventional poppet-valves for use in spark ignited internal combustion engines. The goal of this new technology is to secure benefits in fuel economy, broadened torque band, vibration reduction, and overhaul accessibility. Additionally, use of the variable valve timing capability to vary the effective compression ratio is expected to improve multifuel tolerance and efficiency. Efforts directed at the design of HRVS components proved to be far more extensive than had been anticipated, ultimately requiring that proof-trial design/development work be performed. Although both time and funds were exhausted before optical or ion-probe types of in-cylinder investigation could be undertaken, a great deal of laboratory data was acquired during the course of the design/development work. This laboratory data is the basis for the information presented in this final report.

  13. A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy

    Directory of Open Access Journals (Sweden)

    Yongxin Chou

    2017-01-01

    Full Text Available Base scale entropy analysis (BSEA) is a nonlinear method to analyze heart rate variability (HRV) signal. However, the time consumption of BSEA is too long, and it is unknown whether the BSEA is suitable for analyzing pulse rate variability (PRV) signal. Therefore, we proposed a method named sliding window iterative base scale entropy analysis (SWIBSEA) by combining BSEA and sliding window iterative theory. The blood pressure signals of healthy young and old subjects are chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. Then, the BSEA and the SWIBSEA are used to analyze the experimental data; the results show that the SWIBSEA reduces the time consumption and the buffer cache space while it gets the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and old subjects are the same as that of HRV signal. Therefore, the SWIBSEA can be used for deriving some information from long-term and short-term PRV signals in real time, which has the potential for dynamic PRV signal analysis in some portable and wearable medical devices.
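
    The core idea of the sliding-window iterative scheme, as described, is to avoid recomputing the entropy from scratch at every step: only the symbolic words that enter or leave the window are processed, and a running histogram is updated. The sketch below illustrates that idea only; the `symbolize` function is a simplified stand-in for base-scale symbolization and is not the published SWIBSEA algorithm.

    ```python
    import numpy as np
    from collections import Counter

    def symbolize(segment, m=4, a=1.0):
        """Map each m-beat vector to an integer word (simplified stand-in for base-scale coding:
        each sample is coded 0-3 by its deviation from the vector mean in units of a*BS,
        where BS is the RMS of successive differences within the vector)."""
        words = []
        for i in range(len(segment) - m + 1):
            v = np.asarray(segment[i:i + m], dtype=float)
            bs = np.sqrt(np.mean(np.diff(v) ** 2)) or 1e-12
            sym = np.digitize(v - v.mean(), [-a * bs, 0.0, a * bs])   # symbols 0..3
            words.append(int(sum(int(s) * 4 ** k for k, s in enumerate(sym))))
        return words

    def shannon(counts, total):
        p = np.array(list(counts.values()), dtype=float) / total
        return float(-(p * np.log2(p)).sum())

    def sliding_entropy(rr, win=300, step=1, m=4):
        """Entropy of the symbolic-word histogram over a sliding window, updated incrementally."""
        counts = Counter(symbolize(rr[:win], m))
        total = win - m + 1                              # number of words per window (constant)
        out = [shannon(counts, total)]
        for start in range(step, len(rr) - win + 1, step):
            leaving = symbolize(rr[start - step:start - step + m + step - 1], m)
            entering = symbolize(rr[start + win - m - step + 1:start + win], m)
            counts.subtract(leaving)
            counts.update(entering)
            counts = +counts                             # drop zero-count words
            out.append(shannon(counts, total))
        return np.array(out)
    ```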

  14. Sodium bicarbonate ingestion and individual variability in time-to-peak pH.

    Science.gov (United States)

    Sparks, Andy; Williams, Emily; Robinson, Amy; Miller, Peter; Bentley, David J; Bridge, Craig; Mc Naughton, Lars R

    2017-01-01

    This study determined variability in time-to-peak pH after consumption of 300 mg kg-1 of sodium bicarbonate. Seventeen participants (mean ± SD: age 21.38 ± 1.5 years; mass 75.8 ± 5.8 kg; height 176.8 ± 7.6 cm) reported to the laboratory where a resting capillary sample was taken. Then, 300 mg kg-1 of NaHCO3 in 450 ml of flavoured water was ingested. Participants rested for 90 min and repeated blood samples were procured at 10 min intervals for 60 min and then every 5 min until 90 min. Blood pH was measured. Results suggested that time-to-peak pH (64.41 ± 18.78 min) was variable with a range of 10-85 min and a coefficient of variation of 29.16%. A bimodal distribution occurred, with peaks at 65 and 75 min. In conclusion, athletes, when using NaHCO3 as an ergogenic aid, should determine their time-to-peak pH to best utilize the added buffering capacity this substance allows.

  15. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the equation regression, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
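
    The FIRE program itself is distributed as SPSS syntax. Purely as an illustration of the workflow it describes (split the data, rank predictors by relative importance, retain a subset), here is a minimal Python sketch using one common relative-importance measure, Pratt's measure (standardized coefficient times zero-order correlation, which sums to R²). The splitting rule, the measure and the retention threshold are assumptions for illustration, not the program's algorithm.

    ```python
    import numpy as np

    def pratt_importance(X, y):
        """Pratt's relative importance: standardized beta_j * r_j; the values sum to R^2."""
        Xs = (X - X.mean(0)) / X.std(0)
        ys = (y - y.mean()) / y.std()
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        r = np.array([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(X.shape[1])])
        return beta * r

    rng = np.random.default_rng(1)
    n, p = 200, 6
    X = rng.standard_normal((n, p))
    y = 2 * X[:, 0] + X[:, 1] + rng.standard_normal(n)

    # Simple hold-out split: rank predictors on the training half, keep the top two
    idx = rng.permutation(n)
    train = idx[: n // 2]
    imp = pratt_importance(X[train], y[train])
    keep = np.argsort(imp)[::-1][:2]
    print("importances:", np.round(imp, 3), "retained predictors:", keep)
    ```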

  16. Variable dead time counters. 1 - theoretical responses and the effects of neutron multiplication

    International Nuclear Information System (INIS)

    Lees, E.W.; Hooton, B.W.

    1978-10-01

    A theoretical expression is derived for calculating the response of any variable dead time counter (VDC) used in the passive assay of plutonium by neutron counting of the natural spontaneous fission activity. The effects of neutron multiplication in the sample arising from interactions of the original spontaneous fission neutrons are shown to modify the linear relationship between VDC signal and Pu mass. Numerical examples are shown for the Euratom VDC and a systematic investigation of the various factors affecting neutron multiplication is reported. Limited comparisons between the calculations and experimental data indicate provisional validity of the calculations. (author)

  17. Smart Device for the Determination of Heart Rate Variability in Real Time

    Directory of Open Access Journals (Sweden)

    David Naranjo-Hernández

    2017-01-01

    Full Text Available This work presents a first approach to the design, development, and implementation of a smart device for the real-time measurement and detection of alterations in heart rate variability (HRV). The smart device follows a modular design scheme, which consists of an electrocardiogram (ECG) signal acquisition module, a processing module and a wireless communications module. From five-minute ECG signals, the processing module algorithms perform a spectral estimation of the HRV. The experimental results demonstrate the viability of the smart device and the proposed processing algorithms.
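
    A common way to obtain a spectral estimate of HRV from a five-minute recording is to resample the RR-interval series onto an even time grid, apply Welch's method, and integrate the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands. The sketch below follows that standard recipe; it is not the device's firmware, and the resampling rate and Welch segment length are illustrative choices.

    ```python
    import numpy as np
    from scipy.signal import welch
    from scipy.interpolate import interp1d

    def lf_hf(rr_s, fs_resample=4.0):
        """LF and HF band powers from a series of RR intervals given in seconds."""
        rr_s = np.asarray(rr_s, dtype=float)
        t = np.cumsum(rr_s)                                   # beat times
        grid = np.arange(t[0], t[-1], 1.0 / fs_resample)      # even time grid
        rr_even = interp1d(t, rr_s, kind="cubic")(grid)       # evenly resampled tachogram
        f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample,
                       nperseg=min(256, len(rr_even)))
        df = f[1] - f[0]
        lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
        hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
        return lf, hf, lf / hf
    ```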

  18. Global exponential stability for discrete-time neural networks with variable delays

    International Nuclear Information System (INIS)

    Chen Wuhua; Lu Xiaomei; Liang Dongying

    2006-01-01

    This Letter provides new exponential stability criteria for discrete-time neural networks with variable delays. The main technique is to reduce exponential convergence estimation of the neural network solution to that of one component of the corresponding solution by constructing a Lyapunov function based on an M-matrix. By introducing the tuning parameter diagonal matrix, the delay-independent and delay-dependent exponential stability conditions have been unified in the same mathematical formula. The effectiveness of the new results is illustrated by three examples.

  19. Boundedness and stability for recurrent neural networks with variable coefficients and time-varying delays

    International Nuclear Information System (INIS)

    Liang Jinling; Cao Jinde

    2003-01-01

    In this Letter, the problems of boundedness and stability for a general class of non-autonomous recurrent neural networks with variable coefficients and time-varying delays are analyzed via employing Young inequality technique and Lyapunov method. Some simple sufficient conditions are given for boundedness and stability of the solutions for the recurrent neural networks. These results generalize and improve the previous works, and they are easy to check and apply in practice. Two illustrative examples and their numerical simulations are also given to demonstrate the effectiveness of the proposed results

  20. Time-dependent inelastic analysis of metallic media using constitutive relations with state variables

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V; Mukherjee, S [Cornell Univ., Ithaca, N.Y. (USA)]

    1977-03-01

    A computational technique in terms of stress, strain and displacement rates is presented for the solution of boundary value problems for metallic structural elements at uniform elevated temperatures subjected to time varying loads. This method can accommodate any number of constitutive relations with state variables recently proposed by other researchers to model the inelastic deformation of metallic media at elevated temperatures. Numerical solutions are obtained for several structural elements subjected to steady loads. The constitutive relations used for these numerical solutions are due to Hart. The solutions are discussed in the context of the computational scheme and Hart's theory.

  1. Resolución del Response Time Variability Problem mediante tabu search

    OpenAIRE

    Corominas Subias, Albert; García Villoria, Alberto; Pastor Moreno, Rafael

    2009-01-01

    The Response Time Variability Problem (RTVP) is a combinatorial scheduling problem recently introduced in the literature. This combinatorial optimization problem is very easy to formulate but very hard to solve exactly (it is NP-hard). The RTVP arises when products, clients or jobs have to be sequenced so as to minimize the variability of the time intervals at which they receive the resources they need. The problem has a large number of applications...
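
    In the standard formulation of the RTVP (as commonly stated in the literature; the abstract itself does not give the formula), each product i with d_i copies in a cyclic sequence of length D has an ideal gap of D/d_i between consecutive copies, and the objective sums the squared deviations of the actual cyclic gaps from that ideal. The small evaluator below is the kind of cost function a metaheuristic such as tabu search would call; it is a sketch, not the authors' code.

    ```python
    def rtv(sequence):
        """Response Time Variability of a cyclic sequence of product labels."""
        seq = list(sequence)
        D = len(seq)
        total = 0.0
        for prod in set(seq):
            pos = [k for k, s in enumerate(seq) if s == prod]
            if len(pos) == 1:
                continue                       # a single copy has only the full-cycle gap
            ideal = D / len(pos)
            gaps = [(pos[(j + 1) % len(pos)] - pos[j]) % D for j in range(len(pos))]
            total += sum((g - ideal) ** 2 for g in gaps)
        return total

    print(rtv("ABABAB"))   # perfectly regular sequence -> 0.0
    print(rtv("AABBAB"))   # irregular sequence -> 4.0
    ```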

  2. Sensitivity of adaptive enrichment trial designs to accrual rates, time to outcome measurement, and prognostic variables

    Directory of Open Access Journals (Sweden)

    Tianchen Qian

    2017-12-01

    Full Text Available Adaptive enrichment designs involve rules for restricting enrollment to a subset of the population during the course of an ongoing trial. This can be used to target those who benefit from the experimental treatment. Trial characteristics such as the accrual rate and the prognostic value of baseline variables are typically unknown when a trial is being planned; these values are typically assumed based on information available before the trial starts. Because of the added complexity in adaptive enrichment designs compared to standard designs, it may be of special concern how sensitive the trial performance is to deviations from assumptions. Through simulation studies, we evaluate the sensitivity of Type I error, power, expected sample size, and trial duration to different design characteristics. Our simulation distributions mimic features of data from the Alzheimer's Disease Neuroimaging Initiative cohort study, and involve two subpopulations based on a genetic marker. We investigate the impact of the following design characteristics: the accrual rate, the time from enrollment to measurement of a short-term outcome and the primary outcome, and the prognostic value of baseline variables and short-term outcomes. To leverage prognostic information in baseline variables and short-term outcomes, we use a semiparametric, locally efficient estimator, and investigate its strengths and limitations compared to standard estimators. We apply information-based monitoring, and evaluate how accurately information can be estimated in an ongoing trial.

  3. Firefly as a novel swarm intelligence variable selection method in spectroscopy.

    Science.gov (United States)

    Goodarzi, Mohammad; dos Santos Coelho, Leandro

    2014-12-10

    A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting since they mimic animal and insect behavior, e.g., finding the shortest path between a food source and the nest. Decisions are made collectively, leading to a more robust model that is less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results in comparison to when only a PLS model was built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a lower number of selected wavelengths while the prediction performance of the built PLS model stays the same. Copyright © 2014. Published by Elsevier B.V.
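
    To make the wrapper idea concrete, here is a compact sketch of a binary firefly-style wavelength selector coupled with PLS: each firefly is a wavelength mask, its "brightness" is the negative cross-validated RMSE of a PLS model on the selected columns, and dimmer fireflies copy bits from brighter ones with a probability that decays with their Hamming distance. The binarization rule, the parameters and the fitness are illustrative choices, not the authors' exact algorithm.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y, n_comp=5):
        """Negative RMSECV of a PLS model restricted to the selected wavelengths."""
        if mask.sum() < n_comp:
            return -np.inf
        scores = cross_val_score(PLSRegression(n_components=n_comp), X[:, mask], y,
                                 cv=5, scoring="neg_root_mean_squared_error")
        return scores.mean()

    def firefly_select(X, y, n_fireflies=10, n_iter=30,
                       beta0=1.0, gamma=0.05, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        masks = rng.random((n_fireflies, p)) < 0.3          # random initial wavelength subsets
        bright = np.array([fitness(m, X, y) for m in masks])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if bright[j] > bright[i]:               # move firefly i towards brighter j
                        dist = np.count_nonzero(masks[i] ^ masks[j])    # Hamming distance
                        beta = beta0 * np.exp(-gamma * dist)
                        copy = rng.random(p) < beta
                        masks[i, copy] = masks[j, copy]
                        flip = rng.random(p) < alpha        # random exploration
                        masks[i, flip] = ~masks[i, flip]
                        bright[i] = fitness(masks[i], X, y)
        best = int(np.argmax(bright))
        return masks[best], -bright[best]                   # selected wavelengths, RMSECV
    ```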

  4. AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity.

    Science.gov (United States)

    Sun, Lei; Wang, Jun; Wei, Jinmao

    2017-03-14

    The Receiver Operating Characteristic (ROC) curve is well known for evaluating classification performance in the biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and identify disease-related genes (features). The existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find the real target feature subset due to their lack of effective means to reduce redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by measuring the distances between misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary to each other. Subsequently, we propose a novel filter feature selection approach on the basis of ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with the highest complementarities. The experimental results on a broad range of microarray data sets validate that the classifiers built on the feature subset selected by our approach can achieve the minimal balanced error rate with a small number of significant features. Compared with other ROC-based feature selection approaches, our new approach can select fewer features and effectively improve the classification performance.
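
    The complementarity idea can be illustrated with a small sketch: for instances misclassified by a one-feature nearest-neighbour rule, the nearest neighbour on that feature is also the nearest miss, and the separation of the pair along a second feature indicates how much that second feature could help. This is only an illustration of the idea in the abstract, not the authors' exact scoring.

    ```python
    import numpy as np

    def complementarity(X, y, f1, f2):
        """Mean separation along feature f2 between instances misclassified by a 1-NN rule
        on feature f1 and their nearest miss on f1 (larger => f2 complements f1)."""
        x1 = X[:, f1].astype(float)
        d1 = np.abs(x1[:, None] - x1[None, :])        # pairwise distances on f1
        np.fill_diagonal(d1, np.inf)
        seps = []
        for i in range(len(y)):
            nn = int(np.argmin(d1[i]))                # nearest neighbour on f1
            if y[nn] != y[i]:                         # i is misclassified using f1 alone,
                seps.append(abs(X[i, f2] - X[nn, f2]))  # so nn is also its nearest miss
        return float(np.mean(seps)) if seps else 0.0
    ```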

  5. MODELING THE TIME VARIABILITY OF SDSS STRIPE 82 QUASARS AS A DAMPED RANDOM WALK

    International Nuclear Information System (INIS)

    MacLeod, C. L.; Ivezic, Z.; Bullock, E.; Kimball, A.; Sesar, B.; Westman, D.; Brooks, K.; Gibson, R.; Becker, A. C.; Kochanek, C. S.; Kozlowski, S.; Kelly, B.; De Vries, W. H.

    2010-01-01

    We model the time variability of ∼9000 spectroscopically confirmed quasars in SDSS Stripe 82 as a damped random walk (DRW). Using 2.7 million photometric measurements collected over 10 yr, we confirm the results of Kelly et al. and Kozlowski et al. that this model can explain quasar light curves at an impressive fidelity level (0.01-0.02 mag). The DRW model provides a simple, fast (O(N) for N data points), and powerful statistical description of quasar light curves by a characteristic timescale (τ) and an asymptotic rms variability on long timescales (SF∞). We searched for correlations between these two variability parameters and physical parameters such as luminosity, black hole mass, and rest-frame wavelength. Our analysis shows SF∞ to increase with decreasing luminosity and rest-frame wavelength as observed previously, and without a correlation with redshift. We find a correlation between SF∞ and black hole mass with a power-law index of 0.18 ± 0.03, independent of the anti-correlation with luminosity. We find that τ increases with increasing wavelength with a power-law index of 0.17, remains nearly constant with redshift and luminosity, and increases with increasing black hole mass with a power-law index of 0.21 ± 0.07. The amplitude of variability is anti-correlated with the Eddington ratio, which suggests a scenario where optical fluctuations are tied to variations in the accretion rate. However, we find an additional dependence on luminosity and/or black hole mass that cannot be explained by the trend with Eddington ratio. The radio-loudest quasars have systematically larger variability amplitudes by about 30%, when corrected for the other observed trends, while the distribution of their characteristic timescale is indistinguishable from that of the full sample. We do not detect any statistically robust differences in the characteristic timescale and variability amplitude between the full sample and the small subsample of quasars detected
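
    The damped random walk is the Ornstein-Uhlenbeck process, so a light curve with a given τ and SF∞ can be simulated exactly at arbitrary epochs with a first-order autoregressive update, using the conventional relation σ = SF∞/√2 for the asymptotic standard deviation. The sampling pattern and magnitudes below are illustrative.

    ```python
    import numpy as np

    def simulate_drw(t, tau, sf_inf, mean_mag=19.0, seed=0):
        """Exact DRW (OU-process) light curve at epochs t (days).
        Structure function: SF(dt) = SF_inf * sqrt(1 - exp(-dt/tau)); sigma = SF_inf / sqrt(2)."""
        rng = np.random.default_rng(seed)
        sigma = sf_inf / np.sqrt(2.0)
        mag = np.empty(len(t), dtype=float)
        mag[0] = mean_mag + rng.normal(0.0, sigma)
        for i in range(1, len(t)):
            rho = np.exp(-(t[i] - t[i - 1]) / tau)
            mag[i] = mean_mag + rho * (mag[i - 1] - mean_mag) \
                     + rng.normal(0.0, sigma * np.sqrt(1.0 - rho ** 2))
        return mag

    # e.g. 10 years of irregular sampling with tau = 200 d and SF_inf = 0.2 mag
    t = np.sort(np.random.default_rng(1).uniform(0, 3650, 500))
    lc = simulate_drw(t, tau=200.0, sf_inf=0.2)
    ```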

  6. Complexity and time asymmetry of heart rate variability are altered in acute mental stress.

    Science.gov (United States)

    Visnovcova, Z; Mestanik, M; Javorka, M; Mokra, D; Gala, M; Jurko, A; Calkovska, A; Tonhajzerova, I

    2014-07-01

    We aimed to study the complexity and time asymmetry of short-term heart rate variability (HRV) as an index of complex neurocardiac control in response to stress using symbolic dynamics and time irreversibility methods. ECG was recorded at rest and during and after two stressors (Stroop, arithmetic test) in 70 healthy students. Symbolic dynamics parameters (NUPI, NCI, 0V%, 1V%, 2LV%, 2UV%), and time irreversibility indices (P%, G%, E) were evaluated. Additionally, HRV magnitude was quantified by linear parameters: spectral powers in low (LF) and high frequency (HF) bands. Our results showed a reduction of HRV complexity in stress (lower NUPI with both stressors, lower NCI with Stroop). Pattern classification analysis revealed significantly higher 0V% and lower 2LV% with both stressors, indicating a shift in sympathovagal balance, and significantly higher 1V% and lower 2UV% with Stroop. An unexpected result was found in time irreversibility: significantly lower G% and E with both stressors, P% index significantly declined only with arithmetic test. Linear HRV analysis confirmed vagal withdrawal (lower HF) with both stressors; LF significantly increased with Stroop and decreased with arithmetic test. Correlation analysis revealed no significant associations between symbolic dynamics and time irreversibility. Concluding, symbolic dynamics and time irreversibility could provide independent information related to alterations of neurocardiac control integrity in stress-related disease.

  7. Complexity and time asymmetry of heart rate variability are altered in acute mental stress

    International Nuclear Information System (INIS)

    Visnovcova, Z; Mestanik, M; Javorka, M; Mokra, D; Calkovska, A; Tonhajzerova, I; Gala, M; Jurko, A

    2014-01-01

    We aimed to study the complexity and time asymmetry of short-term heart rate variability (HRV) as an index of complex neurocardiac control in response to stress using symbolic dynamics and time irreversibility methods. ECG was recorded at rest and during and after two stressors (Stroop, arithmetic test) in 70 healthy students. Symbolic dynamics parameters (NUPI, NCI, 0V%, 1V%, 2LV%, 2UV%), and time irreversibility indices (P%, G%, E) were evaluated. Additionally, HRV magnitude was quantified by linear parameters: spectral powers in low (LF) and high frequency (HF) bands. Our results showed a reduction of HRV complexity in stress (lower NUPI with both stressors, lower NCI with Stroop). Pattern classification analysis revealed significantly higher 0V% and lower 2LV% with both stressors, indicating a shift in sympathovagal balance, and significantly higher 1V% and lower 2UV% with Stroop. An unexpected result was found in time irreversibility: significantly lower G% and E with both stressors, P% index significantly declined only with arithmetic test. Linear HRV analysis confirmed vagal withdrawal (lower HF) with both stressors; LF significantly increased with Stroop and decreased with arithmetic test. Correlation analysis revealed no significant associations between symbolic dynamics and time irreversibility. Concluding, symbolic dynamics and time irreversibility could provide independent information related to alterations of neurocardiac control integrity in stress-related disease. (paper)

  8. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    Science.gov (United States)

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
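
    As a deliberately simplified illustration of the minimum-depuration-time idea (the paper's full model also treats the initial distribution of loads, which is omitted here), a first-order decay C(t) = C0·exp(-kt) gives the required time directly. The numbers below, including the decay rate constant, are illustrative assumptions and not values from the study.

    ```python
    import numpy as np

    def min_depuration_time(c0, c_target, k_per_hour):
        """Hours needed for first-order decay to bring the load from c0 down to c_target."""
        if c0 <= c_target:
            return 0.0
        return float(np.log(c0 / c_target) / k_per_hour)

    # Illustrative numbers: initial load 1000 copies/g, management level 200 copies/g,
    # assumed decay rate 0.01 per hour
    print(f"{min_depuration_time(1000.0, 200.0, 0.01):.0f} h")   # ~161 h
    ```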

  9. Analysis of agility, reaction time and balance variables at badminton players aged 9-14 years

    Directory of Open Access Journals (Sweden)

    Seydi Ahmet Ağaoğlu

    2017-12-01

    Full Text Available Aim: The aim of this study was to investigate agility, static and dynamic balance, and reaction time in badminton players aged 9-14 years and to examine the relationships among these variables. Material and Methods: In Samsun, 19 male (sport age 3.42±1.64 years) and 12 female (3.00±1.28 years) active badminton players in the 9-14 age range participated voluntarily. Agility was measured by the “T” test, a CSMI-Tecnobody Pk-252 isokinetic balance system was used to test static and dynamic balance, and a Mozart Lafayette reaction instrument was used to test the players' visual and auditory reaction times. Spearman correlation analysis was applied, with the level of significance taken as p<0.05. Results: For female athletes, a positive relation was found between agility and the perimeter (mm) covered in the double-foot, eyes-open static balance measure (r=0.727; p<0.01). For male athletes, a positive relation was found between visual reaction time and the perimeter (mm) covered in the dominant-foot, eyes-open static balance measure (r=0.725; p<0.01). No correlation was found between reaction time and dynamic balance for either male or female athletes. Conclusion: Auditory (ear) and visual (eye) reaction times were related to balance; when the players' eyes were closed, auditory input had a greater influence on the dominant-foot balance test.

  10. An Investigation of Selected Variables Related to Student Algebra I Performance in Mississippi

    Science.gov (United States)

    Scott, Undray

    2016-01-01

    This research study attempted to determine if specific variables were related to student performance on the Algebra I subject-area test. This study also sought to determine in which of grades 8, 9, or 10 students performed better on the Algebra I Subject Area Test. This study also investigated the different criteria that are used to schedule…

  11. Variable Selection Strategies for Small-area Estimation Using FIA Plots and Remotely Sensed Data

    Science.gov (United States)

    Andrew Lister; Rachel Riemann; James Westfall; Mike Hoppus

    2005-01-01

    The USDA Forest Service's Forest Inventory and Analysis (FIA) unit maintains a network of tens of thousands of georeferenced forest inventory plots distributed across the United States. Data collected on these plots include direct measurements of tree diameter and height and other variables. We present a technique by which FIA plot data and coregistered...

  12. Variable selection for modelling effects of eutrophication on stream and river ecosystems

    NARCIS (Netherlands)

    Nijboer, R.C.; Verdonschot, P.F.M.

    2004-01-01

    Models are needed for forecasting the effects of eutrophication on stream and river ecosystems. Most of the current models do not include differences in local stream characteristics and effects on the biota. To define the most important variables that should be used in a stream eutrophication model,

  13. SPATIAL AND TEMPORAL VARIABILITY IN ACROLEIN AND SELECT VOLATILE ORGANIC COMPOUNDS IN DETROIT, MICHIGAN

    Science.gov (United States)

    The variability in outdoor concentrations of acrolein, benzene, toluene, ethylbenzene and xylenes (BTEX), and 1,3-butadiene was examined for data measured during summer 2004 of the Detroit Exposure and Aerosol Research Study (DEARS). Results for acrolein indicated no significant...

  14. Temporal variability of selected chemical and physical properties of topsoil of three soil types

    Czech Academy of Sciences Publication Activity Database

    Jirků, V.; Kodešová, R.; Nikodem, A.; Mühlhanselová, M.; Žigová, Anna

    2013-01-01

    Roč. 15, - (2013) ISSN 1607-7962. [EGU General Assembly /10./. 07.04.2013-12.04.2013, Vienna] R&D Projects: GA ČR GA526/08/0434 Institutional support: RVO:67985831 Keywords : soil properties * soil types * temporal variability Subject RIV: DF - Soil Science http://meetingorganizer.copernicus.org/EGU2013/EGU2013-7650-1.pdf

  15. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    Science.gov (United States)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by a PLS model using all variables. The best model was obtained using the siPLS algorithm with spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.
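
    The synergy-interval idea can be sketched compactly: split the spectrum into equal intervals, fit a PLS model on every combination of three intervals, and keep the combination with the lowest cross-validated RMSE. The interval count, number of components and CV scheme below are illustrative, and this is a sketch of the general siPLS strategy rather than the authors' implementation.

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def sipls(X, y, n_intervals=20, n_pick=3, n_comp=5, cv=5):
        """Return (RMSECV, interval indices) for the best combination of spectral intervals."""
        edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
        intervals = [np.arange(edges[i], edges[i + 1]) for i in range(n_intervals)]
        best = (np.inf, None)
        for combo in combinations(range(n_intervals), n_pick):
            cols = np.concatenate([intervals[i] for i in combo])
            rmse = -cross_val_score(PLSRegression(n_components=min(n_comp, len(cols))),
                                    X[:, cols], y, cv=cv,
                                    scoring="neg_root_mean_squared_error").mean()
            if rmse < best[0]:
                best = (rmse, combo)
        return best
    ```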

  16. Effects of time-variable exposure regimes of the insecticide chlorpyrifos on freshwater invertebrate communities in microcosms

    NARCIS (Netherlands)

    Zafar, M.I.; Wijngaarden, van R.; Roessink, I.; Brink, van den P.J.

    2011-01-01

    The present study compared the effects of different time-variable exposure regimes having the same time-weighted average (TWA) concentration of the organophosphate insecticide chlorpyrifos on freshwater invertebrate communities to enable extrapolation of effects across exposure regimes. The

  17. Lung lesion doubling times: values and variability based on method of volume determination

    International Nuclear Information System (INIS)

    Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory

    2008-01-01

    Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with ≥ two thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23-2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs
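
    The doubling time implied by two volume measurements separated by Δt days follows from assuming exponential growth, DT = Δt·ln2 / ln(V2/V1). A small worked example (the volumes and interval are illustrative, not study data):

    ```python
    import numpy as np

    def doubling_time(v1, v2, dt_days):
        """Doubling time (days) assuming exponential growth between two volume measurements."""
        return dt_days * np.log(2.0) / np.log(v2 / v1)

    # Example: a lesion growing from 500 to 800 mm^3 over 90 days
    print(f"{doubling_time(500.0, 800.0, 90):.0f} days")   # ~133 days
    ```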

  18. Modelling a variable valve timing spark ignition engine using different neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beham, M. [BMW AG, Munich (Germany); Yu, D.L. [John Moores University, Liverpool (United Kingdom). Control Systems Research Group

    2004-10-01

    In this paper different neural networks (NN) are compared for modelling a variable valve timing spark-ignition (VVT SI) engine. The overall system is divided for each output into five neural multi-input single output (MISO) subsystems. Three kinds of NN, multilayer perceptron (MLP), pseudo-linear radial basis function (PLRBF), and local linear model tree (LOLIMOT) networks, are used to model each subsystem. Real data were collected when the engine was under different operating conditions and these data are used in training and validation of the developed neural models. The obtained models are finally tested in a real-time online model configuration on the test bench. The neural models run independently of the engine in parallel mode. The model outputs are compared with the process output and among the different models. The models performed well and can be used in model-based engine control and optimization, and for hardware-in-the-loop systems. (author)

  19. Industrial implementation of spatial variability control by real-time SPC

    Science.gov (United States)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to get the wafer process set up correctly. The critical dimension of components decreases following Moore's law. At the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tool processes cannot decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the waferfab to automatically adjust and tune wafer processing based on extensive process context information. These systems can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to bring the spatial variability under control in real time with our SPC (Statistical Process Control) system. This paper will outline the architecture of an integrated process control system for shape monitoring in 3D, implemented in a waferfab.

  20. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    Science.gov (United States)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is necessary to monitor daily health condition to prevent stress syndrome. In this study, we propose a method for assessing mental and physiological condition, such as work stress or relaxation, continuously and in real time using heart rate variability. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats are used to calculate these indexes, which are updated at every beat interval. Three conditions, sitting at rest, performing mental arithmetic and watching a relaxation movie, were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for mental arithmetic and the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method is suitable for real-time assessment.
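
    The two indices described (instantaneous HR and the NEP-to-beats ratio over the latest 20 beats, updated every beat) can be sketched in a few lines. Counting extreme points as slope sign changes of the HR series is an assumption made here for illustration; the authors' exact definition may differ.

    ```python
    import numpy as np

    def nep_ratio(rr_s, window=20):
        """Ratio of extreme points to beats over a sliding window of instantaneous HR."""
        hr = 60.0 / np.asarray(rr_s, dtype=float)          # instantaneous heart rate (bpm)
        ratios = []
        for end in range(window, len(hr) + 1):
            w = hr[end - window:end]
            d = np.diff(w)
            extremes = int(np.sum(d[:-1] * d[1:] < 0))     # slope sign changes = local extrema
            ratios.append(extremes / window)
        return np.array(ratios)                            # one value per beat after warm-up
    ```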

  1. The effect of time trial cycling position on physiological and aerodynamic variables.

    Science.gov (United States)

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2015-01-01

    To reduce aerodynamic resistance, cyclists lower their torso angle, concurrently reducing Peak Power Output (PPO). However, realistic torso angle changes in the range used by time trial cyclists have not yet been examined. Therefore the aim of this study was to investigate the effect of torso angle on physiological parameters and frontal area in different commonly used time trial positions. Nineteen well-trained male cyclists performed incremental tests on a cycle ergometer at five different torso angles: their preferred torso angle and at 0, 8, 16 and 24°. Oxygen uptake, carbon dioxide expiration, minute ventilation, gross efficiency, PPO, heart rate, cadence and frontal area were recorded. The frontal area provides an estimate of the aerodynamic drag. Overall, results showed that lower torso angles attenuated performance. Maximal values of all variables attained in the incremental test decreased with lower torso angles (P < 0.05), indicating a trade-off between aerodynamic drag and physiological functioning.

  2. Excitation of Earth Rotation Variations "Observed" by Time-Variable Gravity

    Science.gov (United States)

    Chao, Ben F.; Cox, C. M.

    2005-01-01

    Time-variable gravity measurements have been made over the past two decades using the space geodetic technique of satellite laser ranging, and more recently by the GRACE satellite mission with improved spatial resolutions. The degree-2 harmonic components of the time-variable gravity contain important information about the Earth's length-of-day and polar motion excitation functions, in a way independent of the traditional "direct" Earth rotation measurements made by, for example, very-long-baseline interferometry and GPS. In particular, the (degree 2, order 1) components give the mass term of the polar motion excitation; the (2,0) component, under certain mass conservation conditions, gives the mass term of the length-of-day excitation. Combining these with yet another independent source of angular momentum estimation calculated from global geophysical fluid models (for example the atmospheric angular momentum, in both mass and motion terms) can in principle lead to new insights into the dynamics, particularly the role or the lack thereof of the cores, in the excitation processes of the Earth rotation variations.

  3. Dissociating neural variability related to stimulus quality and response times in perceptual decision-making.

    Science.gov (United States)

    Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten

    2018-03-01

    According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    Science.gov (United States)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition, have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.

  5. Time variability of C-reactive protein: implications for clinical risk stratification.

    Directory of Open Access Journals (Sweden)

    Peter Bogaty

    Full Text Available C-reactive protein (CRP) is proposed as a screening test for predicting risk and guiding preventive approaches in coronary artery disease (CAD). However, the stability of repeated CRP measurements over time in subjects with and without CAD is not well defined. We sought to determine the stability of serial CRP measurements in stable subjects with distinct CAD manifestations and a group without CAD while carefully controlling for known confounders. We prospectively studied 4 groups of 25 stable subjects each: (1) a history of recurrent acute coronary events; (2) a single myocardial infarction ≥7 years ago; (3) longstanding CAD (≥7 years) that had never been unstable; (4) no CAD. Fifteen measurements of CRP were obtained to cover 21 time-points: 3 times during one day; 5 consecutive days; 4 consecutive weeks; 4 consecutive months; and every 3 months over the year. CRP risk threshold was set at 2.0 mg/L. We estimated variance across time-points using standard descriptive statistics and Bayesian hierarchical models. Median CRP values of the 4 groups and their pattern of variability did not differ substantially so all subjects were analyzed together. The median individual standard deviation (SD) of CRP values within-day, within-week, between-weeks and between-months were 0.07, 0.19, 0.36 and 0.63 mg/L, respectively. Forty-six percent of subjects changed CRP risk category at least once and 21% had ≥4 weekly and monthly CRP values in both low and high-risk categories. Considering its large intra-individual variability, it may be problematic to rely on CRP values for CAD risk prediction and therapeutic decision-making in individual subjects.

  6. Trends and variability in streamflow and snowmelt runoff timing in the southern Tianshan Mountains

    Science.gov (United States)

    Shen, Yan-Jun; Shen, Yanjun; Fink, Manfred; Kralisch, Sven; Chen, Yaning; Brenning, Alexander

    2018-02-01

    Streamflow and snowmelt runoff timing of mountain rivers are susceptible to climate change. Trends and variability in streamflow and snowmelt runoff timing in four mountain basins in the southern Tianshan were analyzed in this study. Streamflow trends were detected by Mann-Kendall tests and changes in snowmelt runoff timing were analyzed based on the winter/spring snowmelt runoff center time (WSCT). Pearson's correlation coefficient was further calculated to analyze the relationships between climate variables, streamflow and WSCT. Annual streamflow increased significantly in past decades in the southern Tianshan, especially in spring and winter months. However, the relations between streamflow and temperature/precipitation depend on the different streamflow generation processes. Annual precipitation plays a vital role in controlling recharge in the Toxkon basin, while the Kaidu and Huangshuigou basins are governed by both precipitation and temperature. Seasonally, temperature has a strong effect on streamflow in autumn and winter, while summer streamflow appears more sensitive to changes in precipitation. However, temperature is the dominant factor for streamflow in the glacierized Kunmalik basin at annual and seasonal scales. An uptrend in streamflow begins in the 1990s at both annual and seasonal scales, which is generally consistent with temperature and precipitation fluctuations. Average WSCT dates in the Kaidu and Huangshuigou basins are earlier than in the Toxkon and Kunmalik basins, and shifted towards earlier dates since the mid-1980s in all the basins. It is plausible that WSCT dates are more sensitive to warmer temperatures in the spring period than to precipitation, except for the Huangshuigou basin. Taken together, these findings are useful for applications in flood risk regulation, future hydropower projects and integrated water resources management.
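
    Runoff center timing measures of this kind are commonly computed as the flow-weighted mean date of the winter/spring discharge record; the sketch below follows that common definition (the January-June window and the day-of-year output are assumptions for illustration, and the paper's exact definition of WSCT may differ).

    ```python
    import numpy as np
    import pandas as pd

    def wsct(dates, flow, start_month=1, end_month=6):
        """Flow-weighted mean date (day of year) of winter/spring streamflow."""
        s = pd.Series(flow, index=pd.DatetimeIndex(dates))
        s = s[(s.index.month >= start_month) & (s.index.month <= end_month)]
        doy = s.index.dayofyear.to_numpy()
        q = s.to_numpy()
        return float(np.sum(doy * q) / np.sum(q))
    ```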

  7. A stochastic fractional dynamics model of space-time variability of rain

    Science.gov (United States)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  8. Identifying market segments in consumer markets: variable selection and data interpretation

    OpenAIRE

    Tonks, D G

    2004-01-01

    Market segmentation is often articulated as being a process which displays the recognised features of classical rationalism, but in part convention, convenience, prior experience and the overarching impact of rhetoric will influence, if not determine, the outcomes of a segmentation exercise. Particular examples of this process are addressed critically in this paper, which concentrates on the issues of variable choice for multivariate approaches to market segmentation and also the methods used fo...

  9. Variability in dose estimates associated with the food-chain transport and ingestion of selected radionuclides

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Gardner, R.H.; Eckerman, K.F.

    1982-06-01

    Dose predictions for the ingestion of 90Sr and 137Cs, using aquatic and terrestrial food chain transport models similar to those in the Nuclear Regulatory Commission's Regulatory Guide 1.109, are evaluated through estimating the variability of model parameters and determining the effect of this variability on model output. The variability in the predicted dose equivalent is determined using analytical and numerical procedures. In addition, a detailed discussion is included on 90Sr dosimetry. The overall estimates of uncertainty are most relevant to conditions where site-specific data is unavailable and when model structure and parameter estimates are unbiased. Based on the comparisons performed in this report, it is concluded that the use of the generic default parameters in Regulatory Guide 1.109 will usually produce conservative dose estimates that exceed the 90th percentile of the predicted distribution of dose equivalents. An exception is the meat pathway for 137Cs, in which use of generic default values results in a dose estimate at the 24th percentile. Among the terrestrial pathways of exposure, the non-leafy vegetable pathway is the most important for 90Sr. For 90Sr, the parameters for soil retention, soil-to-plant transfer, and internal dosimetry contribute most significantly to the variability in the predicted dose for the combined exposure to all terrestrial pathways. For 137Cs, the meat transfer coefficient, the mass interception factor for pasture forage, and the ingestion dose factor are the most important parameters. The freshwater finfish bioaccumulation factor is the most important parameter for the dose prediction of 90Sr and 137Cs transported over the water-fish-man pathway.

  10. Space and time variability of heating requirements for greenhouse tomato production in the Euro-Mediterranean area.

    Science.gov (United States)

    Mariani, Luigi; Cola, Gabriele; Bulgari, Roberta; Ferrante, Antonio; Martinetti, Livia

    2016-08-15

    The Euro-Mediterranean area hosts substantial greenhouse activity, meeting the needs of important markets. A quantitative assessment of greenhouse energy consumption and of its variability in space and time is an important decision support tool for both greenhouse-sector policies and farmers. A mathematical model of greenhouse energy balance was developed and parameterized for a state-of-the-art greenhouse to evaluate the heating requirements for vegetable growing. Tomato was adopted as the reference crop, due to its high energy requirement for fruit setting and ripening and its economic relevance. In order to gain a proper description of the Euro-Mediterranean area, 56 greenhouse areas located within the ranges 28°N-72°N and 11°W-55°E were analyzed over the period 1973-2014. Moreover, the two 1973-1987 and 1988-2014 sub-periods were separately studied to describe climate change effects on energy consumption. Results account for the spatial variability of energy needs for tomato growing, highlighting the strong influence of latitude on the magnitude of heat requirements. The comparison between the two selected sub-periods shows a decrease in energy demand in the current warm phase, more pronounced at high latitudes. Finally, suggestions to reduce energy consumption are provided. Copyright © 2016 Elsevier B.V. All rights reserved.
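
    For orientation only, a drastically simplified steady-state version of a greenhouse heating requirement sums the conductive loss through the cover whenever the outside temperature falls below the set point. The U-value, set point and cover-to-ground area ratio below are illustrative assumptions; the paper's parameterized energy-balance model is considerably more detailed.

    ```python
    import numpy as np

    def heating_requirement(t_out_hourly, t_set=16.0, u_value=6.0, cover_ratio=1.3):
        """Seasonal heating requirement per m^2 of ground area (MJ m^-2).

        Q = U * (A_cover / A_ground) * max(T_set - T_out, 0), summed over hours.
        U in W m^-2 K^-1; all parameter values are illustrative.
        """
        deficit = np.clip(t_set - np.asarray(t_out_hourly, dtype=float), 0.0, None)   # K
        q_w_per_m2 = u_value * cover_ratio * deficit        # W per m^2 of ground
        return float(q_w_per_m2.sum() * 3600.0 / 1e6)       # W·h -> J -> MJ
    ```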

  11. Time-variable gravity fields and ocean mass change from 37 months of kinematic Swarm orbits

    Science.gov (United States)

    Lück, Christina; Kusche, Jürgen; Rietbroek, Roelof; Löcher, Anno

    2018-03-01

    Measuring the spatiotemporal variation of ocean mass allows for partitioning of volumetric sea level change, sampled by radar altimeters, into mass-driven and steric parts. The latter is related to ocean heat change and the current Earth's energy imbalance. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) mission has provided monthly snapshots of the Earth's time-variable gravity field, from which one can derive ocean mass variability. However, GRACE has reached the end of its lifetime, with data degradation and several gaps occurring in recent years, and there will be a prolonged gap until the launch of the follow-on mission GRACE-FO. Therefore, efforts focus on generating a long and consistent ocean mass time series by analyzing kinematic orbits from other low-flying satellites, i.e. extending the GRACE time series. Here we utilize data from the European Space Agency's (ESA) Swarm Earth Explorer satellites to derive and investigate ocean mass variations. For this aim, we use the integral equation approach with short arcs (Mayer-Gürr, 2006) to compute more than 500 time-variable gravity fields with different parameterizations from kinematic orbits. We investigate the potential to bridge the gap between the GRACE and the GRACE-FO mission and to substitute missing monthly solutions with Swarm results of significantly lower resolution. Our monthly Swarm solutions have a root mean square error (RMSE) of 4.0 mm with respect to GRACE, whereas directly estimating constant, trend, annual, and semiannual (CTAS) signal terms leads to an RMSE of only 1.7 mm. Concerning monthly gaps, our CTAS Swarm solution appears better than interpolating existing GRACE data in 13.5 % of all cases, when artificially removing one solution. In the case of an 18-month artificial gap, 80.0 % of all CTAS Swarm solutions were found closer to the observed GRACE data compared to interpolated GRACE data. Furthermore, we show that precise modeling of non-gravitational forces
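
    Constant, trend, annual and semiannual (CTAS) terms of a time series can be estimated with ordinary least squares on a small harmonic design matrix; the sketch below shows that generic fit (it is not the authors' processing chain, and time is assumed to be given in years).

    ```python
    import numpy as np

    def fit_ctas(t_years, y):
        """Least-squares fit of constant, trend, annual and semiannual terms."""
        w = 2.0 * np.pi                                    # rad/yr for the annual cycle
        A = np.column_stack([np.ones_like(t_years), t_years,
                             np.cos(w * t_years), np.sin(w * t_years),
                             np.cos(2 * w * t_years), np.sin(2 * w * t_years)])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs, A @ coeffs                          # parameters and fitted series
    ```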

  12. Predictor variables for half marathon race time in recreational female runners

    Directory of Open Access Journals (Sweden)

    Beat Knechtle

    2011-01-01

    Full Text Available INTRODUCTION: The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except the half marathon distance. OBJECTIVE: To investigate whether anthropometry characteristics or training practices were related to race time in 42 recreational female half marathoners to determine the predictor variables of half-marathon race time and to inform future novice female half marathoners. METHODS: Observational field study at the 'Half Marathon Basel' in Switzerland. RESULTS: In the bivariate analysis, body mass (r = 0.60), body mass index (r = 0.48), body fat (r = 0.56), skin-fold at pectoral (r = 0.61), mid-axilla (r = 0.69), triceps (r = 0.49), subscapular (r = 0.61), abdominal (r = 0.59), suprailiac (r = 0.55), medial calf (r = 0.53) site, and speed of the training sessions (r = -0.68) correlated to race time. Mid-axilla skin-fold (p = 0.04) and speed of the training sessions (p = 0.0001) remained significant after multi-variate analysis. Race time in a half marathon might be predicted by the following equation (r² = 0.71): Race time (min) = 166.7 + 1.7x (mid-axilla skin-fold, mm) - 6.4x (speed in training, km/h). Running speed during training was related to skinfold thickness at mid-axilla (r = -0.31), subscapular (r = -0.38), abdominal (r = -0.44), suprailiacal (r = -0.41), the sum of eight skin-folds (r = -0.36) and percent body fat (r = -0.31). CONCLUSION: Anthropometric and training variables were related to half-marathon race time in recreational female runners. Skin-fold thicknesses at various upper body locations were related to training intensity. High running speed in training appears to be important for fast half-marathon race times and may reduce upper body skin-fold thicknesses in recreational female half marathoners.

  13. Predictor variables for half marathon race time in recreational female runners.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas; Lepers, Romuald

    2011-01-01

    The relationship between skin-fold thickness and running performance has been investigated from 100 m to the marathon distance, except the half marathon distance. To investigate whether anthropometry characteristics or training practices were related to race time in 42 recreational female half marathoners to determine the predictor variables of half-marathon race time and to inform future novice female half marathoners. Observational field study at the 'Half Marathon Basel' in Switzerland. In the bivariate analysis, body mass (r = 0.60), body mass index (r = 0.48), body fat (r = 0.56), skin-fold at pectoral (r = 0.61), mid-axilla (r = 0.69), triceps (r = 0.49), subscapular (r = 0.61), abdominal (r = 0.59), suprailiac (r = 0.55) medial calf (r = 0.53) site, and speed of the training sessions (r = -0.68) correlated to race time. Mid-axilla skin-fold (p = 0.04) and speed of the training sessions (p = 0.0001) remained significant after multi-variate analysis. Race time in a half marathon might be predicted by the following equation (r² = 0.71): Race time (min) = 166.7 + 1.7x (mid-axilla skin-fold, mm) - 6.4x (speed in training, km/h). Running speed during training was related to skinfold thickness at mid-axilla (r = -0.31), subscapular (r = -0.38), abdominal (r = -0.44), suprailiacal (r = -0.41), the sum of eight skin-folds (r = -0.36) and percent body fat (r = -0.31). Anthropometric and training variables were related to half-marathon race time in recreational female runners. Skin-fold thicknesses at various upper body locations were related to training intensity. High running speed in training appears to be important for fast half-marathon race times and may reduce upper body skin-fold thicknesses in recreational female half marathoners.
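
    The regression quoted above can be turned into a one-line predictor. The coefficients are exactly those reported for this cohort of 42 recreational female runners and should not be applied outside it; the example inputs are made up for illustration.

    ```python
    def predict_half_marathon_min(mid_axilla_skinfold_mm, training_speed_kmh):
        """Race time (min) = 166.7 + 1.7 * skin-fold (mm) - 6.4 * training speed (km/h)."""
        return 166.7 + 1.7 * mid_axilla_skinfold_mm - 6.4 * training_speed_kmh

    # e.g. a 10 mm mid-axilla skin-fold and 11 km/h training speed -> about 113 min
    print(round(predict_half_marathon_min(10, 11), 1))
    ```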

  14. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regression models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135
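
    In the same spirit (many candidate covariates, few observations), LAR-driven LASSO selection with cross-validation is available off the shelf; the sketch below uses scikit-learn on synthetic data rather than the authors' code or data, with the 800-covariate/60-observation shape chosen to mirror the setting described.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoLarsCV

    rng = np.random.default_rng(0)
    n, p = 60, 800                                   # few observations, many candidate covariates
    X = rng.standard_normal((n, p))
    y = X[:, :5] @ np.array([2.0, -1.5, 1.0, 0.5, 0.5]) + 0.5 * rng.standard_normal(n)

    model = LassoLarsCV(cv=5).fit(X, y)              # LAR path + LASSO penalty, CV-chosen alpha
    selected = np.flatnonzero(model.coef_)
    print(f"{selected.size} covariates selected:", selected[:10])
    ```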

  15. Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data

    Science.gov (United States)

    Keller, Bryan; Chen, Jianshen

    2016-01-01

    Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…

  16. Computed ABC Analysis for Rational Selection of Most Informative Variables in Multivariate Data.

    Science.gov (United States)

    Ultsch, Alfred; Lötsch, Jörn

    2015-01-01

    Multivariate data sets often differ in several factors or derived statistical parameters, which have to be selected for a valid interpretation. Basing this selection on traditional statistical limits leads occasionally to the perception of losing information from a data set. This paper proposes a novel method for calculating precise limits for the selection of parameter sets. The algorithm is based on an ABC analysis and calculates these limits on the basis of the mathematical properties of the distribution of the analyzed items. The limits implement the aim of any ABC analysis, i.e., comparing the increase in yield to the required additional effort. In particular, the limit for set A, the "important few", is optimized in a way that both the effort and the yield for the other sets (B and C) are minimized and the additional gain is optimized. As a typical example from biomedical research, the feasibility of the ABC analysis as an objective replacement for classical subjective limits to select highly relevant variance components of pain thresholds is presented. The proposed method improved the biological interpretation of the results and increased the fraction of valid information that was obtained from the experimental data. The method is applicable to many further biomedical problems including the creation of diagnostic complex biomarkers or short screening tests from comprehensive test batteries. Thus, the ABC analysis can be proposed as a mathematically valid replacement for traditional limits to maximize the information obtained from multivariate research data.
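
    A simplified sketch of an ABC-style partition is given below. It is not the authors' exact computed-ABC algorithm: here the A limit is taken as the point of the cumulative yield-versus-effort curve closest to the ideal point (0, 1), and the B/C split uses a break-even rule where an item's marginal yield drops below the average; both rules are assumptions made purely for illustration.

    ```python
    # Simplified ABC-style partition of items by their contribution (illustrative only).
    import numpy as np

    def abc_partition(values):
        v = np.sort(np.asarray(values, dtype=float))[::-1]      # largest contribution first
        n = v.size
        effort = np.arange(1, n + 1) / n                        # cumulative fraction of items
        yield_ = np.cumsum(v) / v.sum()                         # cumulative fraction of yield
        a_limit = int(np.argmin(effort**2 + (1.0 - yield_)**2)) # closest to ideal point (0, 1)
        gain = np.diff(np.concatenate(([0.0], yield_)))         # marginal yield per item
        bc = np.flatnonzero(gain < 1.0 / n)                     # items yielding less than average
        b_limit = int(bc[0]) if bc.size else n - 1
        b_limit = max(b_limit, a_limit + 1)                     # keep the three sets disjoint
        return {"A": v[:a_limit + 1], "B": v[a_limit + 1:b_limit], "C": v[b_limit:]}

    sizes = np.random.default_rng(1).pareto(a=1.5, size=100) + 1.0   # skewed toy data
    parts = abc_partition(sizes)
    print({k: len(s) for k, s in parts.items()})
    ```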

  17. The study of variability and strain selection in Streptomyces atroolivaceus. III

    International Nuclear Information System (INIS)

    Blumauerova, M.; Lipavska, H.; Stajner, K.; Vanek, Z.

    1976-01-01

    Mutants of Streptomyces atroolivaceus blocked in the biosynthesis of mithramycin were isolated both by natural selection and after treatment with mutagenic factors (UV and gamma rays, nitrous acid). Both physical factors were more effective than nitrous acid. The selection was complicated by the high instability of isolates, out of which 20 to 80% (depending on their origin) reverted spontaneously to the parent type. Primary screening (selection of morphological variants and determination of their activity using the method of agar blocks) made it possible to detect only potentially non-productive strains; however, the final selection always had to be made under submerged conditions. Fifty-four stable non-productive mutants were divided, according to results of the chromatographic analysis, into five groups differing in the production of the six biologically inactive metabolites. The mutants did not accumulate chromomycinone, chromocyclomycin and chromocyclin. On mixed cultivation none of the pairs of mutants was capable of the cosynthesis of mithramycin or of new compounds differing from standard metabolites. Possible causes of the above results are discussed. (author)

  18. COPD phenotypes on computed tomography and its correlation with selected lung function variables in severe patients

    Directory of Open Access Journals (Sweden)

    da Silva SMD

    2016-03-01

    Full Text Available Silvia Maria Doria da Silva, Ilma Aparecida Paschoal, Eduardo Mello De Capitani, Marcos Mello Moreira, Luciana Campanatti Palhares, Mônica Corso Pereira; Pneumology Service, Department of Internal Medicine, School of Medical Sciences, State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil. Background: Computed tomography (CT) phenotypic characterization helps in understanding the clinical diversity of chronic obstructive pulmonary disease (COPD) patients, but its clinical relevance and its relationship with functional features are not clarified. Volumetric capnography (VC) uses the principle of gas washout and analyzes the pattern of CO2 elimination as a function of expired volume. The main variables analyzed were end-tidal concentration of carbon dioxide (ETCO2), Slope of phase 2 (Slp2), and Slope of phase 3 (Slp3) of the capnogram, the curve which represents the total amount of CO2 eliminated by the lungs during each breath. Objective: To investigate, in a group of patients with severe COPD, whether the phenotypic analysis by CT could identify different subsets of patients, and whether there was an association between CT findings and functional variables. Subjects and methods: Sixty-five patients with COPD GOLD III–IV were admitted for clinical evaluation, high-resolution CT, and functional evaluation (spirometry, 6-minute walk test [6MWT], and VC). The presence and profusion of tomography findings were evaluated, and the patients were then classified as having an emphysema (EMP) or airway disease (AWD) phenotype. The EMP and AWD groups were compared; tomography finding scores were evaluated against spirometric, 6MWT, and VC variables. Results: Bronchiectasis was found in 33.8% and peribronchial thickening in 69.2% of the 65 patients. Structural findings of airways had no significant correlation with spirometric variables. Air trapping and EMP were strongly correlated with VC variables, but in opposite directions. There was some overlap between the EMP and AWD

  19. Characterization of Machine Variability and Progressive Heat Treatment in Selective Laser Melting of Inconel 718

    Science.gov (United States)

    Prater, Tracie; Tilson, Will; Jones, Zack

    2015-01-01

    The absence of an economy of scale in spaceflight hardware makes additive manufacturing an immensely attractive option for propulsion components. As additive manufacturing techniques are increasingly adopted by government and industry to produce propulsion hardware in human-rated systems, significant development efforts are needed to establish these methods as reliable alternatives to conventional subtractive manufacturing. One of the critical challenges facing powder bed fusion techniques in this application is variability between machines used to perform builds. Even with implementation of robust process controls, it is possible for two machines operating at identical parameters with equivalent base materials to produce specimens with slightly different material properties. The machine variability study presented here evaluates 60 specimens of identical geometry built using the same parameters. 30 samples were produced on machine 1 (M1) and the other 30 samples were built on machine 2 (M2). Each of the 30-sample sets were further subdivided into three subsets (with 10 specimens in each subset) to assess the effect of progressive heat treatment on machine variability. The three categories for post-processing were: stress relief, stress relief followed by hot isostatic press (HIP), and stress relief followed by HIP followed by heat treatment per AMS 5664. Each specimen (a round, smooth tensile) was mechanically tested per ASTM E8. Two formal statistical techniques, hypothesis testing for equivalency of means and one-way analysis of variance (ANOVA), were applied to characterize the impact of machine variability and heat treatment on six material properties: tensile stress, yield stress, modulus of elasticity, fracture elongation, and reduction of area. This work represents the type of development effort that is critical as NASA, academia, and the industrial base work collaboratively to establish a path to certification for additively manufactured parts. For future
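
    The two statistical techniques named above can be sketched in Python; the property values below are synthetic stand-ins, not the measured Inconel 718 data, and the plain two-sample t-test is used in place of a formal equivalence test (which would typically use two one-sided tests against a stated margin):

    ```python
    # Sketch of a machine-to-machine comparison of means and a one-way ANOVA across
    # three progressive heat-treatment conditions, on synthetic placeholder data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    m1_uts = rng.normal(1400, 30, 30)   # hypothetical tensile values, machine 1 (MPa)
    m2_uts = rng.normal(1410, 30, 30)   # hypothetical tensile values, machine 2 (MPa)

    t_stat, p_val = stats.ttest_ind(m1_uts, m2_uts, equal_var=False)
    print(f"machine effect: t = {t_stat:.2f}, p = {p_val:.3f}")

    # One-way ANOVA across the three post-processing conditions (10 specimens each).
    stress_relief = rng.normal(1350, 35, 10)
    sr_hip        = rng.normal(1390, 30, 10)
    sr_hip_ht     = rng.normal(1430, 25, 10)
    f_stat, p_anova = stats.f_oneway(stress_relief, sr_hip, sr_hip_ht)
    print(f"heat-treatment effect: F = {f_stat:.2f}, p = {p_anova:.4f}")
    ```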

  20. Drivers of time-activity budget variability during breeding in a pelagic seabird.

    Directory of Open Access Journals (Sweden)

    Gavin M Rishworth

    Full Text Available During breeding, animal behaviour is particularly sensitive to environmental and food resource availability. Additionally, factors such as sex, body condition, and offspring developmental stage can influence behaviour. Amongst seabirds, behaviour is generally predictably affected by local foraging conditions and has therefore been suggested as a potentially useful proxy to indicate prey state. However, besides prey availability and distribution, a range of other variables also influence seabird behavior, and these need to be accounted for to increase the signal-to-noise ratio when assessing specific characteristics of the environment based on behavioural attributes. The aim of this study was to use continuous, fine-scale time-activity budget data from a pelagic seabird (Cape gannet, Morus capensis) to determine the influence of intrinsic (sex and body condition) and extrinsic (offspring and time) variables on parent behaviour during breeding. Foraging trip duration and chick provisioning rates were clearly sex-specific and associated with chick developmental stage. Females made fewer, longer foraging trips and spent less time at the nest during chick provisioning. These sex-specific differences became increasingly apparent with chick development. Additionally, parents in better body condition spent longer periods at their nests and those which returned later in the day had longer overall nest attendance bouts. Using recent technological advances, this study provides new insights into the foraging behaviour of breeding seabirds, particularly during the post-guarding phase. The biparental strategy of chick provisioning revealed in this study appears to be an example where the costs of egg development to the female are balanced by paternal-dominated chick provisioning particularly as the chick nears fledging.

  1. Increasing work-time influence: consequences for flexibility, variability, regularity and predictability.

    Science.gov (United States)

    Nabe-Nielsen, Kirsten; Garde, Anne Helene; Aust, Birgit; Diderichsen, Finn

    2012-01-01

    This quasi-experimental study investigated how an intervention aiming at increasing eldercare workers' influence on their working hours affected the flexibility, variability, regularity and predictability of the working hours. We used baseline (n = 296) and follow-up (n = 274) questionnaire data and interviews with intervention-group participants (n = 32). The work units in the intervention group designed their own intervention comprising either implementation of computerised self-scheduling (subgroup A), collection of information about the employees' work-time preferences by questionnaires (subgroup B), or discussion of working hours (subgroup C). Only computerised self-scheduling changed the working hours and the way they were planned. These changes implied more flexible but less regular working hours and an experience of less predictability and less continuity in the care of clients and in the co-operation with colleagues. In subgroup B and C, the participants ended up discussing the potential consequences of more work-time influence without actually implementing any changes. Employee work-time influence may buffer the adverse effects of shift work. However, our intervention study suggested that while increasing the individual flexibility, increasing work-time influence may also result in decreased regularity of the working hours and less continuity in the care of clients and co-operation with colleagues.

  2. Decreased reaction time variability is associated with greater cardiovascular responses to acute stress.

    Science.gov (United States)

    Wawrzyniak, Andrew J; Hamer, Mark; Steptoe, Andrew; Endrighi, Romano

    2016-05-01

    Cardiovascular (CV) responses to mental stress are prospectively associated with poor CV outcomes. The association between CV responses to mental stress and reaction times (RTs) in aging individuals may be important but warrants further investigation. The present study assessed RTs to examine associations with CV responses to mental stress in healthy, older individuals using robust regression techniques. Participants were 262 men and women (mean age = 63.3 ± 5.5 years) from the Whitehall II cohort who completed a RT task (Stroop) and underwent acute mental stress (mirror tracing) to elicit CV responses. Blood pressure, heart rate, and heart rate variability were measured at baseline, during acute stress, and through a 75-min recovery. RT measures were generated from an ex-Gaussian distribution that yielded three predictors: mu-RT, sigma-RT, and tau-RT, i.e., the mean and standard deviation of the Gaussian component and the mean of the exponential component of the distribution, respectively. Decreased intraindividual RT variability was marginally associated with greater systolic (B = -.009, SE = .005, p = .09) and diastolic (B = -.004, SE = .002, p = .08) blood pressure reactivity. Decreased intraindividual RT variability was associated with impaired systolic blood pressure recovery (B = -.007, SE = .003, p = .03) and impaired vagal tone (B = -.0047, SE = .0024, p = .045). Study findings offer tentative support for an association between RTs and CV responses. Despite small effect sizes and associations not consistent across predictors, these data may point to a link between intrinsic neuronal plasticity and CV responses. © 2016 The Authors. Psychophysiology published by Wiley Periodicals, Inc. on behalf of Society for Psychophysiological Research.
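
    The three ex-Gaussian RT parameters are the sum of a normal and an exponential component, so they can be illustrated with simulated data. The sketch below recovers mu, sigma, and tau by simple moment matching (the study's actual fitting procedure is not reproduced, and the parameter values are hypothetical):

    ```python
    # Simulate ex-Gaussian reaction times and recover the parameters from moments.
    import numpy as np

    rng = np.random.default_rng(7)
    mu, sigma, tau = 0.45, 0.05, 0.15                 # seconds (hypothetical values)
    rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

    m = rt.mean()
    v = rt.var()
    m3 = ((rt - m) ** 3).mean()                       # third central moment equals 2 * tau^3
    tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
    sigma_hat = np.sqrt(max(v - tau_hat**2, 0.0))
    mu_hat = m - tau_hat
    print(f"mu={mu_hat:.3f}  sigma={sigma_hat:.3f}  tau={tau_hat:.3f}")
    ```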

  3. Variable School Start Times and Middle School Student's Sleep Health and Academic Performance.

    Science.gov (United States)

    Lewin, Daniel S; Wang, Guanghai; Chen, Yao I; Skora, Elizabeth; Hoehn, Jessica; Baylor, Allison; Wang, Jichuan

    2017-08-01

    Improving sleep health among adolescents is a national health priority and implementing healthy school start times (SSTs) is an important strategy to achieve these goals. This study leveraged the differences in middle school SST in a large district to evaluate associations between SST, sleep health, and academic performance. This cross-sectional study draws data from a county-wide surveillance survey. Participants were three cohorts of eighth graders (n = 26,440). The school district is unique because SST ranged from 7:20 a.m. to 8:10 a.m. Path analysis and probit regression were used to analyze associations between SST and self-report measures of weekday sleep duration, grades, and homework controlling for demographic variables (sex, race, and socioeconomic status). The independent contributions of SST and sleep duration to academic performance were also analyzed. Earlier SST was associated with decreased sleep duration (χ² = 173) and with poorer academic performance and academic effort. Path analysis models demonstrated the independent contributions of sleep duration, SST, and variable effects for demographic variables. This is the first study to evaluate the independent contributions of SST and sleep to academic performance in a large sample of middle school students. Deficient sleep was prevalent, and the earliest SST was associated with decrements in sleep and academics. These findings support the prioritization of policy initiatives to implement healthy SST for younger adolescents and highlight the importance of sleep health education disparities among race and gender groups. Copyright © 2017 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  4. A stochastic analysis approach on the cost-time profile for selecting the best future state MA

    Directory of Open Access Journals (Sweden)

    Seyedhosseini, Seyed Mohammad

    2015-05-01

    Full Text Available In the literature on value stream mapping (VSM), the only basis for choosing the best future state map (FSM) among the proposed alternatives is the time factor. As a result, the FSM is selected as the best option because it has the least amount of total production lead time (TPLT). In this paper, the cost factor is considered in the FSM selection process, in addition to the time factor. Thus, for each of the proposed FSMs, the cost-time profile (CTP) is used. Two factors that are of particular importance for the customer and the manufacturer – the TPLT and the direct cost of the product – are reviewed and analysed by calculating the sub-area of the CTP curve, called the cost-time investment (CTI). In addition, variability in the generated data has been studied in each of the CTPs in order to choose the best FSM more precisely and accurately. Based on a proposed step-by-step stochastic analysis method, and also by using non-parametric Kernel estimation methods for estimating the probability density function of CTIs, the process of choosing the best FSM has been carried out, based not only on the minimum expected CTI, but also on the minimum expected variability amount in CTIs among proposed alternatives. By implementing this method during the process of choosing the best FSM, the manufacturing organisations will consider both the cost factor and the variability in the generated data, in addition to the time factor. Accordingly, the decision-making process proceeds more easily and logically than do traditional methods. Finally, to describe the effectiveness and applicability of the proposed method in this paper, it is applied to a case study on an industrial parts manufacturing company in Iran.
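
    The CTI and its variability can be illustrated with toy cost-time profiles; the numbers and the linear cost build-up below are assumptions for illustration, not the case-study data. The sub-area of the CTP is approximated with the trapezoidal rule and the CTI distribution with a Gaussian kernel density estimate:

    ```python
    # Sketch: CTI as the area under a cost-time profile, plus a KDE of simulated CTIs
    # for two hypothetical future state maps (FSMs).
    import numpy as np
    from scipy.stats import gaussian_kde

    def cti(times, accumulated_cost):
        """Area under the accumulated-cost vs. time curve (cost x time units)."""
        return np.trapz(accumulated_cost, times)

    rng = np.random.default_rng(3)

    def simulate_cti(base_lead_time, base_cost_rate, n=2000):
        out = np.empty(n)
        for i in range(n):
            t_end = base_lead_time * rng.uniform(0.9, 1.2)      # variable lead time
            t = np.linspace(0.0, t_end, 50)
            cost = base_cost_rate * t * rng.uniform(0.95, 1.1)  # roughly linear cost build-up
            out[i] = cti(t, cost)
        return out

    cti_fsm1 = simulate_cti(base_lead_time=10.0, base_cost_rate=5.0)
    cti_fsm2 = simulate_cti(base_lead_time=9.0, base_cost_rate=5.8)

    for name, sample in (("FSM 1", cti_fsm1), ("FSM 2", cti_fsm2)):
        kde = gaussian_kde(sample)                              # non-parametric CTI density
        print(f"{name}: E[CTI]~{sample.mean():.1f}, std~{sample.std():.1f}, "
              f"density at mean~{kde(sample.mean())[0]:.4f}")
    ```

    Choosing between alternatives would then weigh both the expected CTI and its spread, mirroring the selection rule described above.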

  5. The relationship between selected variables and customer loyalty within an optometric practice environment

    Directory of Open Access Journals (Sweden)

    T. Van Vuuren

    2012-12-01

    Full Text Available Purpose: The purpose of the research that informed this article was to examine the relationship between customer satisfaction, trust, supplier image, commitment and customer loyalty within an optometric practice environment. Problem investigated: Optometric businesses need to adapt their strategies to enhance loyalty, as customer satisfaction is not enough to ensure loyalty and customer retention. An understanding of the variables influencing loyalty could help businesses within the optometric service environment to retain their customers and become more profitable. Methodology: The methodological approach followed was exploratory and quantitative in nature. The sample consisted of 357 customers who visited the practice twice or more over the previous six years. A structured questionnaire, with a five-point Likert scale, was fielded to gather the data. The descriptive and multiple regression analysis approach was used to analyse the results. Collinearity statistics and Pearson's correlation coefficient were also calculated to determine which independent variable had the largest influence on customer loyalty. Findings and implications: The main finding is that customer satisfaction had the highest correlation with customer loyalty. The other independent variables, however, also appear to significantly influence customer loyalty within an optometric practice environment. The implication is that optometric practices need to focus on customer satisfaction, trust, supplier image and commitment when addressing the improvement of customer loyalty. Originality and value of the research: The article contributes to the improvement of customer loyalty within a service business environment that could assist in facilitating larger market share, higher customer retention and greater profitability for the business over the long term.

  6. Select injury-related variables are affected by stride length and foot strike style during running.

    Science.gov (United States)

    Boyer, Elizabeth R; Derrick, Timothy R

    2015-09-01

    Some frontal plane and transverse plane variables have been associated with running injury, but it is not known if they differ with foot strike style or as stride length is shortened. To identify if step width, iliotibial band strain and strain rate, positive and negative free moment, pelvic drop, hip adduction, knee internal rotation, and rearfoot eversion differ between habitual rearfoot and habitual mid-/forefoot strikers when running with both a rearfoot strike (RFS) and a mid-/forefoot strike (FFS) at 3 stride lengths. Controlled laboratory study. A total of 42 healthy runners (21 habitual rearfoot, 21 habitual mid-/forefoot) ran overground at 3.35 m/s with both a RFS and a FFS at their preferred stride lengths and 5% and 10% shorter. Variables did not differ between habitual groups. Step width was 1.5 cm narrower for FFS, widening to 0.8 cm as stride length shortened. Iliotibial band strain and strain rate did not differ between foot strikes but decreased as stride length shortened (0.3% and 1.8%/s, respectively). Pelvic drop was reduced 0.7° for FFS compared with RFS, and both pelvic drop and hip adduction decreased as stride length shortened (0.8° and 1.5°, respectively). Peak knee internal rotation was not affected by foot strike or stride length. Peak rearfoot eversion was not different between foot strikes but decreased 0.6° as stride length shortened. Peak positive free moment (normalized to body weight [BW] and height [h]) was not affected by foot strike or stride length. Peak negative free moment was -0.0038 BW·m/h greater for FFS and decreased -0.0004 BW·m/h as stride length shortened. The small decreases in most variables as stride length shortened were likely associated with the concomitant wider step width. RFS had slightly greater pelvic drop, while FFS had slightly narrower step width and greater negative free moment. Shortening one's stride length may decrease or at least not increase propensity for running injuries based on the variables

  7. Effect of Integrated Yoga Module on Selected Psychological Variables among Women with Anxiety Problem.

    Science.gov (United States)

    Parthasarathy, S; Jaiganesh, K; Duraisamy

    2014-01-01

    The implementation of yogic practices has proven benefits in both organic and psychological diseases. Forty-five women with anxiety selected by a random sampling method were divided into three groups. Experimental group I was subjected to asanas, relaxation and pranayama while Experimental group II was subjected to an integrated yoga module. The control group did not receive any intervention. Anxiety was measured by Taylor's Manifest Anxiety Scale before and after treatment. Frustration was measured through Reaction to Frustration Scale. All data were spread in an Excel sheet to be analysed with SPSS 16 software using analysis of covariance (ANCOVA). Selected yoga and asanas decreased anxiety and frustration scores but treatment with an integrated yoga module resulted in significant reduction of anxiety and frustration. To conclude, the practice of asanas and yoga decreased anxiety in women, and yoga as an integrated module significantly improved anxiety scores in young women with proven anxiety without any ill effects.

  8. Induction and selection of superior genetic variables of oil seed rape (brassica napus L.)

    International Nuclear Information System (INIS)

    Shah, S.S.; Ali, I.; Rehman, K.

    1990-01-01

    Dry and uniform seeds of two rape seed varieties, Ganyou-5 and Tower, were subjected to different doses of gamma rays. Genetic variation in yield and yield components generated in M1 was studied in M2 and 30 useful variants were isolated from a large mutagenized population. The selected mutants were progeny tested for stability of the characters in M3. Only five out of 30 progenies were identified to be uniform and stable. Further selection was made in the segregating M3 progenies. Results on some of the promising mutants are reported. The effect of irradiation treatment was highly pronounced on pod length, seeds per pod and 1000-seed weight. The genetic changes thus induced would help to evolve high yielding versions of different rape seed varieties under local environmental conditions. (author)

  9. An Application of Supervised Learning Methods to Search for Variable Stars in a Selected Field of the VVV Survey

    Science.gov (United States)

    Rodríguez-Feliciano, B.; García-Varela, A.; Pérez-Ortiz, M. F.; Sabogal, B. E.; Minniti, D.

    2017-07-01

    We characterize properties of time series of variable stars in the B278 field of the VVV survey, using robust statistics. Using random forest and support vector machine classifiers, we propose 47 RR Lyrae candidates and 12 W Ursae Majoris eclipsing binary candidates.
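
    The classification step can be sketched with scikit-learn: robust summary statistics of each light curve serve as features for a random forest and a support vector machine. The light curves and labels below are synthetic placeholders, not VVV B278 data, and the feature set is an assumption for illustration:

    ```python
    # Sketch: robust light-curve statistics as features for RF and SVM classifiers.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(11)

    def robust_features(mag):
        med = np.median(mag)
        mad = np.median(np.abs(mag - med))                  # median absolute deviation
        iqr = np.subtract(*np.percentile(mag, [75, 25]))    # interquartile range
        return [med, mad, iqr, np.ptp(mag)]

    # Toy light curves: "variable" stars get a sinusoidal signal, others only noise.
    X, y = [], []
    for label in (0, 1):
        for _ in range(200):
            t = np.sort(rng.uniform(0, 100, 80))
            mag = 15 + 0.02 * rng.standard_normal(80)
            if label:
                mag += 0.3 * np.sin(2 * np.pi * t / rng.uniform(0.3, 10.0))
            X.append(robust_features(mag))
            y.append(label)
    X, y = np.array(X), np.array(y)

    for clf in (RandomForestClassifier(n_estimators=200, random_state=0), SVC(kernel="rbf")):
        score = cross_val_score(clf, X, y, cv=5).mean()
        print(type(clf).__name__, f"CV accuracy ~ {score:.2f}")
    ```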

  10. Travelling green : Variables influencing students’ intention to select a green hotel

    OpenAIRE

    Lindqvist, Julia; Andersson, Mikaela

    2015-01-01

    Problematization: Tourism has a major impact on the environment. However, there is a conflict of interest making it difficult for the hotel business to decrease this impact. On the one hand, there is a pressure for environmentally friendly behaviour from society. On the other hand, the customers want to be pampered during their hotel stay. This makes it necessary to further investigate what influences customers’ intention to select a green hotel. Therefore this thesis examines students’ inten...

  11. Intraindividual Variability in Basic Reaction Time Predicts Middle-Aged and Older Pilots’ Flight Simulator Performance

    Science.gov (United States)

    2013-01-01

    Objectives. Intraindividual variability (IIV) is negatively associated with cognitive test performance and is positively associated with age and some neurological disorders. We aimed to extend these findings to a real-world task, flight simulator performance. We hypothesized that IIV predicts poorer initial flight performance and increased rate of decline in performance among middle-aged and older pilots. Method. Two-hundred and thirty-six pilots (40–69 years) completed annual assessments comprising a cognitive battery and two 75-min simulated flights in a flight simulator. Basic and complex IIV composite variables were created from measures of basic reaction time and shifting and divided attention tasks. Flight simulator performance was characterized by an overall summary score and scores on communication, emergencies, approach, and traffic avoidance components. Results. Although basic IIV did not predict rate of decline in flight performance, it had a negative association with initial performance for most flight measures. After taking into account processing speed, basic IIV explained an additional 8%–12% of the negative age effect on initial flight performance. Discussion. IIV plays an important role in real-world tasks and is another aspect of cognition that underlies age-related differences in cognitive performance. PMID:23052365

  12. Intraindividual variability in basic reaction time predicts middle-aged and older pilots' flight simulator performance.

    Science.gov (United States)

    Kennedy, Quinn; Taylor, Joy; Heraldez, Daniel; Noda, Art; Lazzeroni, Laura C; Yesavage, Jerome

    2013-07-01

    Intraindividual variability (IIV) is negatively associated with cognitive test performance and is positively associated with age and some neurological disorders. We aimed to extend these findings to a real-world task, flight simulator performance. We hypothesized that IIV predicts poorer initial flight performance and increased rate of decline in performance among middle-aged and older pilots. Two-hundred and thirty-six pilots (40-69 years) completed annual assessments comprising a cognitive battery and two 75-min simulated flights in a flight simulator. Basic and complex IIV composite variables were created from measures of basic reaction time and shifting and divided attention tasks. Flight simulator performance was characterized by an overall summary score and scores on communication, emergencies, approach, and traffic avoidance components. Although basic IIV did not predict rate of decline in flight performance, it had a negative association with initial performance for most flight measures. After taking into account processing speed, basic IIV explained an additional 8%-12% of the negative age effect on initial flight performance. IIV plays an important role in real-world tasks and is another aspect of cognition that underlies age-related differences in cognitive performance.

  13. Stress, Time Pressure, Strategy Selection and Math Anxiety in Mathematics: A Review of the Literature.

    Science.gov (United States)

    Caviola, Sara; Carey, Emma; Mammarella, Irene C; Szucs, Denes

    2017-01-01

    We review how stress induction, time pressure manipulations and math anxiety can interfere with or modulate selection of problem-solving strategies (henceforth "strategy selection") in arithmetical tasks. Nineteen relevant articles were identified, which contain references to strategy selection and time limit (or time manipulations), with some also discussing emotional aspects in mathematical outcomes. Few of these take cognitive processes such as working memory or executive functions into consideration. We conclude that due to the sparsity of available literature our questions can only be partially answered and currently there is not much evidence of clear associations. We identify major gaps in knowledge and raise a series of open questions to guide further research.

  14. Constructing the reduced dynamical models of interannual climate variability from spatial-distributed time series

    Science.gov (United States)

    Mukhin, Dmitry; Gavrilov, Andrey; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    We suggest a method for the empirical forecast of climate dynamics based on the reconstruction of reduced dynamical models in the form of random dynamical systems [1,2] derived from observational time series. The construction of a proper embedding - the set of variables determining the phase space the model works in - is no doubt the most important step in such modelling, but this task is non-trivial due to the huge dimension of time series of typical climatic fields. An appropriate expansion of the observational time series is therefore needed, yielding a number of principal components that serve as phase variables and are efficient for the construction of a low-dimensional evolution operator. We emphasize two main features the reduced models should have for capturing the main dynamical properties of the system: (i) taking into account time-lagged teleconnections in the atmosphere-ocean system, and (ii) reflecting the nonlinear nature of these teleconnections. In accordance with these principles, in this report we present a methodology that combines a new way of constructing an embedding by spatio-temporal data expansion with nonlinear model construction on the basis of artificial neural networks. The methodology is applied to NCEP/NCAR reanalysis data, including fields of sea level pressure, geopotential height, and wind speed covering the Northern Hemisphere. Its efficiency for the interannual forecast of various climate phenomena, including ENSO, PDO, NAO and strong blocking event conditions over the mid-latitudes, is demonstrated. Also, we investigate the ability of the models to reproduce and predict the evolution of qualitative features of the dynamics, such as spectral peaks, critical transitions and statistics of extremes. This research was supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS) [1] Y. I. Molkov, E. M. Loskutov, D. N. Mukhin, and A. M. Feigin, "Random
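
    The two stages described above (dimensionality reduction of a gridded field into phase variables, then a nonlinear map fitted to their evolution) can be sketched on a synthetic field. Reanalysis data and the authors' stochastic model form are not reproduced; a PCA decomposition and a small MLP stand in for their spatio-temporal expansion and neural-network evolution operator:

    ```python
    # Sketch: EOF/PCA-style reduction of a spatio-temporal field followed by a
    # neural-network one-step predictor of the leading principal components.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2024)
    n_t, n_grid = 600, 500
    modes = rng.standard_normal((3, n_grid))                 # three spatial patterns
    t = np.arange(n_t)
    amps = np.column_stack([np.sin(2 * np.pi * t / 60),
                            np.cos(2 * np.pi * t / 90),
                            np.sin(2 * np.pi * t / 37)])
    field = amps @ modes + 0.3 * rng.standard_normal((n_t, n_grid))

    pcs = PCA(n_components=3).fit_transform(field)           # phase variables of the reduced model

    lag = 2                                                   # time-lagged embedding
    X = np.hstack([pcs[i:n_t - lag + i] for i in range(lag)])
    Y = pcs[lag:]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, Y)
    print("one-step-ahead R^2:", round(model.score(X, Y), 3))
    ```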

  15. The effect of aquatic plyometric training with and without resistance on selected physical fitness variables among volleyball players

    Directory of Open Access Journals (Sweden)

    K. KAMALAKKANNAN

    2011-06-01

    Full Text Available The purpose of this study is to analyze the effect of aquatic plyometric training with and without the use of weights on selected physical fitness variables among volleyball players. To achieve the purpose of this study, 36 physically active undergraduate volleyball players between 18 and 20 years of age volunteered as participants. The participants were randomly categorized into three groups of 12 each: a control group (CG), an aquatic plyometric training with weight group (APTWG), and an aquatic plyometric training without weight group (APTWOG). The subjects of the control group were not exposed to any training. Both experimental groups underwent their respective experimental treatment for 12 weeks, 3 days per week and a single session on each day. Speed, endurance, and explosive power were measured as the dependent variables for this study. 36 days of experimental treatment was conducted for all the groups and pre and post data were collected. The collected data were analyzed using an analysis of covariance (ANCOVA), followed by a Scheffé's post hoc test. The results revealed significant differences between groups on all the selected dependent variables. This study demonstrated that aquatic plyometric training can be one effective means for improving speed, endurance, and explosive power in volleyball players.

  16. Fuzzy central tendency measure for time series variability analysis with application to fatigue electromyography signals.

    Science.gov (United States)

    Xie, Hong-Bo; Dokos, Socrates

    2013-01-01

    A new method, namely fuzzy central tendency measure (fCTM) analysis, that could enable measurement of the variability of a time series, is presented in this study. Tests on simulated data sets show that fCTM is superior to the conventional central tendency measure (CTM) in several respects, including improved relative consistency and robustness to noise. The proposed fCTM method was applied to electromyograph (EMG) signals recorded during sustained isometric contraction for tracking local muscle fatigue. The results showed that the fCTM increased significantly during the development of muscle fatigue, and it was more sensitive to the fatigue phenomenon than mean frequency (MNF), the most commonly-used muscle fatigue indicator.
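
    The conventional CTM counts the fraction of points of the second-order difference plot that fall inside a circle of a chosen radius; the fuzzy variant replaces that hard threshold with a smooth membership function. The sketch below uses a Gaussian-like membership purely as an assumption (the exact membership function of the paper is not reproduced), applied to toy signals of low and high variability:

    ```python
    # Sketch of the central tendency measure (CTM) and a fuzzy variant.
    import numpy as np

    def ctm(x, radius):
        d = np.diff(np.asarray(x, dtype=float))
        dist = np.hypot(d[:-1], d[1:])               # radial distance of (d_i, d_{i+1}) points
        return np.mean(dist < radius)                # fraction inside the circle of given radius

    def fuzzy_ctm(x, radius):
        d = np.diff(np.asarray(x, dtype=float))
        dist = np.hypot(d[:-1], d[1:])
        membership = np.exp(-(dist / radius) ** 2)   # assumed smooth membership, 1 at the centre
        return membership.mean()

    rng = np.random.default_rng(5)
    low_var = np.sin(np.linspace(0, 20, 1000)) + 0.05 * rng.standard_normal(1000)
    high_var = np.sin(np.linspace(0, 20, 1000)) + 0.50 * rng.standard_normal(1000)
    for name, sig in (("low variability", low_var), ("high variability", high_var)):
        print(name, round(ctm(sig, 0.1), 3), round(fuzzy_ctm(sig, 0.1), 3))
    ```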

  17. Variable speed wind turbine control by discrete-time sliding mode approach.

    Science.gov (United States)

    Torchani, Borhen; Sellami, Anis; Garcia, Germain

    2016-05-01

    The aim of this paper is to propose a new design of variable speed wind turbine control based on a discrete-time sliding mode approach. The methodology is designed for linear saturated systems, with the saturation constraint imposed on the input vector. To this end, the backstepping design procedure is followed to construct a suitable sliding manifold that guarantees the attainment of a stabilization control objective. The mechanisms are investigated in terms of commonly adopted assumptions to deal with the damping, shaft stiffness, and inertia effects of the gear. The objectives are to synthesize robust controllers that maximize the energy extracted from the wind while reducing mechanical loads, with rotor speed tracking achieved through the electromagnetic torque. Simulation results of the proposed scheme are presented. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
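
    A generic illustration of discrete-time sliding mode control with a saturated input is sketched below; it uses Gao's reaching law on a simple discretised double integrator, not the paper's wind-turbine model, controller design, or gains:

    ```python
    # Sketch: discrete-time sliding mode control with input saturation,
    # using the reaching law s_{k+1} = (1 - q*T)*s_k - eps*T*sign(s_k).
    import numpy as np

    T = 0.01                                     # sampling period
    A = np.array([[1.0, T], [0.0, 1.0]])         # discretised double integrator
    B = np.array([[0.5 * T**2], [T]])
    C = np.array([[1.0, 1.0]])                   # sliding surface s = C x
    q, eps, u_max = 5.0, 0.5, 2.0                # reaching-law gains and saturation level

    x = np.array([[1.0], [0.0]])                 # initial state
    for k in range(800):
        s = (C @ x).item()
        # Control solving C(Ax + Bu) = (1 - qT)s - eps*T*sign(s), then hard saturation.
        u = (((1 - q * T) * s - eps * T * np.sign(s) - (C @ A @ x)) / (C @ B)).item()
        u = float(np.clip(u, -u_max, u_max))
        x = A @ x + B * u

    print("final state:", x.ravel(), " final |s|:", abs((C @ x).item()))
    ```

    The saturated input slows the reaching phase but, for this toy system, the state still settles into a quasi-sliding band around s = 0.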

  18. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
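
    The importance-based subset selection can be sketched with scikit-learn; the data below are synthetic placeholders (not MODIS composites), and the comparison of full versus top-half feature sets simply mirrors the idea described above:

    ```python
    # Sketch: rank time-series features with random forest importance, then compare
    # classification accuracy using all features versus the top-ranked half.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    n_samples, n_features = 500, 36                 # e.g. 36 composites across a season
    X = rng.standard_normal((n_samples, n_features))
    # Class depends on only a few dates, mimicking phenologically informative periods.
    y = (X[:, 5] + X[:, 12] - X[:, 20] + 0.5 * rng.standard_normal(n_samples) > 0).astype(int)

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    top_half = order[: n_features // 2]

    full_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    half_acc = cross_val_score(RandomForestClassifier(random_state=0), X[:, top_half], y, cv=5).mean()
    print(f"all {n_features} features: {full_acc:.2f}   top {len(top_half)} features: {half_acc:.2f}")
    ```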

  19. Similar star formation rate and metallicity variability time-scales drive the fundamental metallicity relation

    Science.gov (United States)

    Torrey, Paul; Vogelsberger, Mark; Hernquist, Lars; McKinnon, Ryan; Marinacci, Federico; Simcoe, Robert A.; Springel, Volker; Pillepich, Annalisa; Naiman, Jill; Pakmor, Rüdiger; Weinberger, Rainer; Nelson, Dylan; Genel, Shy

    2018-06-01

    The fundamental metallicity relation (FMR) is a postulated correlation between galaxy stellar mass, star formation rate (SFR), and gas-phase metallicity. At its core, this relation posits that offsets from the mass-metallicity relation (MZR) at a fixed stellar mass are correlated with galactic SFR. In this Letter, we use hydrodynamical simulations to quantify the time-scales over which populations of galaxies oscillate about the average SFR and metallicity values at fixed stellar mass. We find that Illustris and IllustrisTNG predict that galaxy offsets from the star formation main sequence and MZR oscillate over similar time-scales, are often anticorrelated in their evolution, evolve with the halo dynamical time, and produce a pronounced FMR. Our models indicate that galaxies oscillate about equilibrium SFR and metallicity values - set by the galaxy's stellar mass - and that SFR and metallicity offsets evolve in an anticorrelated fashion. This anticorrelated variability of the metallicity and SFR offsets drives the existence of the FMR in our models. In contrast to Illustris and IllustrisTNG, we speculate that the SFR and metallicity evolution tracks may become decoupled in galaxy formation models dominated by feedback-driven globally bursty SFR histories, which could weaken the FMR residual correlation strength. This opens the possibility of discriminating between bursty and non-bursty feedback models based on the strength and persistence of the FMR - especially at high redshift.

  20. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the theoretical formula-predicted value.
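
    An analogue of the reliable computation time can be sketched by iterating the same logistic-map trajectory at two precisions and recording when they separate. This is only an illustration under simplifying assumptions: it uses the fixed-parameter map (not the paper's variable-parameters version) and compares float32 against float64 rather than double precision against a higher-precision reference:

    ```python
    # Sketch: step count until float32 and float64 logistic-map trajectories diverge.
    import numpy as np

    def reliable_steps(x0, r=4.0, tol=1e-3, n_max=2000):
        x32, x64 = np.float32(x0), np.float64(x0)
        for n in range(1, n_max + 1):
            x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
            x64 = r * x64 * (1.0 - x64)
            if abs(float(x32) - float(x64)) > tol:
                return n
        return n_max

    rng = np.random.default_rng(9)
    samples = [reliable_steps(x0) for x0 in rng.uniform(0.1, 0.9, 200)]
    print("mean reliable step count:", np.mean(samples),
          " min:", min(samples), " max:", max(samples))
    ```

    As in the record above, the step count differs from one initial value to the next, while its mean stabilises as the number of samples grows.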