WorldWideScience

Sample records for sample autocovariance functions

  1. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    Science.gov (United States)

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
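
    Record 1 concerns nonparametric estimation of the autocovariance function. As background for this and several later records, a plain sample autocovariance estimator (a generic numpy sketch, not the authors' censored-data imputation method; the function name is mine) looks like:

```python
import numpy as np

def sample_autocovariance(x, max_lag, unbiased=False):
    """Sample autocovariance gamma(h) of a 1-D series for h = 0..max_lag.

    The biased (divide-by-n) estimator is the common default because the
    resulting sequence is positive semidefinite; unbiased=True divides
    by (n - h) instead, which matters for short series.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    gamma = np.empty(max_lag + 1)
    for h in range(max_lag + 1):
        denom = (n - h) if unbiased else n
        gamma[h] = np.dot(xc[: n - h], xc[h:]) / denom
    return gamma
```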

  2. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  3. Detection of seizures from small samples using nonlinear dynamic system theory.

    Science.gov (United States)

    Yaylali, I; Koçak, H; Jayakar, P

    1996-07-01

    The electroencephalogram (EEG), like many other biological phenomena, is quite likely governed by nonlinear dynamics. Certain characteristics of the underlying dynamics have recently been quantified by computing the correlation dimensions (D2) of EEG time series data. In this paper, D2 of the unbiased autocovariance function of the scalp EEG data was used to detect electrographic seizure activity. Digital EEG data were acquired at a sampling rate of 200 Hz per channel and organized in continuous frames (duration 2.56 s, 512 data points). To increase the reliability of D2 computations with short duration data, raw EEG data were initially simplified using unbiased autocovariance analysis to highlight the periodic activity that is present during seizures. The D2 computation was then performed from the unbiased autocovariance function of each channel using the Grassberger-Procaccia method with Theiler's box-assisted correlation algorithm. Even with short duration data, this preprocessing proved to be computationally robust and displayed no significant sensitivity to implementation details such as the choices of embedding dimension and box size. The system successfully identified various types of seizures in clinical studies.
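
    Record 3 estimates D2 with the Grassberger-Procaccia method. A bare-bones correlation-sum sketch (my illustration; it omits the unbiased-autocovariance preprocessing, the Theiler box-assisted neighbour search, and any embedding-dimension scan described in the paper) is:

```python
import numpy as np

def correlation_sum(series, m, delay, r):
    """Fraction of pairs of m-dimensional delay vectors closer than r.

    D2 is estimated from the slope of log C(r) versus log r over a
    scaling region. This brute-force version checks all pairs with the
    Chebyshev norm; box-assisted search only speeds this up.
    """
    x = np.asarray(series, dtype=float)
    n_vec = len(x) - (m - 1) * delay
    vecs = np.column_stack([x[i * delay : i * delay + n_vec] for i in range(m)])
    count = 0
    total = 0
    for i in range(n_vec):
        d = np.max(np.abs(vecs[i + 1 :] - vecs[i]), axis=1)  # Chebyshev norm
        count += int(np.sum(d < r))
        total += len(d)
    return count / total
```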

  4. A Generalized Autocovariance Least-Squares Method for Kalman Filter Tuning

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2008-01-01

    This paper discusses a method for estimating noise covariances from process data. In linear stochastic state-space representations the true noise covariances are generally unknown in practical applications. Using estimated covariances, a Kalman filter can be tuned in order to increase the accuracy of the state estimates. There is a linear relationship between the noise covariances and the autocovariance of the data. Therefore, the covariance estimation problem can be stated as a least-squares problem, which can be solved as a symmetric semidefinite least-squares problem. This problem is convex and can be solved efficiently by interior-point methods. A numerical algorithm for solving the symmetric semidefinite least-squares problem is able to handle systems with mutually correlated process noise and measurement noise.
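
    Records 2 and 4 reduce covariance estimation to a symmetric semidefinite least-squares problem solved by interior-point methods. As a rough stand-in for that solver, one can solve the unconstrained least-squares problem and project the symmetrized solution onto the positive semidefinite cone (a hypothetical sketch, not the authors' algorithm):

```python
import numpy as np

def psd_projected_lstsq(A, b, n):
    """Solve min ||A vec(X) - b|| for an n-by-n matrix X, then project
    X onto the symmetric positive-semidefinite cone: symmetrize, then
    zero out negative eigenvalues (the Frobenius-nearest PSD matrix)."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    X = x.reshape(n, n)
    X = 0.5 * (X + X.T)                       # symmetrize
    w, V = np.linalg.eigh(X)
    return (V * np.clip(w, 0.0, None)) @ V.T  # clip negative eigenvalues
```

Unlike an interior-point solve, this projection is not the constrained least-squares optimum, but it always returns a valid covariance matrix.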

  5. A new non-parametric stationarity test of time series in the time domain

    KAUST Repository

    Jin, Lei

    2014-11-07

    We propose a new double-order selection test for checking second-order stationarity of a time series. To develop the test, a sequence of systematic samples is defined via Walsh functions. Then the deviations of the autocovariances based on these systematic samples from the corresponding autocovariances of the whole time series are calculated and the uniform asymptotic joint normality of these deviations over different systematic samples is obtained. With a double-order selection scheme, our test statistic is constructed by combining the deviations at different lags in the systematic samples. The null asymptotic distribution of the statistic proposed is derived and the consistency of the test is shown under fixed and local alternatives. Simulation studies demonstrate well-behaved finite sample properties of the method proposed. Comparisons with some existing tests in terms of power are given both analytically and empirically. In addition, the method proposed is applied to check the stationarity assumption of a chemical process viscosity readings data set.
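
    The test in record 5 compares autocovariances of systematic subsamples against those of the full series. A drastically simplified version of that idea (halves instead of Walsh-function systematic samples, and no asymptotic null distribution; entirely my illustration) is:

```python
import numpy as np

def halfwise_autocov_deviation(x, max_lag):
    """Total deviation between the sample autocovariances of the two
    halves of a series and those of the full series. Large values hint
    at second-order nonstationarity; the paper's actual test builds
    Walsh-function subsamples and a formal asymptotic null."""
    x = np.asarray(x, dtype=float)

    def acov(y, h):
        yc = y - y.mean()
        return np.dot(yc[: len(y) - h], yc[h:]) / len(y)

    half = len(x) // 2
    dev = 0.0
    for h in range(max_lag + 1):
        g_full = acov(x, h)
        dev += abs(acov(x[:half], h) - g_full) + abs(acov(x[half:], h) - g_full)
    return dev
```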

  6. Optimal trading strategies—a time series approach

    Science.gov (United States)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
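
    Record 6 estimates auto-covariance matrices from small finite samples and then cleans them. Under second-order stationarity the matrix is Toeplitz in the sample autocovariances; a simple linear shrinkage toward the diagonal (a generic cleaning sketch, not necessarily the strategy the paper investigates) looks like:

```python
import numpy as np

def autocov_matrix(x, p, shrinkage=0.0):
    """Toeplitz matrix of sample autocovariances up to lag p-1,
    optionally shrunk toward its diagonal to tame small-sample noise in
    the off-diagonal entries (linear shrinkage; a simple stand-in for
    the cleaning strategies discussed in the paper)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    gamma = np.array([np.dot(xc[: n - h], xc[h:]) / n for h in range(p)])
    M = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            M[i, j] = gamma[abs(i - j)]  # stationarity: depends on |i-j|
    return (1.0 - shrinkage) * M + shrinkage * np.diag(np.diag(M))
```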

  7. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    Science.gov (United States)

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.

  8. Mars Sample Handling Functionality

    Science.gov (United States)

    Meyer, M. A.; Mattingly, R. L.

    2018-04-01

    The final leg of a Mars Sample Return campaign would be an entity that we have referred to as Mars Returned Sample Handling (MRSH). This talk will address our current view of the functional requirements on MRSH, focused on the Sample Receiving Facility (SRF).

  9. Timing Noise Analysis of NANOGrav Pulsars

    OpenAIRE

    Perrodin, Delphine; Jenet, Fredrick; Lommen, Andrea; Finn, Lee; Demorest, Paul; Ferdman, Robert; Gonzalez, Marjorie; Nice, David; Ransom, Scott; Stairs, Ingrid

    2013-01-01

    We analyze timing noise from five years of Arecibo and Green Bank observations of the seventeen millisecond pulsars of the North-American Nanohertz Observatory for Gravitational Waves (NANOGrav) pulsar timing array. The weighted autocovariance of the timing residuals was computed for each pulsar and compared against two possible models for the underlying noise process. The first model includes red noise and predicts the autocovariance to be a decaying exponential as a function of time lag. Th...

  10. A spatial model for a stream networks of Citarik River with the environmental variables: potential of hydrogen (PH) and temperature

    Science.gov (United States)

    Bachrudin, A.; Mohamed, N. B.; Supian, S.; Sukono; Hidayat, Y.

    2018-03-01

    Application of existing geostatistical theory to stream networks provides a number of interesting and challenging problems. Most statistical tools in traditional geostatistics, such as autocovariance functions, are based on a Euclidean distance, but this is not permissible for stream data, which involve stream distance. To overcome this, an autocovariance model based on stream distance was developed using a convolution kernel approach (moving average construction). Spatial models for stream networks are widely used for environmental monitoring of river networks. In a case study of a river in the province of West Java, the objective of this paper is to analyze the predictive capability of ordinary kriging for two environmental variables, potential of hydrogen (pH) and temperature. The empirical results show: (1) the best-fitting autocovariance function for temperature and potential of hydrogen (pH) of the Citarik River is linear, which also yields the smallest root mean squared prediction error (RMSPE); (2) the spatial correlation values between locations upstream and downstream of the Citarik River decrease with stream distance.

  11. Importance sampling the Rayleigh phase function

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall

    2011-01-01

    Rayleigh scattering is used frequently in Monte Carlo simulation of multiple scattering. The Rayleigh phase function is quite simple, and one might expect that it should be simple to importance sample it efficiently. However, there seems to be no one good way of sampling it in the literature. This paper provides the details of several different techniques for importance sampling the Rayleigh phase function, and it includes a comparison of their performance as well as hints toward efficient implementation.
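
    The Rayleigh phase function has density p(mu) = (3/8)(1 + mu^2) in mu = cos(theta). The simplest correct sampling baseline against which such techniques can be compared is rejection sampling (my sketch; the paper discusses more efficient alternatives such as inverse-CDF methods):

```python
import random

def sample_rayleigh_cos_theta(rng=random):
    """Draw cos(theta) from the Rayleigh phase function, whose density
    in mu = cos(theta) is p(mu) = (3/8) * (1 + mu^2) on [-1, 1].
    Rejection sampling with a uniform proposal accepts mu with
    probability (1 + mu^2) / 2; the overall acceptance rate is 2/3."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 1.0) < (1.0 + mu * mu) / 2.0:
            return mu
```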

  12. Statistical properties of the seasonal fractionally integrated ...

    African Journals Online (AJOL)

    We investigate the properties of this new model providing stationary conditions, some explicit form of the autocovariance function and the spectral density. We also establish the asymptotic behaviour of the spectral density function near the seasonal frequencies. Keywords: Seasonality; Spatial short memory; Seasonal long ...

  13. Effect of neural connectivity on autocovariance and cross covariance estimates

    Directory of Open Access Journals (Sweden)

    Stecker Mark M

    2007-01-01

    Full Text Available Abstract Background Measurements of auto and cross covariance functions are frequently used to investigate neural systems. In interpreting these data, it is commonly assumed that the largest contribution to the recordings comes from sources near the electrode. However, the potential recorded at an electrode represents the superimposition of the potentials generated by large numbers of active neural structures. This creates situations under which the measured auto and cross covariance functions are dominated by the activity in structures far from the electrode and in which the distance dependence of the cross-covariance function differs significantly from that describing the activity in the actual neural structures. Methods Direct application of electrostatics to calculate the theoretical auto and cross covariance functions that would be recorded from electrodes immersed in a large volume filled with active neural structures with specific statistical properties. Results It is demonstrated that the potentials recorded from a monopolar electrode surrounded by dipole sources in a uniform medium are predominantly due to activity in neural structures far from the electrode when neuronal correlations drop more slowly than 1/r³ or when the size of the neural system is much smaller than a known correlation distance. Recordings from quadrupolar sources are strongly dependent on distant neurons when correlations drop more slowly than 1/r or the size of the system is much smaller than the correlation distance. Differences between bipolar and monopolar recordings are discussed. It is also demonstrated that the cross covariance of the potentials recorded at two spatially separated electrodes declines as a power-law function of the distance between them even when the electrical activity from different neuronal structures is uncorrelated.
Conclusion When extracellular electrophysiologic recordings are made from systems containing large numbers of neural structures, it is

  14. Hepatic mitochondrial function analysis using needle liver biopsy samples.

    Directory of Open Access Journals (Sweden)

    Michael J J Chu

    Full Text Available BACKGROUNDS AND AIM: Current assessment of pre-operative liver function relies upon biochemical blood tests and histology, but these only indirectly measure liver function. Mitochondrial function (MF) analysis allows direct measurement of cellular metabolic function and may provide an additional index of hepatic health. Conventional MF analysis requires substantial tissue samples (>100 mg) obtained at open surgery. Here we report a method to assess MF using <3 mg of tissue obtained by a Tru-cut® biopsy needle, making it suitable for percutaneous application. METHODS: An 18G Bard® Max-core® biopsy instrument was used to collect samples. The optimal Tru-cut® sample weight, stability in ice-cold University of Wisconsin solution, reproducibility and protocol utility were initially evaluated in Wistar rat livers, then confirmed in human samples. MF was measured in saponin-permeabilized samples using high-resolution respirometry. RESULTS: The average mass of a single rat and human liver Tru-cut® biopsy was 5.60±0.30 and 5.16±0.15 mg, respectively (mean; standard error of the mean). Two milligrams of sample was found to be the lowest feasible mass for the MF assay. Tissue MF declined after 1 hour of cold storage. Six replicate measurements within rats and humans (n = 6 each) showed low coefficients of variation (<10%) in measurements of State-III respiration, electron transport chain (ETC) capacity and respiratory control ratio (RCR). Ischemic rat and human liver samples consistently showed lower State-III respiration, ETC capacity and RCR, compared to normal perfused liver samples. CONCLUSION: Consistent measurement of liver MF and detection of derangement in a disease state was successfully demonstrated using less than half the tissue from a single Tru-cut® biopsy. Using this technique, outpatient assessment of liver MF is now feasible, providing a new assay for the evaluation of hepatic function.

  15. Interpolation and sampling in spaces of analytic functions

    CERN Document Server

    Seip, Kristian

    2004-01-01

    The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

  16. On the parameter estimation of first order IMA model corrupted with ...

    African Journals Online (AJOL)

    In this paper, we showed how the autocovariance functions can be used to estimate the true parameters of IMA(1) models corrupted with white noise. We performed simulation studies to demonstrate our findings. The simulation studies showed that under the presence of errors in not more than 30% of total data points, our ...

  17. Determinação do estoque de segurança em um sistema de estoque de revisão periódica, com demanda correlacionada em série Safety stock determination with serially correlated demand in a periodic-review inventory system

    Directory of Open Access Journals (Sweden)

    John M. Charnes

    1997-08-01

    Full Text Available We consider a periodic-review inventory replenishment model with an order-up-to-R operating doctrine for the case of deterministic lead times and a covariance-stationary stochastic demand process. A method is derived for setting the inventory safety stock to achieve an exact desired stockout probability when the autocovariance function for Gaussian demand is known. Because the method does not require that parametric time-series models be fit to the data, it is easily implemented in practice. Moreover, the method is shown to be asymptotically valid when the autocovariance function of demand is estimated from historical data. The effects on the stockout rate of various levels of autocorrelated demand are demonstrated for situations in which autocorrelation in demand goes undetected or is ignored by the inventory manager. Similarly, the changes to the required level of safety stock are demonstrated for
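
    The key identity behind such methods is that the variance of lead-time demand is a weighted sum of the autocovariances: Var(D_1 + ... + D_L) = L*gamma(0) + 2*sum_{k=1}^{L-1} (L-k)*gamma(k). A minimal sketch of the resulting safety-stock rule (my illustration of this standard identity, not the authors' full procedure):

```python
import math

def safety_stock(gamma, lead_time, z):
    """Safety stock z * sigma_L for Gaussian demand with known
    autocovariance function, where sigma_L^2 is the variance of total
    demand over the lead time:
        Var(D_1 + ... + D_L) = L*gamma(0) + 2*sum_{k=1}^{L-1} (L-k)*gamma(k).
    `gamma` is a sequence with gamma[k] for lags 0..lead_time-1; z is
    the standard-normal quantile for the desired stockout probability."""
    var_L = lead_time * gamma[0]
    for k in range(1, lead_time):
        var_L += 2.0 * (lead_time - k) * gamma[k]
    return z * math.sqrt(var_L)
```

With positively autocorrelated demand the cross terms inflate sigma_L, so ignoring autocorrelation understates the required safety stock, which is exactly the effect the abstract describes.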

  18. Functions with disconnected spectrum sampling, interpolation, translates

    CERN Document Server

    Olevskii, Alexander M

    2016-01-01

    The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

  19. Supervised learning with restricted training sets: a generating functional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Heimel, J.A.F.; Coolen, A.C.C. [Department of Mathematics, King's College London, Strand, London (United Kingdom)

    2001-10-26

    We study the dynamics of supervised on-line learning of realizable tasks in feed-forward neural networks. We focus on the regime where the number of examples used for training is proportional to the number of input channels N. Using generating functional techniques from spin glass theory, we are able to average over the composition of the training set and transform the problem for N → ∞ to an effective single pattern system described completely by the student autocovariance, the student-teacher overlap and the student response function with exact closed equations. Our method applies to arbitrary learning rules, i.e., not necessarily of a gradient-descent type. The resulting exact macroscopic dynamical equations can be integrated without finite-size effects up to any degree of accuracy, but their main value is in providing an exact and simple starting point for analytical approximation schemes. Finally, we show how, in the region of absent anomalous response and using the hypothesis that (as in detailed balance systems) the short-time part of the various operators can be transformed away, one can describe the stationary state of the network successfully by a set of coupled equations involving only four scalar order parameters. (author)

  20. Improvement of vertical velocity statistics measured by a Doppler lidar through comparison with sonic anemometer observations

    Science.gov (United States)

    Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.; Chilson, Phillip B.; Wharton, Sonia

    2016-12-01

    Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is the vertical velocity variance (w'²) from zenith stares. However, the competing effects of signal noise and resolution volume limitations, which respectively increase and decrease w'², reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated, and its skill in correcting for volume-averaging effects in the calculation of w'² is also assessed. Additionally, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w'². Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w'². After the autocovariance technique is applied, values of w'² from the Doppler lidars are generally in close agreement (R² ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
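
    The autocovariance technique rests on the fact that uncorrelated instrument noise inflates only the lag-0 autocovariance, so extrapolating from nonzero lags back to lag 0 estimates the noise-free variance. A minimal sketch using a straight-line fit over the first few lags (the number of lags is exactly the tuning choice the paper refines; the function name and defaults are mine):

```python
import numpy as np

def noise_corrected_variance(w, n_lags=3):
    """Estimate the variance of w with white instrument noise removed.

    White noise contributes only to the lag-0 autocovariance, so a
    straight line fitted to the autocovariance at lags 1..n_lags and
    evaluated at lag 0 estimates the noise-free signal variance."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    wc = w - w.mean()
    lags = np.arange(1, n_lags + 1)
    acov = np.array([np.dot(wc[: n - h], wc[h:]) / n for h in lags])
    slope, intercept = np.polyfit(lags, acov, 1)  # linear extrapolation
    return intercept  # autocovariance extrapolated back to lag 0
```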

  1. The quasar luminosity function from a variability-selected sample

    Science.gov (United States)

    Hawkins, M. R. S.; Veron, P.

    1993-01-01

    A sample of quasars is selected from a 10-yr sequence of 30 UK Schmidt plates. Luminosity functions are derived in several redshift intervals, which in each case show a featureless power-law rise towards low luminosities. There is no sign of the 'break' found in the recent UVX sample of Boyle et al. It is suggested that reasons for the disagreement are connected with biases in the selection of the UVX sample. The question of the nature of quasar evolution appears to be still unresolved.

  2. Costationarity of Locally Stationary Time Series Using costat

    OpenAIRE

    Cardinali, Alessandro; Nason, Guy P.

    2013-01-01

    This article describes the R package costat. This package enables a user to (i) perform a test for time series stationarity, (ii) compute and plot time-localized autocovariances, and (iii) determine and explore any costationary relationship between two locally stationary time series. Two locally stationary time series are said to be costationary if there exist two time-varying combination functions such that the linear combination of the two series with the functions produces another time...

  3. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean squared error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
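
    The well function is W(u) = E1(u), the integral of exp(-t)/t from u to infinity. Substituting t = u + s gives E1(u) = exp(-u) * E[1/(u + S)] with S ~ Exp(1), which a one-dimensional stratified (Latin-hypercube-style) sample estimates directly. This is my sketch of the general idea, not the paper's specific LHS/OA/OA-LH schemes:

```python
import math
import random

def well_function(u, n=10000, rng=random):
    """Estimate E1(u) = exp(-u) * E[1/(u + S)], S ~ Exp(1), using one
    stratified uniform draw per stratum (a 1-D Latin hypercube)."""
    total = 0.0
    for i in range(n):
        v = (i + rng.random()) / n      # stratified uniform in (0, 1)
        s = -math.log(1.0 - v)          # inverse-CDF Exp(1) draw
        total += 1.0 / (u + s)
    return math.exp(-u) * total / n
```

Stratification drives the error down much faster than plain Monte Carlo here because the integrand 1/(u + s) is smooth and monotone in s.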

  4. Adaptive Importance Sampling with a Rapidly Varying Importance Function

    International Nuclear Information System (INIS)

    Booth, Thomas E.

    2000-01-01

    It is well known that zero-variance Monte Carlo solutions are possible if an exact importance function is available to bias the random walks. Monte Carlo can be used to estimate the importance function. This estimated importance function then can be used to bias a subsequent Monte Carlo calculation that estimates an even better importance function; this iterative process is called adaptive importance sampling. To obtain the importance function, one can expand the importance function in a basis such as the Legendre polynomials and make Monte Carlo estimates of the expansion coefficients. For simple problems, Legendre expansions of order 10 to 15 are able to represent the importance function well enough to reduce the error geometrically by ten orders of magnitude or more. More complicated problems are addressed in which the importance function cannot be represented well by Legendre expansions of order 10 to 15. In particular, a problem with a cross-section notch and a problem with a discontinuous cross section are considered.
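
    The iteration described above, estimate an importance function and then use it to bias the next calculation, can be sketched in one dimension. Here the estimated importance function is a one-parameter power density fitted from a first crude run (a toy illustration of the idea, far simpler than the order-10-to-15 Legendre expansions in the paper; all names are mine):

```python
import random

def adaptive_importance_sample(f, n1=2000, n2=2000, rng=random):
    """Two-stage sketch of adaptive importance sampling on [0, 1],
    assuming f >= 0 and not identically zero.

    Stage 1 uses plain uniform Monte Carlo to *estimate* an importance
    function from the family p_k(x) = (k+1) * x^k, by matching the mean
    of x under the density proportional to f. Stage 2 samples from p_k
    via its inverse CDF and reweights by f(x) / p_k(x), which stays
    unbiased for the integral of f whatever k the first stage found."""
    xs = [rng.random() for _ in range(n1)]
    ws = [f(x) for x in xs]
    m = sum(w * x for w, x in zip(ws, xs)) / sum(ws)  # mean of x under f
    k = max(0.0, (2.0 * m - 1.0) / (1.0 - m))         # solves (k+1)/(k+2) = m
    total = 0.0
    for _ in range(n2):
        u = 1.0 - rng.random()                        # u in (0, 1]
        x = u ** (1.0 / (k + 1.0))                    # inverse CDF of p_k
        total += f(x) / ((k + 1.0) * x ** k)          # importance weight
    return total / n2
```

For f(x) = x² the fitted density approaches the zero-variance choice p(x) = 3x², so the stage-2 weights become nearly constant, mirroring the error reduction the abstract describes.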

  5. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

    Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction

  6. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  7. Stationary and related stochastic processes sample function properties and their applications

    CERN Document Server

    Cramér, Harald

    2004-01-01

    This graduate-level text offers a comprehensive account of the general theory of stationary processes, with special emphasis on the properties of sample functions. Assuming a familiarity with the basic features of modern probability theory, the text develops the foundations of the general theory of stochastic processes, examines processes with a continuous-time parameter, and applies the general theory to procedures key to the study of stationary processes. Additional topics include analytic properties of the sample functions and the problem of time distribution of the intersections between a

  8. Unraveling hidden order in the dynamics of developed and emerging markets.

    Science.gov (United States)

    Berman, Yonatan; Shapira, Yoash; Ben-Jacob, Eshel

    2014-01-01

    The characterization of asset price returns is an important subject in modern finance. Traditionally, the dynamics of stock returns are assumed to lack any temporal order. Here we present an analysis of the autocovariance of stock market indices and unravel temporal order in several major stock markets. We also demonstrate a fundamental difference between developed and emerging markets in the past decade - emerging markets are marked by positive order in contrast to developed markets whose dynamics are marked by weakly negative order. In addition, the reaction to financial crises was found to be reversed among developed and emerging markets, presenting large positive/negative autocovariance spikes following the onset of these crises. Notably, the Chinese market shows neutral or no order while being regarded as an emerging market. These findings show that despite the coupling between international markets and global trading, major differences exist between different markets, and demonstrate that the autocovariance of markets is correlated with their stability, as well as with their state of development.
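
    The quantity underlying this analysis, the sample autocovariance of a return series, can be computed in a few lines. This is a hedged NumPy sketch using simulated i.i.d. returns, not the authors' market data:

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Sample autocovariance gamma(k), k = 0..max_lag, with the usual
    biased 1/n normalization."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    return np.array([z[:n - k] @ z[k:] / n for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
returns = rng.standard_normal(5000)   # i.i.d. stand-in for index returns
gamma = sample_autocovariance(returns, 5)
# gamma[0] approximates the variance; gamma[k] for k > 0 hovers near zero
# for uncorrelated returns, unlike the ordered markets discussed above.
```

    Persistent positive (or negative) values at small lags are the kind of "order" the abstract contrasts between emerging and developed markets.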

  9. Differences in Neuropsychological Functioning Between Homicidal and Nonviolent Schizophrenia Samples.

    Science.gov (United States)

    Stratton, John; Cobia, Derin J; Reilly, James; Brook, Michael; Hanlon, Robert E

    2018-02-07

    Few studies have compared performance on neurocognitive measures between violent and nonviolent schizophrenia samples. A better understanding of neurocognitive dysfunction in violent individuals with schizophrenia could increase the efficacy of violence reduction strategies and aid in risk assessment and adjudication processes. This study aimed to compare neuropsychological performance between 25 homicide offenders with schizophrenia and 25 nonviolent schizophrenia controls. The groups were matched for age, race, sex, and handedness. Independent t-tests and Mann-Whitney U-tests were used to compare the schizophrenia groups' performance on measures of cognition, including composite scores assessing domain level functioning and individual neuropsychological tests. Results indicated the violent schizophrenia group performed worse on measures of memory and executive functioning, and the Intellectual Functioning composite score, when compared to the nonviolent schizophrenia sample. These findings replicate previous research documenting neuropsychological deficits specific to violent individuals with schizophrenia and support research implicating fronto-limbic dysfunction among violent offenders with schizophrenia. © 2018 American Academy of Forensic Sciences.

  10. Effective dielectric functions of samples obtained by evaporation of alkali halides

    International Nuclear Information System (INIS)

    Sturm, J.; Grosse, P.; Theiss, W.

    1991-01-01

    This paper investigates the dielectric properties of inhomogeneous samples consisting of small alkali halide particles (NaCl, KBr) on gold-coated substrates. Our reflection measurements in the far infrared can be simulated as a thin layer of the powder with an effective dielectric function on a perfectly reflecting substrate. Scanning electron micrographs provide useful information about sample topology. Several mixing formulas (e.g., the Maxwell-Garnett, Bruggeman, and Looyenga formulas) lead to effective dielectric functions that neglect the individual arrangement of the particles. The essence of our work is that, in contrast, the general ansatz of the Bergman spectral representation has to be employed in order to take topology effects on the dielectric function into account, based on the so-called spectral density g, which is adjustable to the specific situation. (orig.)

  11. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of a density function. Repeating this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
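
    The dependence of metamodel accuracy on sampling can be illustrated with radial basis functions. A minimal sketch using SciPy's RBFInterpolator on an invented 1-D test function; the paper's sequential criterion (metamodel extrema and density-function minima) is not reproduced here, only the effect of adding sampling points:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # stand-in for a computationally expensive simulation program
    return np.sin(3.0 * x[:, 0])

grid = np.linspace(0.0, 2.0, 201)[:, None]
truth = expensive_simulation(grid)

errors = {}
for n in (5, 9, 17):                           # growing sampling designs
    pts = np.linspace(0.0, 2.0, n)[:, None]
    model = RBFInterpolator(pts, expensive_simulation(pts))
    errors[n] = float(np.max(np.abs(model(grid) - truth)))
# the maximum metamodel error shrinks as sampling points are added
```

    A sequential method would choose each new point adaptively instead of on a fixed grid, concentrating effort where the metamodel is least trustworthy.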

  12. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    Science.gov (United States)

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling size in order to take ITV_WI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
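
    The within- vs. between-individual variance split reported here is a one-way variance decomposition. A sketch on simulated leaf-trait data (all numbers invented, not the Quercus ilex measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n_ind, n_leaf = 10, 5                       # 10 trees, 5 leaves per tree
ind_means = rng.normal(20.0, 4.0, n_ind)    # between-individual spread
# leaves vary around their tree's mean: within-individual (ITV_WI) spread
trait = ind_means[:, None] + rng.normal(0.0, 2.0, (n_ind, n_leaf))

grand = trait.mean()
ss_total = ((trait - grand) ** 2).sum()
ss_within = ((trait - trait.mean(axis=1, keepdims=True)) ** 2).sum()
ss_between = n_leaf * ((trait.mean(axis=1) - grand) ** 2).sum()
share_within = ss_within / ss_total   # analogue of the 43.4% reported for SLA
```

    The identity SS_total = SS_within + SS_between is what lets a sampling design be tuned to whichever component dominates for a given trait.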

  13. A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems

    NARCIS (Netherlands)

    Bobiti, R.V.; Lazar, M.

    2016-01-01

    The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the

  14. Rational Arithmetic Mathematica Functions to Evaluate the Two-Sided One Sample K-S Cumulative Sampling Distribution

    Directory of Open Access Journals (Sweden)

    J. Randall Brown

    2007-06-01

    One of the most widely used goodness-of-fit tests is the two-sided one-sample Kolmogorov-Smirnov (K-S) test, which has been implemented by many computer statistical software packages. To calculate a two-sided p value (evaluate the cumulative sampling distribution), these packages use various methods including recursion formulae, limiting distributions, and approximations of unknown accuracy developed over thirty years ago. Based on an extensive literature search for the two-sided one-sample K-S test, this paper identifies an exact formula for sample sizes up to 31, six recursion formulae, and one matrix formula that can be used to calculate a p value. To ensure accurate calculation by avoiding catastrophic cancellation and eliminating rounding error, each of these formulae is implemented in rational arithmetic. For the six recursion formulae and the matrix formula, computational experience for sample sizes up to 500 shows that computational times are increasing functions of both the sample size and the number of digits in the numerator and denominator integers of the rational number test statistic. The computational times of the seven formulae vary immensely, but the Durbin recursion formula is almost always the fastest. Linear search is used to calculate the inverse of the cumulative sampling distribution (find the confidence interval half-width), and tables of calculated half-widths are presented for sample sizes up to 500. Using calculated half-widths as input, computational times for the fastest formula, the Durbin recursion formula, are given for sample sizes up to two thousand.
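
    For reference, SciPy exposes the exact finite-n two-sided K-S sampling distribution, the same quantity the paper's rational-arithmetic formulae compute (in floating point rather than exact arithmetic; illustrative only, with invented data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(size=25)              # data drawn under the null

# Two-sided one-sample K-S test against the Uniform(0, 1) CDF.
res = stats.kstest(x, stats.uniform.cdf, mode="exact")

# stats.kstwo is the exact sampling distribution of the two-sided
# statistic for finite n; the p value is its survival function.
p = stats.kstwo.sf(res.statistic, x.size)
```

    For small n this exact computation is where the floating-point methods the paper criticizes can lose accuracy, motivating the rational-arithmetic implementations.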

  15. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted through pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
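
    The pre-sample-then-reweight idea can be sketched on a one-dimensional toy limit state (an invented example, not the AP1000 passive residual heat removal model):

```python
import numpy as np

rng = np.random.default_rng(4)

def performance(x):
    return 3.0 - x            # toy limit state: "failure" when x > 3

# 1) Crude pre-sampling to locate the failure region (input x ~ N(0, 1)).
pre = rng.standard_normal(200_000)
fail = pre[performance(pre) < 0]

# 2) Fit an importance sampling density to the failure-region pre-samples.
mu, sigma = fail.mean(), max(fail.std(), 0.5)

# 3) Importance-sampling estimate: draw from N(mu, sigma) and reweight
#    by the density ratio target / proposal.
xs = rng.normal(mu, sigma, 50_000)
w = sigma * np.exp(-0.5 * xs**2 + 0.5 * ((xs - mu) / sigma) ** 2)
p_fail = np.mean((performance(xs) < 0) * w)   # ~ 1 - Phi(3) = 1.35e-3
```

    Because the proposal concentrates samples in the failure region, far fewer evaluations of the (in practice, expensive) performance function are wasted than with crude Monte Carlo.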

  16. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    In practical use of the stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. Allocation of the sample size becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real-life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical program. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
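
    For contrast with the paper's multivariate, nonlinear-cost formulation, the classical single-characteristic Neyman allocation under a linear cost function takes only a few lines (illustrative stratum sizes and standard deviations; integer rounding can perturb the total):

```python
import numpy as np

def neyman_allocation(N_h, S_h, n):
    """Neyman allocation: n_h proportional to N_h * S_h for stratum
    sizes N_h and stratum standard deviations S_h, total sample size n.
    (Single characteristic, linear cost; the paper generalizes this to
    several characteristics plus a travel-cost term.)"""
    N_h = np.asarray(N_h, dtype=float)
    S_h = np.asarray(S_h, dtype=float)
    share = N_h * S_h / np.sum(N_h * S_h)
    return np.maximum(1, np.round(n * share).astype(int))

alloc = neyman_allocation([500, 300, 200], [10.0, 4.0, 1.0], 100)
```

    The more variable and larger strata receive proportionally more of the sample, which is exactly the precision-maximizing behavior the compromise allocation must trade off across several characteristics at once.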

  17. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris

    2017-06-26

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.
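
    The geometric ingredient, distances on the Riemannian manifold of positive definite matrices, can be sketched with a toy distance-based notion of centrality (this is an illustrative stand-in, not the paper's pointwise or integrated depth functions; the matrices are invented):

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices."""
    s = np.linalg.inv(sqrtm(A))
    return float(np.linalg.norm(logm(s @ B @ s)))

def geodesic_depth(S, sample):
    # Toy depth: central matrices are, on average, close to the sample.
    return -np.mean([riemannian_distance(S, X) for X in sample])

I2 = np.eye(2)
sample = [c * I2 for c in (0.5, 1.0, 2.0)]   # small cloud of SPD matrices
# the identity sits inside the cloud; 100*I is a clear outlier
```

    Ranking matrices by such a depth value is what enables the nonparametric, rank-based tests described in the abstract.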

  19. Analytical fitting model for rough-surface BRDF.

    Science.gov (United States)

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
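
    An analogous fit can be done in Python with SciPy's curve_fit in place of Mathematica's FindFit, here on a toy two-term scatter model; the model form, parameter names, and data are illustrative stand-ins, not the paper's fourteen-parameter BRDF:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy scatter model: a specular lobe plus a diffuse floor.
def brdf_model(theta, amp, width, floor):
    return amp * np.exp(-(theta / width) ** 2) + floor

theta = np.linspace(0.0, 1.2, 50)              # scatter angle, radians
measured = brdf_model(theta, 5.0, 0.3, 0.1)    # noise-free synthetic data

popt, _ = curve_fit(brdf_model, theta, measured, p0=[1.0, 0.5, 0.0])
# popt recovers the generating parameters (5.0, 0.3, 0.1)
```

    As in the paper's workflow, once the parameters are fixed by the fit, the analytic form can be evaluated cheaply at any geometry.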

  20. Modulation transfer function cascade model for a sampled IR imaging system.

    Science.gov (United States)

    de Luca, L; Cardone, G

    1991-05-01

    The performance of the infrared scanning radiometer (IRSR) is strongly stressed in convective heat transfer applications where high spatial frequencies in the signal that describes the thermal image are present. The need to characterize more deeply the system spatial resolution has led to the formulation of a cascade model for the evaluation of the actual modulation transfer function of a sampled IR imaging system. The model can yield both the aliasing band and the averaged modulation response for a general sampling subsystem. For a line scan imaging system, which is the case of a typical IRSR, a rule of thumb that states whether the combined sampling-imaging system is either imaging-dependent or sampling-dependent is proposed. The model is tested by comparing it with other noncascade models as well as by ad hoc measurements performed on a commercial digitized IRSR.
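
    The cascade idea, multiplying component MTFs to obtain the overall system response, can be sketched as follows (the component forms and blur parameters are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Spatial frequencies normalized to the sampling frequency, up to Nyquist.
f = np.linspace(0.0, 0.5, 51)

mtf_optics = np.exp(-2.0 * (np.pi * 0.4 * f) ** 2)   # Gaussian optics blur
mtf_detector = np.abs(np.sinc(f))    # rectangular aperture; np.sinc is
                                     # the normalized sin(pi x)/(pi x)
mtf_system = mtf_optics * mtf_detector               # cascade product
```

    In a sampled imager the analysis does not stop here: the sampling subsystem folds content above Nyquist back into this band, which is the aliasing contribution the cascade model of the paper quantifies.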

  1. Screening disrupted molecular functions and pathways associated with clear cell renal cell carcinoma using Gibbs sampling.

    Science.gov (United States)

    Nan, Ning; Chen, Qi; Wang, Yu; Zhai, Xu; Yang, Chuan-Ce; Cao, Bin; Chong, Tie

    2017-10-01

    To explore the disturbed molecular functions and pathways in clear cell renal cell carcinoma (ccRCC) using Gibbs sampling. Gene expression data of ccRCC samples and adjacent non-tumor renal tissues were recruited from a publicly available database. Then, molecular functions of differentially expressed genes in ccRCC were classified using the Gene Ontology (GO) project, and these molecular functions were converted into Markov chains. A Markov chain Monte Carlo (MCMC) algorithm was implemented to perform posterior inference and identify probability distributions of molecular functions in Gibbs sampling. Differentially expressed molecular functions were selected at a posterior value greater than 0.95, and genes appearing in differentially expressed molecular functions ≥5 times were defined as pivotal genes. Functional analysis was employed to explore the pathways of pivotal genes and their strongly co-regulated genes. In this work, we obtained 396 molecular functions, and 13 of them were differentially expressed. Oxidoreductase activity showed the highest posterior value. Gene composition analysis identified 79 pivotal genes, and survival analysis indicated that these pivotal genes could be used as a strong independent predictor of poor prognosis in patients with ccRCC. Pathway analysis identified one pivotal pathway, oxidative phosphorylation. We identified the differentially expressed molecular functions and the pivotal pathway in ccRCC using Gibbs sampling. The results could be considered as potential signatures for early detection and therapy of ccRCC. Copyright © 2017 Elsevier Ltd. All rights reserved.
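
    A minimal Gibbs sampler illustrates the MCMC machinery used here, on a bivariate normal target rather than the paper's molecular-function Markov chains (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Gibbs sampling for a bivariate normal with correlation rho: each full
# conditional is x_i | x_j ~ N(rho * x_j, 1 - rho^2), sampled in turn.
rho, n = 0.8, 20_000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = rng.normal(rho * y[t - 1], np.sqrt(1 - rho**2))
    y[t] = rng.normal(rho * x[t], np.sqrt(1 - rho**2))

burn = 1000                                   # discard the warm-up draws
corr = np.corrcoef(x[burn:], y[burn:])[0, 1]  # approaches rho = 0.8
```

    After burn-in, summaries of the retained draws approximate the posterior quantities of interest, which is how the posterior values above 0.95 in the abstract are obtained.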

  2. Physical, mental, and cognitive function in a convenience sample of centenarians in Australia.

    Science.gov (United States)

    Richmond, Robyn L; Law, Jenaleen; Kay-Lambkin, Frances

    2011-06-01

    To examine the physical, mental, and cognitive function of centenarians. Descriptive study using a structured questionnaire and convenience sampling. Residential care facilities and private dwellings in Australia. A convenience sample of 188 centenarians. The Hospital Anxiety and Depression Scale (HADS) screened for anxiety and depression. The Katz Index of Independence in Activities of Daily Living (Katz ADL) was used to assess functional status. The Quality of Life Scale was used to assess quality of life. The Mini-Mental State Examination (MMSE) was used to screen for dementia. Structured responses were obtained for living arrangement, marital status, social relationships, and supports. Centenarians had regular contact with friends (59%), neighbors (62%), and families (72%); 54% were religious and 43.5% had received social supports. Average MMSE and Katz ADL scores were 21.5 and 3.7, respectively; 45% had scores on the MMSE indicative of dementia, 10% indicated anxiety and 14% depression on the HADS. Participants with poor ratings of health experienced higher rates of anxiety and depression than their healthier counterparts. In this convenience sample of Australian centenarians, anxiety and depression were largely absent, and most reported a high quality of life. This was despite objective deterioration in functional status, paralleling the aging process, and high dependence on others for everyday tasks. Potentially, this is suggestive of a unique ability within the sample to adapt to aging and its limitations. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.

  3. Importance sampling and histogrammic representations of reactivity functions and product distributions in Monte Carlo quasiclassical trajectory calculations

    International Nuclear Information System (INIS)

    Faist, M.B.; Muckerman, J.T.; Schubert, F.E.

    1978-01-01

    The application of importance sampling as a variance reduction technique in Monte Carlo quasiclassical trajectory calculations is discussed. Two measures are proposed which quantify the quality of the importance sampling used and indicate whether further improvements may be obtained by some other choice of importance sampling function. A general procedure for constructing standardized histogrammic representations of differential functions which integrate to the appropriate integral value obtained from a trajectory calculation is presented. Two criteria for "optimum" binning of these histogrammic representations of differential functions are suggested. These are (1) that each bin makes an equal contribution to the integral value, and (2) that each bin has the same relative error. Numerical examples illustrating these sampling and binning concepts are provided.

  4. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.
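
    The KDE step can be sketched as follows: a density estimated from the user-selected samples acts as a transfer function that assigns high opacity to similar attribute vectors. This uses invented 2-attribute data and SciPy's gaussian_kde as a stand-in for the paper's HDTF pipeline:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Two "materials" in a two-attribute multivariate volume.
feature = rng.normal([0.8, 0.2], 0.05, (300, 2))     # user-probed samples
background = rng.normal([0.3, 0.6], 0.05, (300, 2))

# KDE over the user-selected samples: opacity of a voxel is driven by
# the estimated density at its attribute vector.
kde = gaussian_kde(feature.T)        # gaussian_kde wants shape (d, n)

voxel_like_feature = np.array([[0.8], [0.2]])
voxel_like_background = np.array([[0.3], [0.6]])
# the probed feature gets a much higher density, hence higher opacity
```

    Thresholding or normalizing this density yields the automatically generated transfer function that the user then refines with segmentation brushes.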

  6. Method for utilizing properties of the sinc(x) function for phase retrieval on nyquist-under-sampled data

    Science.gov (United States)

    Dean, Bruce H. (Inventor); Smith, Jeffrey Scott (Inventor); Aronstein, David L. (Inventor)

    2012-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for simulating propagation of an electromagnetic field, performing phase retrieval, or sampling a band-limited function. A system practicing the method generates transformed data using a discrete Fourier transform which samples a band-limited function f(x) without interpolating or modifying received data associated with the function f(x), wherein an interval between repeated copies in a periodic extension of the function f(x) obtained from the discrete Fourier transform is associated with a sampling ratio Q, defined as a ratio of a sampling frequency to a band-limited frequency, and wherein Q is assigned a value between 1 and 2 such that substantially no aliasing occurs in the transformed data, and retrieves a phase in the received data based on the transformed data, wherein the phase is used as feedback to an optical system.

  7. Expression of Biglycan in First Trimester Chorionic Villous Sampling Placental Samples and Altered Function in Telomerase-Immortalized Microvascular Endothelial Cells

    NARCIS (Netherlands)

    Chui, Amy; Gunatillake, Tilini; Brennecke, Shaun P.; Ignjatovic, Vera; Monagle, Paul T.; Whitelock, John M.; van Zanten, Dagmar E.; Eijsink, Jasper; Wang, Yao; Deane, James; Borg, Anthony J.; Stevenson, Janet; Erwich, Jan Jaap; Said, Joanne M.; Murthi, Padma

    Objective: Biglycan (BGN) has reduced expression in placentae from pregnancies complicated by fetal growth restriction (FGR). We used first trimester placental samples from pregnancies with later small for gestational age (SGA) infants as a surrogate for FGR. The functional consequences of reduced

  8. Large-scale prospective T cell function assays in shipped, unfrozen blood samples

    DEFF Research Database (Denmark)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J

    2014-01-01

    , for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within...... cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities...... North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T...

  9. Maternal Drug Abuse History, Maltreatment, and Functioning in a Clinical Sample of Urban Children

    Science.gov (United States)

    Onigu-Otite, Edore C.; Belcher, Harolyn M. E.

    2012-01-01

    Objective: This study examined the association between maternal drug abuse history, maltreatment exposure, and functioning, in a clinical sample of young children seeking therapy for maltreatment. Methods: Data were collected on 91 children, mean age 5.3 years (SD 1.0). The Preschool and Early Childhood Functional Assessment Scales (PECFAS) was…

  10. A COMPLETE SAMPLE OF BRIGHT SWIFT LONG GAMMA-RAY BURSTS. I. SAMPLE PRESENTATION, LUMINOSITY FUNCTION AND EVOLUTION

    International Nuclear Information System (INIS)

    Salvaterra, R.; Campana, S.; Vergani, S. D.; Covino, S.; D'Avanzo, P.; Fugazza, D.; Ghirlanda, G.; Ghisellini, G.; Melandri, A.; Sbarufatti, B.; Tagliaferri, G.; Nava, L.; Flores, H.; Piranomonte, S.

    2012-01-01

    We present a carefully selected sub-sample of Swift long gamma-ray bursts (GRBs) that is complete in redshift. The sample is constructed by considering only bursts with favorable observing conditions for ground-based follow-up searches, which are bright in the 15-150 keV Swift/BAT band, i.e., with 1-s peak photon fluxes in excess of 2.6 photons s⁻¹ cm⁻². The sample is composed of 58 bursts, 52 of them with redshift, for a completeness level of 90%, while another two have a redshift constraint, reaching a completeness level of 95%. For only three bursts we have no constraint on the redshift. The high level of redshift completeness allows us for the first time to constrain the GRB luminosity function and its evolution with cosmic time in an unbiased way. We find that strong evolution in luminosity (δ_l = 2.3 ± 0.6) or in density (δ_d = 1.7 ± 0.5) is required in order to account for the observations. The derived redshift distributions in the two scenarios are consistent with each other, in spite of their different intrinsic redshift distributions. This calls for other indicators to distinguish among different evolution models. Complete samples are at the base of any population study. In future works we will use this unique sample of Swift bright GRBs to study the properties of the population of long GRBs.

  11. A COMPLETE SAMPLE OF BRIGHT SWIFT LONG GAMMA-RAY BURSTS. I. SAMPLE PRESENTATION, LUMINOSITY FUNCTION AND EVOLUTION

    Energy Technology Data Exchange (ETDEWEB)

    Salvaterra, R. [INAF, IASF Milano, via E. Bassini 15, I-20133 Milano (Italy); Campana, S.; Vergani, S. D.; Covino, S.; D' Avanzo, P.; Fugazza, D.; Ghirlanda, G.; Ghisellini, G.; Melandri, A.; Sbarufatti, B.; Tagliaferri, G. [INAF, Osservatorio Astronomico di Brera, via E. Bianchi 46, I-23807 Merate (Saint Lucia) (Italy); Nava, L. [SISSA, via Bonomea 265, I-34136 Trieste (Italy); Flores, H. [Laboratoire GEPI, Observatoire de Paris, CNRS-UMR8111, Univ. Paris-Diderot 5 place Jules Janssen, 92195 Meudon (France); Piranomonte, S., E-mail: ruben@lambrate.inaf.it [INAF, Osservatorio Astronomico di Roma, via Frascati 33, 00040 Monte Porzio Catone, Rome (Italy)

    2012-04-10

    We present a carefully selected sub-sample of Swift long gamma-ray bursts (GRBs) that is complete in redshift. The sample is constructed by considering only bursts with favorable observing conditions for ground-based follow-up searches, which are bright in the 15-150 keV Swift/BAT band, i.e., with 1-s peak photon fluxes in excess of 2.6 photons s⁻¹ cm⁻². The sample is composed of 58 bursts, 52 of them with redshift, for a completeness level of 90%, while another two have a redshift constraint, reaching a completeness level of 95%. For only three bursts we have no constraint on the redshift. The high level of redshift completeness allows us for the first time to constrain the GRB luminosity function and its evolution with cosmic times in an unbiased way. We find that strong evolution in luminosity (δ_l = 2.3 ± 0.6) or in density (δ_d = 1.7 ± 0.5) is required in order to account for the observations. The derived redshift distributions in the two scenarios are consistent with each other, in spite of their different intrinsic redshift distributions. This calls for other indicators to distinguish among different evolution models. Complete samples are at the base of any population studies. In future works we will use this unique sample of Swift bright GRBs to study the properties of the population of long GRBs.

  12. Taxonomic and functional profiles of soil samples from Atlantic forest and Caatinga biomes in northeastern Brazil.

    Science.gov (United States)

    Pacchioni, Ralfo G; Carvalho, Fabíola M; Thompson, Claudia E; Faustino, André L F; Nicolini, Fernanda; Pereira, Tatiana S; Silva, Rita C B; Cantão, Mauricio E; Gerber, Alexandra; Vasconcelos, Ana T R; Agnez-Lima, Lucymara F

    2014-06-01

    Although microorganisms play crucial roles in ecosystems, metagenomic analyses of soil samples are still scarce, especially in the Southern Hemisphere. In this work, the microbial diversity of soil samples from the Atlantic Forest and Caatinga biomes was analyzed using a metagenomic approach. Proteobacteria and Actinobacteria were the dominant phyla in both samples, among which a significant proportion of stress-resistant bacteria associated with organic matter degradation was found. Sequences related to the metabolism of amino acids, nitrogen, and DNA and to stress resistance were more frequent in the Caatinga soil, while the forest sample showed the highest occurrence of hits annotated in the phosphorus metabolism, defense mechanisms, and aromatic compound degradation subsystems. Principal component analysis (PCA) showed that our samples are close to desert metagenomes with respect to taxonomy, but are more similar to rhizosphere microbiota with respect to functional profiles. The data indicate that soil characteristics affect the taxonomic and functional distribution; these characteristics include low nutrient content, high drainage (both are sandy soils), vegetation, and exposure to stress. In both samples, a rapid turnover of organic matter with low greenhouse gas emission was suggested by the functional profiles obtained, reinforcing the importance of preserving natural areas. © 2014 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.

  13. Non-parametric system identification from non-linear stochastic response

    DEFF Research Database (Denmark)

    Rüdinger, Finn; Krenk, Steen

    2001-01-01

    An estimation method is proposed for identification of non-linear stiffness and damping of single-degree-of-freedom systems under stationary white noise excitation. Non-parametric estimates of the stiffness and damping along with an estimate of the white noise intensity are obtained by suitable...... of the energy at mean-level crossings, which yields the damping relative to white noise intensity. Finally, an estimate of the noise intensity is extracted by estimating the absolute damping from the autocovariance functions of a set of modified phase plane variables at different energy levels. The method...
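
    The autocovariance functions used in this identification step can be estimated directly from a sampled response. A minimal numpy sketch of the biased sample autocovariance estimator (the function name and the 1/n normalization are illustrative choices, not taken from the paper):

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance: gamma(k) = (1/n) * sum_t (x_t - xbar)(x_{t+k} - xbar)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    return np.array([xc[: n - k] @ xc[k:] for k in range(max_lag + 1)]) / n

# Toy series with mean 2.5: gamma(0) = 1.25, gamma(1) = 0.3125
gamma = sample_autocovariance(np.array([1.0, 2.0, 3.0, 4.0]), 2)
```

    The 1/n normalization (rather than 1/(n-k)) is the usual choice because it keeps the estimated autocovariance sequence positive semi-definite.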

  14. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis.

    Science.gov (United States)

    Forester, James D; Im, Hae Kyung; Rathouz, Paul J

    2009-12-01

    Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. 
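
    The case-control logistic regression at the heart of SSF fitting can be illustrated with a generic numpy sketch (the simulated covariates, true coefficients, and Newton-Raphson fitter below are hypothetical, not the authors' elk analysis): a "distance" covariate enters the model alongside the resource covariate, as the authors recommend.

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Plain Newton-Raphson (IRLS) fit of a logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                          # IRLS weights
        H = X.T @ (W[:, None] * X)                 # Hessian X' W X
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# Simulated use/availability data: selection for resource z, penalized by distance d
rng = np.random.default_rng(3)
n = 20000
z = rng.standard_normal(n)                  # resource covariate
d = rng.exponential(1.0, n)                 # distance from the current location
lin = 0.5 + 1.0 * z - 0.8 * d               # true (illustrative) coefficients
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)
X = np.column_stack([np.ones(n), z, d])
beta_hat = logistic_fit(X, y)               # approximately [0.5, 1.0, -0.8]
```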

  15. Functional inverted Wishart for Bayesian multivariate spatial modeling with application to regional climatology model data.

    Science.gov (United States)

    Duan, L L; Szczesniak, R D; Wang, X

    2017-11-01

    Modern environmental and climatological studies produce multiple outcomes at high spatial resolutions. Multivariate spatial modeling is an established means to quantify cross-correlation among outcomes. However, existing models typically suffer from poor computational efficiency and lack the flexibility to simultaneously estimate auto- and cross-covariance structures. In this article, we undertake a novel construction of covariance by utilizing spectral convolution and by imposing an inverted Wishart prior on the cross-correlation structure. The cross-correlation structure with this functional inverted Wishart prior flexibly accommodates not only positive but also weak or negative associations among outcomes while preserving spatial resolution. Furthermore, the proposed model is computationally efficient and produces easily interpretable results, including the individual autocovariances and full cross-correlation matrices, as well as a partial cross-correlation matrix reflecting the outcome correlation after excluding the effects caused by spatial convolution. The model is examined using simulated data sets under different scenarios. It is also applied to the data from the North American Regional Climate Change Assessment Program, examining long-term associations between surface outcomes for air temperature, pressure, humidity, and radiation, on the land area of the North American West Coast. Results and predictive performance are compared with findings from approaches using convolution only or coregionalization.
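
    The inverted Wishart prior at the core of this construction can be sampled with a few lines of numpy. The sketch below (dimension and parameter values are illustrative) draws X ~ Inverse-Wishart(df, psi) by inverting a Wishart(df, psi^-1) draw generated through the Bartlett decomposition, and checks the draws against the known mean psi / (df - p - 1).

```python
import numpy as np

def wishart_rvs(df, scale, rng):
    """One draw W ~ Wishart(df, scale) via the Bartlett decomposition."""
    p = scale.shape[0]
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))   # chi variates on the diagonal
        for j in range(i):
            A[i, j] = rng.standard_normal()        # standard normals below it
    LA = np.linalg.cholesky(scale) @ A
    return LA @ LA.T

def inv_wishart_rvs(df, psi, rng):
    """One draw X ~ Inverse-Wishart(df, psi), as the inverse of a Wishart draw."""
    return np.linalg.inv(wishart_rvs(df, np.linalg.inv(psi), rng))

rng = np.random.default_rng(0)
df, psi = 10, np.eye(2)
mean_est = sum(inv_wishart_rvs(df, psi, rng) for _ in range(4000)) / 4000
# theoretical mean: psi / (df - p - 1) = I / 7
```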

  17. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed interval Kalman filter based deconvolution routine written in C is employed. It uses a state...... resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse the axial resolution at Full-Width-Half-Max is 0.581 mm, corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ, corresponding...... to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core on an Intel i5 CPU running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds making it possible to perform real-time deconvolution on ultrasound data...

  18. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mouse data. The HIBS algorithm derives the peak of the input function from the image, which is corrected for recovery, while the tail is derived from 5 to 6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50-mCi bolus administration of (18)F-FDG. In total, 21 blood samples (blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session for comparison with the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted in comparison to the HIBS-derived curve due to temporal limitations and delay in blood sampling, which affected the rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results and is thus a substitute for bsPTAC.
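
    The Bezier bridge between the last usable image point and the first blood sample can be sketched as follows; the paper does not specify the curve order or control-point placement, so the cubic form and default control points here are illustrative.

```python
import numpy as np

def bezier_bridge(p0, p3, p1=None, p2=None, num=50):
    """Cubic Bezier curve from p0 to p3 in the (time, activity) plane.

    Control points p1 and p2 shape the transition; by default they are placed
    on the chord, which reduces the curve to a straight-line bridge."""
    p0, p3 = np.asarray(p0, float), np.asarray(p3, float)
    if p1 is None:
        p1 = p0 + (p3 - p0) / 3.0
    if p2 is None:
        p2 = p0 + 2.0 * (p3 - p0) / 3.0
    t = np.linspace(0.0, 1.0, num)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Bridge a hypothetical last image-derived point (2 min, 80 kBq/ml)
# to the first blood sample (10 min, 15 kBq/ml)
curve = bezier_bridge([2.0, 80.0], [10.0, 15.0])
```

    Whatever the control points, the curve passes exactly through both endpoints, which is the property needed to join the image-derived peak to the sampled tail without a discontinuity.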

  19. Combined HSRL and Optical Autocovarience Wind Lidar Demonstration

    Data.gov (United States)

    National Aeronautics and Space Administration — Global observations of atmospheric aerosol scattering and extinction profiles are needed to directly support several Decadal survey missions (e.g. ACE, GACM)....

  20. Organizing heterogeneous samples using community detection of GIMME-derived resting state functional networks.

    Directory of Open Access Journals (Sweden)

    Kathleen M Gates

    Full Text Available Clinical investigations of many neuropsychiatric disorders rely on the assumption that diagnostic categories and typical control samples each have within-group homogeneity. However, research using human neuroimaging has revealed that much heterogeneity exists across individuals in both clinical and control samples. This reality necessitates that researchers identify and organize the potentially varied patterns of brain physiology. We introduce an analytical approach for arriving at subgroups of individuals based entirely on their brain physiology. The method begins with Group Iterative Multiple Model Estimation (GIMME) to assess individual directed functional connectivity maps. GIMME is one of the only methods to date that can recover both the direction and presence of directed functional connectivity maps in heterogeneous data, making it an ideal place to start since it addresses the problem of heterogeneity. Individuals are then grouped based on similarities in their connectivity patterns using a modularity approach for community detection. Monte Carlo simulations demonstrate that using GIMME in combination with the modularity algorithm works exceptionally well: on average, over 97% of simulated individuals are placed in the accurate subgroup with no prior information on functional architecture or group identity. Having demonstrated reliability, we examine resting-state data of fronto-parietal regions drawn from a sample (N = 80) of typically developing and attention-deficit/hyperactivity disorder (ADHD)-diagnosed children. Here, we find 5 subgroups. Two subgroups were predominantly comprised of children with ADHD, suggesting that more than one biological marker exists that can be used to identify children with ADHD from their brain physiology. Empirical evidence presented here supports notions that heterogeneity exists in brain physiology within ADHD and control samples. This type of information gained from the approach presented here can assist in

  1. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

    Directory of Open Access Journals (Sweden)

    Huaiqing Zhang

    2014-01-01

    Full Text Available The spectral leakage has a harmful effect on the accuracy of harmonic analysis for asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. First, the fundamental period is evaluated by a zero-crossing technique with fourth-order Newton interpolation; then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters can be calculated by FFT on the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, the RBF is a global approximation method which can acquire more accurate results, while the computation time is about the same as Newton's.
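
    The resample-then-FFT pipeline can be sketched with a Gaussian RBF interpolant (the paper's exact basis function and shape parameter are not reproduced here; the values below are illustrative):

```python
import numpy as np

def rbf_interpolate(t, x, t_new, eps):
    """Gaussian RBF interpolation: solve for node weights, evaluate at t_new."""
    Phi = np.exp(-(eps * (t[:, None] - t[None, :])) ** 2)
    w = np.linalg.solve(Phi, x)
    return np.exp(-(eps * (t_new[:, None] - t[None, :])) ** 2) @ w

# Asynchronously sampled sine (one period, nodes offset from the sync grid)
t = np.linspace(0.0, 1.0, 24, endpoint=False) + 0.013
x = np.sin(2.0 * np.pi * t)

# Reproduce the sequence on a synchronized grid, then run the FFT
t_sync = np.linspace(0.0, 1.0, 64, endpoint=False)
x_sync = rbf_interpolate(t, x, t_sync, eps=24.0)
X = np.fft.rfft(x_sync)
fundamental = 2.0 * np.abs(X[1]) / t_sync.size   # close to the true amplitude 1
```

    The shape parameter eps trades accuracy against conditioning of the interpolation matrix; production code would select it (or add regularization) rather than hard-code it.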

  2. Sampled control of vibration in suspended cask by using vibration manipulation functions

    International Nuclear Information System (INIS)

    Kotake, Shigeo

    2014-01-01

    Safe and reliable operation is most important for decommissioning the Fukushima 1 nuclear power plant, especially for transferring spent nuclear fuel from the fuel pool to a storage cask. Since the heavy cask will be suspended during the transfer operation, there is a risk of dropping it in the event of a large earthquake. In this study, we introduce analytical functions to suppress residual vibration of a suspended cask by using vibration manipulation functions. Since the oscillation of the cask can be feedforward- or sampled-data-controlled by moving a trolley with an analog actuator, the possible risk can be reduced. (author)
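
    The abstract does not give the analytical form of the vibration manipulation functions, but the underlying idea can be illustrated with the classic two-impulse (zero-vibration) input shaper: splitting the trolley command into two half-steps separated by half the swing period cancels the residual oscillation of an undamped pendulum exactly.

```python
import numpy as np

def swing_response(t, w, steps):
    """Superposed step responses of an undamped pendulum (natural frequency w)
    to trolley command steps given as (time, amplitude) pairs; the unit-step
    response for t >= t0 is 1 - cos(w (t - t0))."""
    x = np.zeros_like(t)
    for t0, a in steps:
        x += a * (1.0 - np.cos(w * (t - t0))) * (t >= t0)
    return x

w = 2.0 * np.pi                 # hypothetical 1 Hz swing frequency
half_period = np.pi / w
t = np.linspace(1.0, 3.0, 200)  # evaluate after both half-steps have fired

unshaped = swing_response(t, w, [(0.0, 1.0)])                    # keeps swinging
shaped = swing_response(t, w, [(0.0, 0.5), (half_period, 0.5)])  # settles at 1
```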

  3. Gas and liquid sampling for closed canisters in K-West basins - functional design criteria

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1994-01-01

    The purpose of this document is to provide functions and requirements for the design and fabrication of equipment for sampling closed canisters in the K-West basin. The samples will be used to help determine the state of the fuel elements in closed canisters. The characterization information obtained will support evaluation and development of processes required for safe storage and disposition of Spent Nuclear Fuel (SNF) materials

  4. Mechanical properties and filler distribution as a function filler content in silica filled PDMS samples

    International Nuclear Information System (INIS)

    Hawley, Marilyn E.; Wrobleski, Debra A.; Orler, E. Bruce; Houlton, Robert J.; Chitanvis, Kiran E.; Brown, Geoffrey W.; Hanson, David E.

    2004-01-01

    Atomic force microscopy (AFM) phase imaging and tensile stress-strain measurements are used to study a series of model compression molded fumed silica filled polydimethylsiloxane (PDMS) samples with filler contents of zero, 20, 35, and 50 parts per hundred (phr) to determine the relationship between filler content and stress-strain properties. AFM phase imaging was used to determine filler size, degree of aggregation, and distribution within the soft PDMS matrix. A small tensile stage was used to measure mechanical properties. Samples were not pulled to break in order to study Mullins and aging effects. Several identical 35 phr samples were subjected to an initial stress, and then one each was reevaluated over intervals up to 26 weeks to determine the degree to which these samples recovered their initial stress-strain behavior as a function of time. One sample was tested before and after heat treatment to determine if heating accelerated recovery of the stress-strain behavior. The effect of filler surface treatment on mechanical properties was examined for two samples containing 35 phr filler treated or untreated with hexamethyldisilazane (HMDZ), respectively. Fiduciary marks were used on several samples to determine permanent set. The 35 phr filler samples were found to give the optimum mechanical properties. A clear Mullins effect was seen. Within experimental error, no change was seen in mechanical behavior as a function of time or heat treatment. The mechanical properties of the sample containing the HMDZ-treated silica were adversely affected. AFM phase images revealed aggregation and nonuniform distribution of the filler in all samples. Finally, a permanent set of about 3 to 6 percent was observed for the 35 phr samples.

  5. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    Science.gov (United States)

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  6. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  7. Plasma functionalization of powdery nanomaterials using porous filter electrode and sample circulation

    Science.gov (United States)

    Lee, Deuk Yeon; Choi, Jae Hong; Shin, Jung Chul; Jung, Man Ki; Song, Seok Kyun; Suh, Jung Ki; Lee, Chang Young

    2018-06-01

    Compared with wet processes, dry functionalization using plasma is fast, scalable, solvent-free, and thus presents a promising approach for grafting functional groups to powdery nanomaterials. Previous approaches, however, had difficulties in maintaining an intimate sample-plasma contact and achieving uniform functionalization. Here, we demonstrate a plasma reactor equipped with a porous filter electrode that increases both homogeneity and degree of functionalization by capturing and circulating powdery carbon nanotubes (CNTs) via vacuum and gas blowing. Spectroscopic measurements verify that treatment with O2/air plasma generates oxygen-containing groups on the surface of CNTs, with the degree of functionalization readily controlled by varying the circulation number. Gas sensors fabricated using the plasma-treated CNTs confirm alteration of molecular adsorption on the surface of CNTs. A sequential treatment with NH3 plasma following the oxidation pre-treatment results in the functionalization with nitrogen species of up to 3.2 wt%. Our approach requiring no organic solvents not only is cost-effective and environmentally friendly, but also serves as a versatile tool that applies to other powdery micro or nanoscale materials for controlled modification of their surfaces.

  8. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
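
    The pipeline described (Fourier transform, log squared norm against squared frequency, Gaussian-approximated PSF, MTF) can be sketched for the simplest test case, an image that is itself a Gaussian PSF; the grid size, sigma, and low-frequency fitting band below are illustrative.

```python
import numpy as np

# A point source blurred by a Gaussian PSF of width sigma pixels: the image IS the PSF.
n, sigma = 64, 2.0
y, x = np.mgrid[:n, :n] - n // 2
img = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))

# The log of the squared Fourier norm vs squared frequency should be linear.
F = np.fft.fft2(img)
f = np.fft.fftfreq(n)                       # cycles per pixel
f2 = f[None, :] ** 2 + f[:, None] ** 2      # squared distance from the origin
band = (f2 > 0) & (f2 < 0.04)               # fit the low-frequency band only
slope, _ = np.polyfit(f2[band], np.log(np.abs(F[band]) ** 2), 1)

# For a Gaussian PSF, log |F|^2 = const - 4 pi^2 sigma^2 f^2.
sigma_est = np.sqrt(-slope) / (2.0 * np.pi)   # recovers sigma

def mtf(freq, s=sigma_est):
    """Normalized MTF of the Gaussian-approximated PSF."""
    return np.exp(-2.0 * np.pi ** 2 * s ** 2 * freq ** 2)
```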

  9. Different Characteristics of the Female Sexual Function Index in a Sample of Sexually Active and Inactive Women.

    Science.gov (United States)

    Hevesi, Krisztina; Mészáros, Veronika; Kövi, Zsuzsanna; Márki, Gabriella; Szabó, Marianna

    2017-09-01

    The Female Sexual Function Index (FSFI) is a widely used measurement tool to assess female sexual function along the six dimensions of desire, arousal, lubrication, orgasm, satisfaction, and pain. However, the structure of the questionnaire is not clear, and several studies have found high correlations among the dimensions, indicating that a common underlying "sexual function" factor might be present. To investigate whether female sexual function is best understood as a multidimensional construct or, alternatively, whether a common underlying factor explains most of the variance in FSFI scores, and to investigate the possible effect of the common practice of including sexually inactive women in studies using the FSFI. The sample consisted of 508 women: 202 university students, 177 patients with endometriosis, and 129 patients with polycystic ovary syndrome. Participants completed the FSFI, and confirmatory factor analyses were used to test the underlying structure of this instrument in the total sample and in samples including sexually active women only. The FSFI is a multidimensional self-report questionnaire composed of 19 items. Strong positive correlations were found among five of the six original factors on the FSFI. Confirmatory factor analyses showed that in the total sample items loaded mainly on the general sexual function factor and very little variance was explained by the specific factors. However, when only sexually active women were included in the analyses, a clear factor structure emerged, with items loading on their six specific factors, and most of the variance in FSFI scores was explained by the specific factors, rather than the general factor. University students reported higher scores, indicating better functioning compared with the patient samples. The reliable and valid assessment of female sexual function can contribute to better understanding, prevention, and treatment of different sexual difficulties and dysfunctions. 

  10. A method for ion distribution function evaluation using escaping neutral atom kinetic energy samples

    International Nuclear Information System (INIS)

    Goncharov, P.R.; Ozaki, T.; Veshchev, E.A.; Sudo, S.

    2008-01-01

    A reliable method to evaluate the probability density function for escaping atom kinetic energies is required for the analysis of neutral particle diagnostic data used to study the fast ion distribution function in fusion plasmas. Digital processing of solid state detector signals is proposed in this paper as an improvement on the simple histogram approach. The probability density function for kinetic energies of neutral particles escaping from the plasma has been derived in a general form, taking into account the plasma ion energy distribution, electron capture and loss rates, superposition along the diagnostic sight line and the magnetic surface geometry. A pseudorandom number generator has been realized that enables a sample of escaping neutral particle energies to be simulated for given plasma parameters and experimental conditions. Empirical probability density estimation code has been developed and tested to reconstruct the probability density function from simulated samples, assuming Maxwellian and classical slowing-down plasma ion energy distribution shapes for different temperatures and different slowing-down times. The application of the developed probability density estimation code to the analysis of experimental data obtained by the novel Angular-Resolved Multi-Sightline Neutral Particle Analyzer has been studied to obtain the suprathermal particle distributions. The optimum bandwidth parameter selection algorithm has also been realized. (author)
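
    The empirical probability density estimation step can be illustrated with a plain Gaussian kernel density estimator; the rule-of-thumb bandwidth below stands in for the paper's optimum bandwidth selection algorithm, which is not reproduced here.

```python
import numpy as np

def gaussian_kde(samples, grid, h=None):
    """Gaussian kernel density estimate on `grid`; Silverman's rule by default."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if h is None:
        h = 1.06 * samples.std(ddof=1) * n ** (-1.0 / 5.0)  # rule-of-thumb bandwidth
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

# Simulated energy sample (a normal stands in for a Maxwellian here) and its density
rng = np.random.default_rng(0)
energies = rng.normal(10.0, 2.0, 2000)
grid = np.linspace(0.0, 20.0, 400)
density = gaussian_kde(energies, grid)
```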

  11. Closed-Form Representations of the Density Function and Integer Moments of the Sample Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Serge B. Provost

    2015-07-01

    Full Text Available This paper provides a simplified representation of the exact density function of R, the sample correlation coefficient. The odd and even moments of R are also obtained in closed forms. Being expressed in terms of generalized hypergeometric functions, the resulting representations are readily computable. Some numerical examples corroborate the validity of the results derived herein.
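
    Closed-form moments of this kind can be cross-checked by simulation. The sketch below compares the Monte Carlo mean of R for a bivariate normal against the classical expansion E[R] ≈ ρ(1 - (1 - ρ²)/(2n)), a standard first-order approximation rather than the exact hypergeometric expression from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n, reps = 0.5, 20, 50_000

# Bivariate normal samples with correlation rho, shape (reps, n)
z1 = rng.standard_normal((reps, n))
z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal((reps, n))

# Sample correlation coefficient of each replicate
z1c = z1 - z1.mean(axis=1, keepdims=True)
z2c = z2 - z2.mean(axis=1, keepdims=True)
r = (z1c * z2c).sum(axis=1) / np.sqrt(
    (z1c ** 2).sum(axis=1) * (z2c ** 2).sum(axis=1))

mean_r = r.mean()
approx = rho * (1.0 - (1.0 - rho ** 2) / (2.0 * n))   # 0.490625 for these settings
```

    Note the small-sample bias: even at n = 20 the mean of R sits visibly below ρ = 0.5, which is why the exact density and moments matter.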

  12. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
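
    A generic version of such a power calculation (not the paper's DIF-specific formulas, which depend on the item response model) follows the standard normal approximation for a two-sided Wald test: with true effect beta and per-observation standard deviation sigma_beta of the estimator, power at level alpha is approximately Phi(sqrt(n) |beta| / sigma_beta - z_crit).

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def wald_power(beta, sigma_beta, n, z_crit=1.959963985):
    """Approximate power of a two-sided Wald test of H0: beta = 0.

    beta: true effect size; sigma_beta: per-observation standard deviation of
    the estimator (so its standard error is sigma_beta / sqrt(n));
    z_crit: normal critical value (the default corresponds to alpha = 0.05)."""
    lam = sqrt(n) * abs(beta) / sigma_beta
    return norm_cdf(lam - z_crit) + norm_cdf(-lam - z_crit)

# Power grows with sample size for a fixed DIF effect (illustrative numbers)
p200 = wald_power(beta=0.4, sigma_beta=2.0, n=200)   # about 0.81
p800 = wald_power(beta=0.4, sigma_beta=2.0, n=800)   # near 1
```

    Inverting the same relation for a target power gives the required sample size, which is how formulas of this type are used in practice.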

  13. Spatial distribution and optimal harvesting of an age-structured population in a fluctuating environment.

    Science.gov (United States)

    Engen, Steinar; Lee, Aline Magdalena; Sæther, Bernt-Erik

    2018-02-01

    We analyze a spatial age-structured model with density regulation, age specific dispersal, stochasticity in vital rates and proportional harvesting. We include two age classes, juveniles and adults, where juveniles are subject to logistic density dependence. There are environmental stochastic effects with arbitrary spatial scales on all birth and death rates, and individuals of both age classes are subject to density independent dispersal with given rates and specified distributions of dispersal distances. We show how to simulate the joint density fields of the age classes and derive results for the spatial scales of all spatial autocovariance functions for densities. A general result is that the squared scale has an additive term equal to the squared scale of the environmental noise, corresponding to the Moran effect, as well as additive terms proportional to the dispersal rate and variance of dispersal distance for the age classes and approximately inversely proportional to the strength of density regulation. We show that the optimal harvesting strategy in the deterministic case is to harvest only juveniles when their relative value (e.g. financial) is large, and otherwise only adults. With increasing environmental stochasticity there is an interval of increasing length of values of juveniles relative to adults where both age classes should be harvested. Harvesting generally tends to increase all spatial scales of the autocovariances of densities. Copyright © 2017. Published by Elsevier Inc.

  14. Clinical symptoms predict concurrent social and global functioning in an early psychosis sample.

    Science.gov (United States)

    Cacciotti-Saija, Cristina; Langdon, Robyn; Ward, Philip B; Hickie, Ian B; Guastella, Adam J

    2018-04-01

    Although well established in chronic schizophrenia, the key determinants of functioning remain unknown during the early phase of a psychotic disorder. The aim of this study was to comprehensively examine the social cognitive, basic neurocognitive and clinical predictors of concurrent social functioning and global functioning in an early psychosis sample. This study examined the relationship between social cognition, basic neurocognition and clinical symptoms with concurrent functioning in 51 early psychosis individuals. Assessments included a range of self-report, observational and clinician-rated measures of cognitive, symptom severity and functioning domains. Results revealed a significant association between self-reported social function and lower levels of both social interaction anxiety and negative psychotic symptoms. A significant association was also observed between lower levels of negative psychotic symptoms and observed social functioning. Lastly, results demonstrated a significant association between reduced negative psychotic symptoms and clinician-rated global functioning. Clinical domains such as negative symptoms and social interaction anxiety significantly contribute to an optimal model predicting outcome during the early phase of a psychotic disorder. These clinical features may also provide useful markers of an individual's capacity for social participation. Clinical implications include the need for early targeted intervention to address social anxiety and negative psychotic symptoms to facilitate optimum patient outcome. © 2015 Wiley Publishing Asia Pty Ltd.

  15. Exploring the Relationship between Family Functioning and Psycho-Pathology in a Sample in the Pediatric Age.

    Science.gov (United States)

    Pepe, Silvia; Tortolani, Daniela; Gentile, Simonetta; Di Ciommo, Vincenzo M

    2015-03-17

The purpose of this study was to investigate differences in family functioning between families with clinical subjects in paediatric age and families drawn from the Italian population. To this aim we used the Family Adaptability and Cohesion Evaluation Scale (FACES). Participants were children diagnosed with a psychopathology, recruited from the psychiatry department of a paediatric hospital in Rome. A total of 106 families participated in the study. The non-pathological sample comprised 2,543 parents at different stages of the life-cycle. Results showed significant differences in family functioning between the pathological and non-pathological samples. Specifically, families from the pathological sample (particularly those of children with eating disorders) were more frequently located in extreme or mid-range regions of Olson's circumplex model than families from the non-pathological sample. Critical aspects and clinical applications are discussed.

  16. The application of statistical and/or non-statistical sampling techniques by internal audit functions in the South African banking industry

    Directory of Open Access Journals (Sweden)

    D.P. van der Nest

    2015-03-01

This article explores the use of audit sampling techniques by internal audit functions to test the effectiveness of controls in the banking sector. The article focuses specifically on the use of statistical and/or non-statistical sampling techniques by internal auditors. The focus of the research was internal audit functions in the banking sector of South Africa. The results discussed in the article indicate that audit sampling is still used frequently as an audit evidence-gathering technique. Non-statistical sampling techniques are used more frequently than statistical sampling techniques for the evaluation of the sample. In addition, both techniques are regarded as important for determining the sample size and selecting the sample items.

  17. Data Transformation Functions for Expanded Search Spaces in Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-04-01

Sample supervised image analysis, in particular sample supervised segment generation, shows promise as a methodological avenue applicable within Geographic Object-Based Image Analysis (GEOBIA). Segmentation is acknowledged as a constituent component within typically expansive image analysis processes. A general extension to the basic formulation of an empirical discrepancy measure directed segmentation algorithm parameter tuning approach is proposed. An expanded search landscape is defined, consisting not only of the segmentation algorithm parameters, but also of low-level, parameterized image processing functions. Such higher dimensional search landscapes potentially allow for achieving better segmentation accuracies. The proposed method is tested with a range of low-level image transformation functions and two segmentation algorithms. The general effectiveness of such an approach is demonstrated compared to a variant only optimising segmentation algorithm parameters. Further, it is shown that the resultant search landscapes obtained from combining mid- and low-level image processing parameter domains, in our problem contexts, are sufficiently complex to warrant the use of population based stochastic search methods. Interdependencies of these two parameter domains are also demonstrated, necessitating simultaneous optimization.
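The expanded search landscape amounts to optimising preprocessing and segmentation parameters jointly against a discrepancy measure. A stripped-down sketch with a stand-in objective (the paper uses real discrepancy measures and population-based stochastic search, not plain random search):

```python
import random

def discrepancy(params):
    """Stand-in for an empirical discrepancy measure between generated
    segments and reference segments (lower is better); the quadratic
    surface and its optimum at (1.5, 40.0) are arbitrary."""
    sigma, scale = params
    return (sigma - 1.5) ** 2 + ((scale - 40.0) / 20.0) ** 2

def random_search(n_iter=2000, seed=0):
    """Search jointly over a low-level image-processing parameter
    (e.g. a smoothing sigma in [0, 5]) and a segmentation algorithm
    parameter (e.g. a scale threshold in [1, 100])."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_iter):
        cand = (rng.uniform(0.0, 5.0), rng.uniform(1.0, 100.0))
        score = discrepancy(cand)
        if score < best_score:
            best_params, best_score = cand, score
    return best_params, best_score

best_params, best_score = random_search()
```

Replacing the loop with an evolutionary or swarm optimiser changes only the candidate-proposal step; the joint parameter vector and the discrepancy objective stay the same.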

  18. Dietary inflammatory index and memory function: population-based national sample of elderly Americans.

    Science.gov (United States)

    Frith, Emily; Shivappa, Nitin; Mann, Joshua R; Hébert, James R; Wirth, Michael D; Loprinzi, Paul D

    2018-03-01

    The objective of this study was to examine the association between dietary inflammatory potential and memory and cognitive functioning among a representative sample of the US older adult population. Cross-sectional data from the 2011-2012 and 2013-2014 National Health and Nutrition Examination Survey were utilised to identify an aggregate sample of adults 60-85 years of age (n 1723). Dietary inflammatory index (DII®) scores were calculated using 24-h dietary recall interviews. Three memory-related assessments were employed, including the Consortium to Establish a Registry for Alzheimer's disease (CERAD) Word Learning subset, the Animal Fluency test and the Digit Symbol Substitution Test (DSST). Inverse associations were observed between DII scores and the different memory parameters. Episodic memory (CERAD) (b adjusted=-0·39; 95 % CI -0·79, 0·00), semantic-based memory (Animal Fluency Test) (b adjusted=-1·18; 95 % CI -2·17, -0·20) and executive function and working-memory (DSST) (b adjusted=-2·80; 95 % CI -5·58, -0·02) performances were lowest among those with the highest mean DII score. Though inverse relationships were observed between DII scores and memory and cognitive functioning, future work is needed to further explore the neurobiological mechanisms underlying the complex relationship between inflammation-related dietary behaviour and memory and cognition.

  19. Analysis of Heavy-Tailed Time Series

    DEFF Research Database (Denmark)

    Xie, Xiaolei

    This thesis is about analysis of heavy-tailed time series. We discuss tail properties of real-world equity return series and investigate the possibility that a single tail index is shared by all return series of actively traded equities in a market. Conditions for this hypothesis to be true...... are identified. We study the eigenvalues and eigenvectors of sample covariance and sample auto-covariance matrices of multivariate heavy-tailed time series, and particularly for time series with very high dimensions. Asymptotic approximations of the eigenvalues and eigenvectors of such matrices are found...... and expressed in terms of the parameters of the dependence structure, among others. Furthermore, we study an importance sampling method for estimating rare-event probabilities of multivariate heavy-tailed time series generated by matrix recursion. We show that the proposed algorithm is efficient in the sense...
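The kind of object studied here can be sketched numerically: form the lag-0 sample covariance matrix of several heavy-tailed series and inspect its spectrum. The thesis treats sample autocovariance matrices at general lags and high-dimensional asymptotics; the dimensions and tail index below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# p heavy-tailed series of length n (symmetrised Pareto, tail index 1.5,
# so the variance is infinite); p, n and the tail index are arbitrary.
p, n = 20, 500
X = rng.pareto(1.5, size=(p, n)) * rng.choice([-1.0, 1.0], size=(p, n))
X = X - X.mean(axis=1, keepdims=True)

# Lag-0 sample (auto)covariance matrix across the p series and its spectrum;
# the lag-k analogue would use X[:, :-k] @ X[:, k:].T / n.
S = X @ X.T / n
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]

# Under heavy tails, theory predicts the spectrum is dominated by a few
# large eigenvalues tied to the most extreme observations.
top_share = eigvals[:3].sum() / eigvals.sum()
```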

  20. The Moderating Effect of Chronological Age on the Relation Between Neuroticism and Physical Functioning: Cross-Sectional Evidence From Two French Samples.

    Science.gov (United States)

    Canada, Brice; Stephan, Yannick; Jaconelli, Alban; Duberstein, Paul R

    2016-01-01

    Prior studies of age-restricted samples have demonstrated that, in older adulthood, neuroticism is negatively associated with difficulties performing specific daily activities. No studies of neuroticism and physical functioning have been conducted on life-span samples. This study tested the hypothesis that the relationship between neuroticism and physical functioning is stronger in older people compared with younger and middle-aged adults. Data were obtained from 2 independent French samples (n = 1,132 and 1,661 for Samples 1 and 2, respectively) ranging in age from 18 to 97. In addition to reporting sociodemographics, participants completed the Big Five Inventory, the physical functioning scale of the 36-Item Short Form Health Survey, and measures of disease burden. In both samples, regression analysis indicated that neuroticism is more negatively associated with physical functioning with advancing age, controlling for gender, marital status, disease burden, and educational attainment. In life-span samples of more than 2,700 adults, neuroticism was more strongly associated with worse physical functioning among older people compared with younger and middle-aged adults. Longitudinal research is needed to confirm this finding and to identify potential mediators. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. The Effect of Asymmetrical Sample Training on Retention Functions for Hedonic Samples in Rats

    Science.gov (United States)

    Simmons, Sabrina; Santi, Angelo

    2012-01-01

    Rats were trained in a symbolic delayed matching-to-sample task to discriminate sample stimuli that consisted of the presence of food or the absence of food. Asymmetrical sample training was provided in which one group was initially trained with only the food sample and the other group was initially trained with only the no-food sample. In…

  2. Generating Correlated Gamma Sequences for Sea-Clutter Simulation

    Science.gov (United States)

    2012-03-01

The generation of correlated Gamma random fields via SIRP theory is examined in [Conte et al. 1991, Armstrong & Griffiths 1991]. In these papers, the Gamma … ⟨z[n]z[n+k]⟩ = ⟨|x|²⟩² + |⟨x[n]x*[n+k]⟩|². (4) Because ⟨|x|²⟩² = z̄² and |⟨x[n]x*[n+k]⟩|² ≥ 0, this results in ⟨z[n]z[n+k]⟩ ≥ z̄² if the realisation of z[n] is … linear mapping. In a practical situation, a process with a given auto-covariance function would be specified. It is shown that by using an …
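A standard route to such sequences, consistent with the quadratic Gaussian moment relation above, is a memoryless nonlinear transform of correlated Gaussians: summing the squares of m independent AR(1) Gaussian processes yields a Gamma-marginal sequence whose autocorrelation at lag k is ρ^(2k). This is a generic sketch, not the report's exact SIRP construction:

```python
import numpy as np

def correlated_gamma(n, m, rho, rng):
    """Gamma-marginal sequence with temporal correlation, built by summing
    the squares of m independent stationary AR(1) Gaussian processes.

    Sum of m squared N(0,1) variables: Gamma(shape=m/2, scale=2), i.e.
    chi-square with m degrees of freedom (mean m, variance 2m). Each
    squared AR(1) factor has autocorrelation rho**(2*k) at lag k."""
    z = np.zeros(n)
    for _ in range(m):
        x = np.empty(n)
        x[0] = rng.standard_normal()                      # stationary start
        eps = rng.standard_normal(n - 1) * np.sqrt(1.0 - rho**2)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + eps[t - 1]
        z += x**2
    return z

rng = np.random.default_rng(7)
z = correlated_gamma(200_000, 4, 0.8, rng)
# Theory: mean = 4, variance = 8, lag-1 autocorrelation = 0.8**2 = 0.64
lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]
```

The construction only reaches half-integer Gamma shapes and autocorrelations of the form ρ^(2k); matching an arbitrary target auto-covariance, as in the report, requires solving for the Gaussian correlation behind the nonlinear mapping.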

  3. Parental socioeconomic status and child intellectual functioning in a Norwegian sample.

    Science.gov (United States)

    Eilertsen, Thomas; Thorsen, Anders Lillevik; Holm, Silje Elisabeth Hasmo; Bøe, Tormod; Sørensen, Lin; Lundervold, Astri J

    2016-10-01

Socioeconomic status (SES) in childhood has been linked to cognitive function and future academic and occupational success in studies from several countries. However, previous Nordic studies have shown inconsistent results regarding the strength of this link. We therefore investigated the association between SES and cognitive functioning in a sample of 255 Norwegian children, including 151 typically developing children and 104 children with a psychiatric diagnosis. The third edition of the Wechsler Intelligence Scale for Children (WISC-III) was used to assess cognitive function. SES was defined from maternal and paternal education and family income of typically developing children and of a subsample of children with a psychiatric diagnosis. Multiple adjusted regression analyses were used to investigate the relation between SES and cognitive functioning. The analyses showed that SES explained a significant part of the variance of the full-scale WISC-III score and two WISC-III indices (Verbal Comprehension and Freedom from Distractibility). Overall, the strength of the relations was weaker than expected from reports from other non-Nordic countries. Parental education was the only significant individual predictor, suggesting that income was of minor importance as a predictor of cognitive functioning. Further studies should investigate how diverse political and socioeconomic contexts influence the relation between SES and cognitive functioning. © 2016 The Authors. Scandinavian Journal of Psychology published by Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  4. High-throughput immunoturbidimetric assays for in-process determination of polyclonal antibody concentration and functionality in crude samples

    DEFF Research Database (Denmark)

    Bak, Hanne; Kyhse-Andersen, J.; Thomas, O.R.T.

    2007-01-01

    We present fast, simple immunoturbidimetric assays suitable for direct determination of antibody 'concentration' and 'functionality' in crude samples, such as in-process samples taken at various stages during antibody purification. Both assays display excellent linearity and analytical recovery. ...... antibodies, require only basic laboratory equipment, are robust, fast, cheap, easy to perform, and readily adapted to automation....

  5. Relational Intimacy Mediates Sexual Outcomes Associated With Impaired Sexual Function: Examination in a Clinical Sample.

    Science.gov (United States)

    Witherow, Marta Parkanyi; Chandraiah, Shambhavi; Seals, Samantha R; Sarver, Dustin E; Parisi, Kathryn E; Bugan, Antal

    2017-06-01

    Relational intimacy is hypothesized to underlie the association between female sexual functioning and various sexual outcomes, and married women and women with sexual dysfunction have been generally absent from prior studies investigating these associations, thus restricting generalizability. To investigate whether relational intimacy mediates sexual outcomes (sexual satisfaction, coital frequency, and sexual distress) in a sample of married women with and without impaired sexual functioning presenting in clinical settings. Using a cross-sectional design, 64 heterosexual married women with (n = 44) and without (n = 20) impaired sexual functioning completed a battery of validated measurements assessing relational intimacy, sexual dysfunction, sexual frequency, satisfaction, and distress. Intimacy measurements were combined using latent factor scores before analysis. Bias-corrected mediation models of the indirect effect were used to test mediation effects. Moderated mediation models examined whether indirect effects were influenced by age and marital duration. Patients completed the Female Sexual Function Index, the Couple's Satisfaction Index, the Sexual Satisfaction Scale for Women, the Inclusion of the Other in the Self Scale, and the Miller Social Intimacy Test. Mediation models showed that impaired sexual functioning is associated with all sexual outcomes directly and indirectly through relational intimacy. Results were predominantly independent of age and marital duration. Findings have important treatment implications for modifying interventions to focus on enhancing relational intimacy to improve the sexual functioning of women with impaired sexual functioning. The importance of the role relational intimacy plays in broad sexual outcomes of women with impaired sexual functioning is supported in clinically referred and married women. Latent factor scores to improve estimation of study constructs and the use of contemporary mediation analysis also are

  6. Effects of coagulation temperature on measurements of complement function in serum samples from patients with systemic lupus erythematosus

    DEFF Research Database (Denmark)

    Baatrup, G; Sturfelt, G; Junker, A

    1992-01-01

    Blood samples from 15 patients with systemic lupus erythematosus (SLE) and 15 healthy blood donors were allowed to coagulate for one hour at room temperature, followed by one hour at 4 or 37 degrees C. The complement activity of the serum samples was assessed by three different functional assays...

  7. Influence of secular trends and sample size on reference equations for lung function tests.

    Science.gov (United States)

    Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S

    2011-03-01

The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and to establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fit for forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC as functions of age, height and sex was derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed, and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV(1) and FVC from the overall mean: 0.30 to -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values and avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.
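The sample-size threshold can be illustrated with a small simulation: the mean z-score of a validation subset of size n scatters around zero with standard deviation 1/√n, so around 150 subjects per sex brings sampling error of the mean down to roughly 0.08 z-scores. All numbers here are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_z_spread(n, n_rep=5000):
    """Empirical SD of the mean z-score across repeated validation
    subsets of size n drawn from the reference population.
    Theory: 1/sqrt(n)."""
    return rng.standard_normal((n_rep, n)).mean(axis=1).std()

spread_20 = mean_z_spread(20)    # theory: 1/sqrt(20)  ~ 0.22 z-scores
spread_150 = mean_z_spread(150)  # theory: 1/sqrt(150) ~ 0.08 z-scores
```

A 20-subject validation sample can thus differ from the reference mean by a fifth of a z-score through sampling error alone, comparable to the largest inter-dataset differences reported above.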

  8. Typology of perceived family functioning in an American sample of patients with advanced cancer.

    Science.gov (United States)

    Schuler, Tammy A; Zaider, Talia I; Li, Yuelin; Hichenberg, Shira; Masterson, Melissa; Kissane, David W

    2014-08-01

    Poor family functioning affects psychosocial adjustment and the occurrence of morbidity following bereavement in the context of a family's coping with advanced cancer. Family functioning typologies assist with targeted family-centered assessment and intervention to offset these complications in the palliative care setting. Our objective was to identify the number and nature of potential types in an American palliative care patient sample. Data from patients with advanced cancer (N = 1809) screened for eligibility for a larger randomized clinical trial were used. Cluster analyses determined whether patients could be classified into clinically meaningful and coherent groups, based on similarities in their perceptions of family functioning across the cohesiveness, expressiveness, and conflict resolution subscales of the Family Relations Index. Patients' reports of perceived family functioning yielded a model containing five meaningful family types. Cohesiveness, expressiveness, and conflict resolution appear to be useful dimensions by which to classify patient perceptions of family functioning. "At risk" American families may include those we have called hostile, low-communicating, and less-involved. Such families may benefit from adjuvant family-centered psychosocial services, such as family therapy. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  9. Subspace Analysis of Indoor UWB Channels

    Directory of Open Access Journals (Sweden)

    Rachid Saadane

    2005-03-01

This work aims at characterizing the second-order statistics of indoor ultra-wideband (UWB) channels using channel sounding techniques. We present measurement results for different scenarios conducted in a laboratory setting at Institut Eurécom. The analysis is based on an eigendecomposition of the channel autocovariance matrix, which allows the growth in the number of significant degrees of freedom of the channel process to be analyzed as a function of the signaling bandwidth, as well as the statistical correlation between different propagation paths. We show empirical eigenvalue distributions as a function of the signal bandwidth for both line-of-sight and non-line-of-sight situations. Furthermore, we give examples where paths from different propagation clusters (possibly arising from reflection or diffraction) show strong statistical dependence.
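The growth of significant degrees of freedom with bandwidth can be sketched generically: build the autocovariance matrix of an ideally band-limited process observed over a finite window, eigendecompose it, and count eigenvalues above a threshold. This is a toy model (sinc autocovariance, hypothetical bandwidths), not the measured Eurécom channels:

```python
import numpy as np

def significant_dof(bandwidth, duration, n=400, threshold=0.01):
    """Eigendecompose the autocovariance matrix of an ideally band-limited
    process (sinc autocovariance, cutoff +/- bandwidth) sampled over a
    finite window, and count eigenvalues above a fraction of the largest.
    The count grows roughly like 2 * bandwidth * duration."""
    t = np.linspace(0.0, duration, n)
    tau = t[:, None] - t[None, :]
    R = np.sinc(2.0 * bandwidth * tau)    # np.sinc(x) = sin(pi x)/(pi x)
    eig = np.linalg.eigvalsh(R)[::-1]     # sort descending
    return int(np.sum(eig > threshold * eig[0]))

dof_narrow = significant_dof(bandwidth=1.0, duration=5.0)  # 2WT = 10
dof_wide = significant_dof(bandwidth=4.0, duration=5.0)    # 2WT = 40
```

The eigenvalue "plunge" after roughly 2WT significant terms is the classical dimensionality result for band-limited processes; empirical channel autocovariances deviate from it according to the propagation environment.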

  10. Traumatic stress and psychological functioning in a South African adolescent community sample

    Directory of Open Access Journals (Sweden)

    Karl D. Swain

    2017-03-01

Background: Traumatic stress may arise from various incidents, often leading to posttraumatic stress disorder (PTSD). The lifetime prevalence of PTSD is estimated at 1%–2% in Western Europe, 6%–9% in North America and just over 10% in countries exposed to long-term violence. In South Africa, the lifetime prevalence of PTSD in the general population is estimated at 2.3%. Aim: To examine the prevalence of posttraumatic stress symptomatology and related psychological functioning in a community sample of adolescents. Setting: Low-socioeconomic communities in KwaZulu-Natal. Methods: Home interviews with adolescents and their maternal caregivers were used to collect the data using standardised instruments. Adolescents completed the Trauma Symptom Checklist for Children, the Children's Depression Inventory, the Children's Somatization Inventory and the Revised Children's Manifest Anxiety Scale. The Child Behaviour Checklist was completed by the caregivers. The sample comprised Grade 7 (n = 256) and Grade 10 (n = 68) learners. Sixty-five percent of the sample was female, and ages ranged from 9 to 18 (M = 13.11, s.d. = 1.54). Results: Almost 6% of the sample endorsed PTSD and an additional 4% of the participants had clinically significant traumatic stress symptomatology. There was a significant, large, positive correlation between posttraumatic stress and anxiety, and medium positive correlations between posttraumatic stress and both depression and somatic symptoms. Conclusion: Posttraumatic stress symptomatology can be debilitating, often co-occurring with symptoms of depression, anxiety and somatic complications. This may lead to long-term academic, social and emotional consequences in this vulnerable group.

  11. Cannabis use and neurocognitive functioning in a non-clinical sample of users.

    Science.gov (United States)

    Thames, April D; Arbid, Natalie; Sayegh, Philip

    2014-05-01

With the recent debates over marijuana legalization and increases in use, it is critical to examine its role in cognition. While many studies generally support the adverse acute effects of cannabis on neurocognition, the non-acute effects remain less clear. The current study used a cross-sectional design to examine relationships between recent and past cannabis use and neurocognitive functioning in a non-clinical adult sample. One hundred and fifty-eight participants were recruited through fliers distributed around local college campuses and the community. All participants completed the Brief Drug Use History Form, the Structured Clinical Interview for DSM-IV Disorders, and a neurocognitive assessment, and underwent urine toxicology screening. Participants consisted of recent users (n=68), past users (n=41), and non-users (n=49). Recent users demonstrated significantly poorer performance than past users and non-users. Frequency of cannabis use in the last 4 weeks was negatively associated with global neurocognitive performance and all individual cognitive domains. Similarly, amount of daily cannabis use was negatively associated with global neurocognitive performance and individual cognitive domains. Our results support the widespread adverse effects of cannabis use on neurocognitive functioning. Although some of these adverse effects appear to attenuate with abstinence, past users' neurocognitive functioning was consistently lower than that of non-users. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Some Risk Factors of Chronic Functional Constipation Identified in a Pediatric Population Sample from Romania

    Directory of Open Access Journals (Sweden)

    Claudia Olaru

    2016-01-01

We conducted an observational study over a 1-year period, including 234 children aged 4–18 years and their caregivers, and a matching control group. 60.73% of the children in the study group were male. The average age at onset of constipation was 26.39 months. The frequency of defecation was 1/4.59 days (1/1.13 days in the control group). 38.49% of the patients in the sample group had a positive family history of functional constipation. The majority of children with functional constipation come from single-parent families, are raised by relatives, or come from orphanages. Constipated subjects had their last meal of the day at later hours and consumed fast food more frequently than the children in the control sample. We found statistically significant differences between groups regarding obesity/overweight and constipation (χ² = 104.94, df = 2, p < 0.001) and regarding physical activity and constipation (χ² = 18.419, df = 3, p < 0.001). There was a positive correlation between the number of hours spent watching television or using the computer and the occurrence of the disease (F = 92.162, p < 0.001, 95% CI). Children from broken families, with a positive family history, defective dietary habits, obesity and sedentary behavior are at higher risk of developing chronic functional constipation.

  13. Mediators of the relationship between life events and memory functioning in a community sample of adults

    NARCIS (Netherlands)

    Korten, N.C.M.; Sliwinski, M.J.; Comijs, H.C.; Smyth, J.M.

    2014-01-01

    The present study examines the association of frequency and severity of life events with memory functioning in a community sample of adults. We tested the hypothesis that stress-related cognitive interference mediated the effects of recent life events on cognition, in addition to examining the

  14. Effects of assisted training with neurofeedback on EEG measures, executive function and mood in a healthy sample

    Directory of Open Access Journals (Sweden)

    Milena Vasquez

    2015-01-01

Training in neurofeedback (NF) consists of teaching individuals to modify, adjust and enhance their brain activity pattern. The aim of our research was to evaluate the effect of this training on cognitive processes, specifically executive function, and on mood in a non-clinical sample. A sample of 30 female college students was assigned to three groups: RH: right hemisphere (n = 10), LH: left hemisphere (n = 10) and control (n = 10). Dominance of the beta pattern and inhibition of the theta pattern were trained in a single session. Measures of executive function (Iowa Gambling Test) and mood questionnaires were administered pre- and post-training. We found that NF training produced significant positive changes in executive performance in the RH group. The EEG also showed a tendency toward an improved beta rhythm after training. Additionally, significant correlations were found between executive performance and negative mood in relation to the theta frequency band. We conclude that the protocol seems effective in enhancing some aspects of executive function, and that decreasing theta power improves negative mood.

  15. Varying Associations Between Body Mass Index and Physical and Cognitive Function in Three Samples of Older Adults Living in Different Settings.

    Science.gov (United States)

    Kiesswetter, Eva; Schrader, Eva; Diekmann, Rebecca; Sieber, Cornel Christian; Volkert, Dorothee

    2015-10-01

The study investigates variations in the associations between body mass index (BMI) and (a) physical and (b) cognitive function across three samples of older adults living in different settings, and determines whether the association between BMI and physical function is confounded by cognitive abilities. One hundred ninety-five patients of a geriatric day hospital, 322 persons receiving home care (HC), and 183 nursing home (NH) residents were examined regarding BMI, cognitive function (Mini-Mental State Examination), and physical function (Barthel Index for activities of daily living). Differences in Mini-Mental State Examination and activities of daily living scores between BMI groups were examined. Impairments increased from the geriatric day hospital over the HC to the NH sample, whereas prevalence rates of obesity and severe obesity (35%, 33%, 25%) decreased. In geriatric day hospital patients, cognitive and physical function did not differ between BMI groups. In the HC and NH samples, cognitive abilities were highest in obese and severely obese subjects. Unadjusted mean activities of daily living scores differed between BMI groups in HC receivers (51.6±32.2, 61.8±26.1, 67.5±28.3, 72.0±23.4, 66.2±24.2, p = .002) and NH residents (35.6±28.6, 48.1±25.7, 39.9±28.7, 50.8±24.0, 57.1±28.2, p = .029). In both samples, significance was lost after adjustment, indicating cognitive function as the dominant confounder. In older adults, the associations between BMI and physical and cognitive function depended on the health and care status corresponding to the setting. In the HC and NH samples, cognitive status, as measured by the Mini-Mental State Examination, emerged as an important confounder in the association between BMI and physical function. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Deep versus periventricular white matter lesions and cognitive function in a community sample of middle-aged participants.

    Science.gov (United States)

    Soriano-Raya, Juan José; Miralbell, Júlia; López-Cancio, Elena; Bargalló, Núria; Arenillas, Juan Francisco; Barrios, Maite; Cáceres, Cynthia; Toran, Pere; Alzamora, Maite; Dávalos, Antoni; Mataró, Maria

    2012-09-01

The association of cerebral white matter lesions (WMLs) with cognitive status is not well understood in middle-aged individuals. Our aim was to determine the specific contribution of periventricular hyperintensities (PVHs) and deep white matter hyperintensities (DWMHs) to cognitive function in a community sample of asymptomatic participants aged 50 to 65 years. One hundred stroke- and dementia-free adults completed a comprehensive neuropsychological battery and brain MRI protocol. Participants were classified according to PVH and DWMH scores (Fazekas scale). We dichotomized our sample into low grade WMLs (participants without or with mild lesions) and high grade WMLs (participants with moderate or severe lesions). Analyses were performed separately in PVH and DWMH groups. High grade DWMHs were associated with significantly lower scores in executive functioning (-0.45 standard deviations [SD]), attention (-0.42 SD), verbal fluency (-0.68 SD), visual memory (-0.52 SD), visuospatial skills (-0.79 SD), and psychomotor speed (-0.46 SD). Further analyses revealed that high grade DWMHs were also associated with a three- to fourfold increased risk of impaired scores (i.e., more than 1.5 SD below the mean) in executive functioning, verbal fluency, visuospatial skills, and psychomotor speed. Our findings suggest that only DWMHs, not PVHs, are related to diminished cognitive function in middle-aged individuals. (JINS, 2012, 18, 1-12).

  17. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    DEFF Research Database (Denmark)

    Abma, Femke I.; Bültmann, Ute; Amick, Benjamin C.

    2017-01-01

Objective: The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person’s health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands...

  18. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    NARCIS (Netherlands)

    Abma, F.I.; Bultmann, U.; Amick III, B.C.; Arends, I.; Dorland, P.A.; Flach, P.A.; Klink, J.J.L van der; Ven H.A., van de; Bjørner, J.B.

    2017-01-01

Objective The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person’s health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with

  19. Performance of a Brazilian sample on the Portuguese translation of the BNI Screen for Higher Cerebral Functions.

    Science.gov (United States)

    Prigatano, George P; Souza, Lígia M N; Braga, Lucia W

    2018-03-01

    The Barrow Neurological Institute (BNI) Screen for Higher Cerebral Functions (BNIS) has been translated into several languages and found useful in evaluating multiple domains of cognitive and affective dysfunction, particularly in neuro-rehabilitation settings. Normative data from countries with high literacy rates have reported strikingly similar mean level of performance scores on this test, with age typically correlating higher with total score performance than education. In the present study, we obtain convenience sample normative data from a native Brazilian population on a Portuguese translation of the BNIS (i.e., BNIS-PT). The BNIS was translated into Portuguese by two native speaking Portuguese neuropsychologists who were also fluent in English. It was then administered to 201 normally functioning native Brazilian individuals who varied considerably in age and formal educational training. The mean BNIS total score was similar to what previous studies reported, but primarily in younger adults with at least 12 years of formal education. In this Brazilian sample, the correlation of educational level and BNIS total score was r = .68, p < .001. The correlation of age and BNIS total score was r = -.36, p < .001. This is the opposite pattern to that observed in previous standardization studies. The strong correlation of education with performance in various subtests was observed in all age groups (ages ranging from 15 to 85 years). This standardization study provides guidelines for calculating expected average performance levels on the BNIS-PT for Brazilian individuals with varying degrees of age and education. Educational level positively correlated with test performance on the BNIS-PT and was repeatedly observed to overshadow the effects of age, suggesting its important role in the development of higher cerebral functions in multiple domains in a Brazilian sample of normally functioning individuals.

  20. Statistical distribution sampling

    Science.gov (United States)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
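The record above is terse; as a concrete illustration of determining a statistic's distribution by sampling (a sketch for illustration, not code from the record itself), the following Python snippet builds the Monte Carlo sampling distribution of the sample variance and compares it with the known chi-square scaling:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, reps = 2.0, 10, 200_000

# draw `reps` independent samples of size n and compute the statistic on each
samples = rng.normal(0.0, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)      # unbiased sample variance per sample

# empirical sampling distribution vs. theory: (n-1)S^2/sigma^2 ~ chi^2_{n-1},
# hence E[S^2] = sigma^2 and Var[S^2] = 2*sigma^4/(n-1)
emp_mean, emp_var = s2.mean(), s2.var()
```

For normal data the empirical mean and variance of `s2` should approach sigma² and 2·sigma⁴/(n−1), so the simulated distribution can be checked against the closed form.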

  1. Radiology compared with xenon—133 scanning and bronchoscopic lobar sampling as methods for assessing regional lung function in patients with emphysema

    Science.gov (United States)

    Barter, C. E.; Hugh-Jones, P.; Laws, J. W.; Crosbie, W. A.

    1973-01-01

Regional lung function was assessed by radiographic methods, by regional function studies using xenon-133 scans, and by lobar sampling with a mass spectrometer flow-meter at bronchoscopy in 12 patients who subsequently had bullae resected at operation. The information given by these three methods of regional assessment was subsequently compared with the findings at operation. When only one lobe was abnormal on the radiographs, these alone were adequate to locate the major site of the emphysema and the regional tests gave relatively little extra information. The xenon scan was sometimes helpful in assessing the state of the remaining lung, but this information could be deduced from the radiographs and overall lung function tests, especially the carbon monoxide transfer and mechanical measurements. Bronchoscopic sampling was helpful in determining whether the affected lobe was acting as a ventilated dead-space. When more than one lobe was affected the regional function tests supplemented the radiographs in defining the site of bullous change as well as locating dead space. Xenon scans, although widely employed for such preoperative assessments, added little to the topographical information obtained by careful radiology. The combination of radiology, lobar sampling, and overall function tests is recommended for assessing which emphysematous patients are likely to benefit from surgery. PMID:4685209

  2. Assessment of crystalline disorder in cryo-milled samples of indomethacin using atomic pair-wise distribution functions

    DEFF Research Database (Denmark)

    Bøtker, Johan P; Karmwar, Pranav; Strachan, Clare J

    2011-01-01

The aim of this study was to investigate the usefulness of the atomic pair-wise distribution function (PDF) to detect the extent of disorder/amorphousness induced into a crystalline drug by cryo-milling, and to determine the optimal milling times to achieve amorphisation. The PDF was used to analyse the cryo-milled samples. The high similarity between the γ-indomethacin cryogenic ball-milled samples and the crude γ-indomethacin indicated that the milled samples retained residual order of the γ-form. The PDF analysis was capable of achieving a correlation with the physical properties determined from DSC, ss-NMR and stability experiments. Multivariate data analysis (MVDA) was used to visualize the differences in the PDF and XRPD data. The MVDA approach revealed that PDF is more efficient than XRPD in assessing the degree of disorder introduced into γ-indomethacin by cryo-milling.
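As a self-contained illustration of the distance-histogram idea underlying atomic pair-wise distribution functions (real PDF analysis is derived from total-scattering data and involves normalization and Fourier transforms, none of which is shown here), one can tabulate interatomic distances for an ideal lattice; sharp, discrete coordination shells are the signature of crystalline order that milling degrades:

```python
import numpy as np
from itertools import product

# atoms on a small ideal simple-cubic lattice, spacing a = 1.0 (toy model)
a = 1.0
coords = np.array(list(product(range(5), repeat=3)), dtype=float) * a

# all interatomic distances -- the core of a pair distribution function
diff = coords[:, None, :] - coords[None, :, :]
d = np.linalg.norm(diff, axis=-1)
d = d[np.triu_indices_from(d, k=1)]      # unique pairs only

# a histogram of distances plays the role of the (unnormalized) PDF
hist, edges = np.histogram(d, bins=200, range=(0.0, 4.0))

# first few coordination shells: a, a*sqrt(2), a*sqrt(3) for simple cubic
shells = np.unique(np.round(d, 6))[:3]
```

An amorphised sample would smear these discrete shells into broad humps, which is the qualitative change PDF analysis detects.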

  3. Large-scale prospective T cell function assays in shipped, unfrozen blood samples: experiences from the multicenter TRIGR trial.

    Science.gov (United States)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J; Girgis, Rose; Palmer, Jerry P; Cuthbertson, David; Krischer, Jeffrey P; Dosch, Hans-Michael

    2014-02-01

    Broad consensus assigns T lymphocytes fundamental roles in inflammatory, infectious, and autoimmune diseases. However, clinical investigations have lacked fully characterized and validated procedures, equivalent to those of widely practiced biochemical tests with established clinical roles, for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities of the samples and the unstimulated cell counts in the viable samples. Also, subject age was significantly associated with the number of unstimulated cells and T cell proliferation to positive activators. Finally, we observed a pattern of statistically significant increases in T cell responses to tetanus toxin around the timing of infant vaccinations. This assay platform and shipping protocol satisfy the criteria for robust and reproducible long-term measurements of human T cell function, comparable to those of established blood biochemical tests. We present a stable technology for prospective disease-relevant T cell analysis in immunological diseases, vaccination medicine, and measurement of herd immunity.

  4. Evaluation of Multiple-Sampling Function used with a Microtek flatbed scanner for Radiation Dosimetry Calibration of EBT2 Film

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Liyun [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung 82445, Taiwan (China); Ho, Sheng-Yow [Department of Nursing, Chang Jung Christian University, Tainan 71101, Taiwan (China); Department of Radiation Oncology, Chi Mei Medical Center, Liouying, Tainan 73657, Taiwan (China); Ding, Hueisch-Jy [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung 82445, Taiwan (China); Hwang, Ing-Ming [Department of Medical Imaging and Radiology, Shu Zen College of Medicine and Management, Kaohsiung 82144, Taiwan (China); Chen, Pang-Yu, E-mail: pangyuchen@yahoo.com.tw [Department of Radiation Oncology, Sinlau Christian Hospital, Tainan 70142, Taiwan (China); Lee, Tsair-Fwu, E-mail: tflee@kuas.edu.tw [Medical Physics and Informatics Laboratory, Department of Electronics Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 80778, Taiwan (China)

    2016-10-01

The radiochromic EBT2 film is a widely used quality assurance device for radiation therapy. This study evaluated the film calibration performance of the multiple-sampling function, a function of the ScanWizard Pro scanning software provided by the manufacturer, when used with the Microtek 9800XL plus (9800XL⁺) flatbed scanner. Using the PDD method, each of eight EBT2 films, four delivered 290 monitor units (MU) and four 88 MU via 6-MV photon beams, was tightly sandwiched in a 30³-cm³ water-equivalent polystyrene phantom prior to irradiation. Before and after irradiation, all films were scanned using the Microtek 9800XL⁺ scanner with five different modes of the multiple-sampling function, which generates an image averaged over multiple samplings. The net optical densities (netOD) on the beam central axis of each film were assigned to corresponding depth doses for calibration. For each sampling mode and either delivered MU, the depth-dose uncertainty of a single film from repeated scans and that of a single scan of the four films were analyzed. Finally, the calibration error and the combined calibration uncertainty between film-determined depth doses and delivered depth doses were calculated and evaluated for each sampling mode. All standard deviations and the calibration error were shown to be unrelated to the number of sampling lines. The calibration error of the 2-line and 16-line modes was within 3 cGy and better than that of the other modes. The combined uncertainty of the 2-line mode was the lowest, generally less than 6 cGy except for delivered doses around 100 cGy. The evaluation described herein revealed that EBT2 film calibrated with the 2-line mode has relatively lower error, scanning time, and combined uncertainty. Therefore, it is recommended for routine EBT2 film calibration and verification of treatment plans.

  5. Evaluation of Multiple-Sampling Function used with a Microtek flatbed scanner for Radiation Dosimetry Calibration of EBT2 Film

    International Nuclear Information System (INIS)

    Chang, Liyun; Ho, Sheng-Yow; Ding, Hueisch-Jy; Hwang, Ing-Ming; Chen, Pang-Yu; Lee, Tsair-Fwu

    2016-01-01

The radiochromic EBT2 film is a widely used quality assurance device for radiation therapy. This study evaluated the film calibration performance of the multiple-sampling function, a function of the ScanWizard Pro scanning software provided by the manufacturer, when used with the Microtek 9800XL plus (9800XL⁺) flatbed scanner. Using the PDD method, each of eight EBT2 films, four delivered 290 monitor units (MU) and four 88 MU via 6-MV photon beams, was tightly sandwiched in a 30³-cm³ water-equivalent polystyrene phantom prior to irradiation. Before and after irradiation, all films were scanned using the Microtek 9800XL⁺ scanner with five different modes of the multiple-sampling function, which generates an image averaged over multiple samplings. The net optical densities (netOD) on the beam central axis of each film were assigned to corresponding depth doses for calibration. For each sampling mode and either delivered MU, the depth-dose uncertainty of a single film from repeated scans and that of a single scan of the four films were analyzed. Finally, the calibration error and the combined calibration uncertainty between film-determined depth doses and delivered depth doses were calculated and evaluated for each sampling mode. All standard deviations and the calibration error were shown to be unrelated to the number of sampling lines. The calibration error of the 2-line and 16-line modes was within 3 cGy and better than that of the other modes. The combined uncertainty of the 2-line mode was the lowest, generally less than 6 cGy except for delivered doses around 100 cGy. The evaluation described herein revealed that EBT2 film calibrated with the 2-line mode has relatively lower error, scanning time, and combined uncertainty. Therefore, it is recommended for routine EBT2 film calibration and verification of treatment plans.

  6. Adaptive sampling based on the cumulative distribution function of order statistics to delineate heavy-metal contaminated soils using kriging

    International Nuclear Information System (INIS)

    Juang, K.-W.; Lee, D.-Y.; Teng, Y.-L.

    2005-01-01

    Correctly classifying 'contaminated' areas in soils, based on the threshold for a contaminated site, is important for determining effective clean-up actions. Pollutant mapping by means of kriging is increasingly being used for the delineation of contaminated soils. However, those areas where the kriged pollutant concentrations are close to the threshold have a high possibility for being misclassified. In order to reduce the misclassification due to the over- or under-estimation from kriging, an adaptive sampling using the cumulative distribution function of order statistics (CDFOS) was developed to draw additional samples for delineating contaminated soils, while kriging. A heavy-metal contaminated site in Hsinchu, Taiwan was used to illustrate this approach. The results showed that compared with random sampling, adaptive sampling using CDFOS reduced the kriging estimation errors and misclassification rates, and thus would appear to be a better choice than random sampling, as additional sampling is required for delineating the 'contaminated' areas. - A sampling approach was derived for drawing additional samples while kriging

  7. A simple method for measurement of cerebral blood flow using 123I-IMP SPECT with calibrated standard input function by one point blood sampling. Validation of calibration by one point venous blood sampling as a substitute for arterial blood sampling

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Akaizawa, Takashi; Goto, Ryoui

    1994-01-01

In a simplified method for measuring cerebral blood flow using one ¹²³I-IMP SPECT scan and one-point arterial blood sampling (autoradiography method), the input function is obtained by calibrating a standard input function with one-point arterial blood sampling. The purpose of this study was to validate calibration by one-point venous blood sampling as a substitute for one-point arterial blood sampling. After intravenous infusion of ¹²³I-IMP, frequent arterial and venous blood samples were drawn simultaneously from 12 patients with CNS disease without any heart or lung disease and 5 normal volunteers. The ratios of radioactivity in venous whole blood obtained from the cutaneous cubital vein to that in arterial whole blood were 0.76±0.08, 0.80±0.05, 0.81±0.06, and 0.83±0.11 at 10, 20, 30, and 50 min after ¹²³I-IMP infusion, respectively; venous blood radioactivity was consistently about 20% lower than arterial radioactivity throughout the 50 min. However, the ratios obtained from the cutaneous dorsal hand vein to artery were 0.93±0.02, 0.94±0.05, 0.98±0.04, and 0.98±0.03 at 10, 20, 30, and 50 min after ¹²³I-IMP infusion, respectively; here venous radioactivity was consistent with arterial values. These results indicate that the arterio-venous difference in radioactivity in a peripheral cutaneous vein such as the dorsal hand vein is minimal, owing to arteriovenous shunts in the palm. Therefore, blood sampling from the cutaneous dorsal hand vein can substitute for arterial sampling. The optimal time for venous blood sampling, evaluated by error analysis, was 20 min after ¹²³I-IMP infusion, which is 10 min later than that for arterial blood sampling. (author)

  8. Parent-child agreement on the Behavior Rating Inventory of Executive Functioning (BRIEF) in a community sample of adolescents.

    Science.gov (United States)

    Egan, Kaitlyn N; Cohen, L Adelyn; Limbers, Christine

    2018-03-06

Despite its widespread use, little is known regarding the agreement between parent and youth ratings of youths' executive functioning on the Behavior Rating Inventory of Executive Functioning (BRIEF) in typically developing youth. The present study examined parent-child agreement on the BRIEF with a community sample of adolescents and their parents. Ninety-seven parent-child dyads (mean age = 13.91 years; SD = 0.52) completed the BRIEF self- and parent-report forms and a demographic questionnaire. Intraclass Correlation Coefficients (ICCs) and paired-sample t-tests were used to evaluate agreement between self- and parent-reports on the BRIEF. Total sample ICCs indicated moderate to good parent-child agreement (0.46-0.68). Parents from the total sample reported significantly higher mean T-scores for their adolescents on Inhibit, Working Memory, Planning/Organization, Behavioral Regulation Index (BRI), Metacognition Index, and Global Executive Composite. Differences were found in regard to gender and race/ethnicity: ICCs were higher between parent-girl dyads on the scales that comprise the BRI than between parent-boy dyads. Parent-adolescent ICCs were also higher on Emotional Control for adolescents who self-identified as White in comparison to those who identified as Non-White/Mixed Race. These findings suggest gender and racial/ethnic differences should be considered when examining parent-child agreement on the BRIEF in typically developing adolescents.

  9. Assessment of left ventricular function and mass by MR imaging: a stereological study based on the systematic slice sampling procedure.

    Science.gov (United States)

    Mazonakis, Michalis; Sahin, Bunyamin; Pagonidis, Konstantin; Damilakis, John

    2011-06-01

The aim of this study was to combine the stereological technique with magnetic resonance (MR) imaging data for the volumetric and functional analysis of the left ventricle (LV). Cardiac MR examinations were performed in 13 consecutive subjects with known or suspected coronary artery disease. The end-diastolic volume (EDV), end-systolic volume, ejection fraction (EF), and mass were estimated by stereology using the entire slice set depicting the LV and systematic sampling intensities of 1/2 and 1/3, which provided samples with every second and third slice, respectively. The repeatability of stereology was evaluated. Stereological assessments were compared with the reference values derived by manually tracing the endocardial and epicardial contours on MR images. Stereological EDV and EF estimations obtained by the 1/3 systematic sampling scheme were significantly different from those by manual delineation (P < .05), but not those obtained by the sampling intensity of 1/2 (P > .05). For these stereological approaches, a high correlation (r² = 0.80-0.93) and clinically acceptable limits of agreement were found with the reference method. Stereological estimations obtained by both sample sizes presented comparable coefficient of variation values of 2.9-5.8%. The mean time for stereological measurements on the entire slice set was 3.4 ± 0.6 minutes, which was reduced to 2.5 ± 0.5 minutes with the 1/2 systematic sampling scheme. Stereological analysis on systematic samples of MR slices generated by the 1/2 sampling intensity provided an efficient and quick assessment of LV volumes, function, and mass. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
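The systematic slice sampling used here follows the Cavalieri principle: volume is estimated as slice spacing times the summed cross-sectional areas. A minimal sketch on a synthetic sphere (not MR data) shows why a 1/2 systematic sample with doubled weight remains accurate:

```python
import numpy as np

r, t = 5.0, 0.25                      # sphere radius, slice thickness
z = np.arange(-r + t / 2, r, t)       # slice mid-planes through the sphere
areas = np.pi * (r**2 - z**2)         # cross-sectional area of each slice

# Cavalieri estimator on the full slice set: V ~ t * sum of areas
v_full = t * areas.sum()

# systematic 1/2 sampling: every second slice, random start, weight 2t
start = np.random.default_rng(0).integers(2)
v_half = 2 * t * areas[start::2].sum()

v_true = 4 / 3 * np.pi * r**3         # analytic sphere volume for reference
```

Both estimates stay within a couple of percent of the true volume, mirroring the paper's finding that the 1/2 scheme loses little accuracy while halving the measurement work.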

  10. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
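The record's path-correlated scheme is specific to SC-IVR, but the underlying idea of importance sampling is generic: draw from a proposal concentrated where the integrand matters and reweight by the likelihood ratio. A standard toy example (assumed for illustration, not from the paper) estimates a Gaussian tail probability:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500_000

# target: p = P(X > 4) for X ~ N(0,1); naive sampling almost never hits the tail
naive = (rng.normal(size=n) > 4).mean()

# importance sampling: propose from q = N(4,1), centered in the tail,
# and reweight each draw by the likelihood ratio p(x)/q(x)
x = rng.normal(4.0, 1.0, size=n)
w = norm.pdf(x) / norm.pdf(x, loc=4.0)
est = np.mean((x > 4) * w)

exact = norm.sf(4.0)                  # closed-form tail probability
```

The naive estimate is dominated by a handful of lucky draws, while the importance-sampled estimate lands within a fraction of a percent of the exact value with the same budget.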

  11. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
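Since this collection centres on sample autocovariance functions, a generic sketch of the standard biased estimator may be useful here (illustrative only, not tied to the GRACE quaternion data):

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance gamma_hat(h) for h = 0..max_lag.

    gamma_hat(h) = (1/n) * sum_t (x_t - xbar) * (x_{t+h} - xbar);
    the 1/n normalization keeps the estimated sequence positive semidefinite.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.dot(xc[: n - h], xc[h:]) / n for h in range(max_lag + 1)])

# sanity check on a simulated AR(1) process x_t = phi*x_{t-1} + e_t,
# for which gamma(h)/gamma(0) = phi**h and gamma(0) = 1/(1 - phi**2)
rng = np.random.default_rng(0)
phi, n = 0.6, 200_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

acvf = sample_autocovariance(x, 3)
```

On a long AR(1) series, `acvf[0]` approaches 1/(1 − phi²) and the lag-1 autocorrelation `acvf[1]/acvf[0]` approaches phi.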

  12. An attempt at solving the problem of autocorrelation associated with use of mean approach for pooling cross-section and time series in regression modelling

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1990-12-01

    The paradoxical nature of results of the mean approach in pooling cross-section and time series data has been identified to be caused by the presence in the normal equations of phenomena such as autocovariances, multicollinear covariances, drift covariances and drift multicollinear covariances. This paper considers the problem of autocorrelation and suggests ways of solving it. (author). 4 refs

  13. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  14. Estimating the residential demand function for natural gas in Seoul with correction for sample selection bias

    International Nuclear Information System (INIS)

    Yoo, Seung-Hoon; Lim, Hea-Jin; Kwak, Seung-Jun

    2009-01-01

Over the last twenty years, the consumption of natural gas in Korea has increased dramatically. This increase has mainly resulted from the rise of consumption in the residential sector. The main objective of the study is to estimate households' demand function for natural gas by applying a sample selection model using data from a survey of households in Seoul. The results show that there exists a selection bias in the sample and that failure to correct for sample selection bias distorts the mean estimate of the demand for natural gas downward by 48.1%. In addition, according to the estimation results, the size of the house, the dummy variable for dwelling in an apartment, the dummy variable for having a bed in an inner room, and the household's income all have positive relationships with the demand for natural gas. On the other hand, the size of the family and the price of gas negatively contribute to the demand for natural gas. (author)
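A minimal sketch of the sample-selection logic, on synthetic data with the selection index taken as known for brevity (the study estimates the full selection model; all variable names here are hypothetical), shows how ignoring selection biases the estimate and how adding the inverse Mills ratio corrects it:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

x = rng.normal(size=n)                 # regressor in the outcome equation
z = rng.normal(size=n)                 # variable driving selection only
rho = 0.8                              # error correlation -> selection bias
u = rng.normal(size=n)
e_sel = u
e_out = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)

sel_index = 0.5 + z                    # selection equation (known here)
observed = sel_index + e_sel > 0       # outcome seen only for these units
y = 2.0 + 1.5 * x + e_out              # true outcome equation

# naive OLS on the observed subsample: the intercept is biased upward
X = np.column_stack([np.ones(n), x])
beta_naive, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)

# Heckman-style correction: add the inverse Mills ratio as a regressor
imr = norm.pdf(sel_index) / norm.cdf(sel_index)
Xh = np.column_stack([np.ones(n), x, imr])
beta_heck, *_ = np.linalg.lstsq(Xh[observed], y[observed], rcond=None)
```

With correlated errors, E[e_out | selected] = rho · λ(sel_index), so the corrected regression recovers the true intercept 2.0 and slope 1.5, and the coefficient on the Mills ratio estimates rho.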

  15. The Relationship between Theory of Mind and Executive Function in a Sample of Children from Mainland China

    Science.gov (United States)

    Yang, Juan; Zhou, Shijie; Yao, Shuqiao; Su, Linyan; McWhinnie, Chad

    2009-01-01

    To explore the relationship between theory of mind (ToM) and executive function (EF) in a sample of individuals from mainland China, 20 children with autism spectrum disorders (ASD), 26 children with Attention Deficit Hyperactivity Disorder (ADHD), and 30 normal control subjects were compared on two batteries of ToM tasks and EF tasks. Children…

  16. RNAdualPF: software to compute the dual partition function with sample applications in molecular evolution theory.

    Science.gov (United States)

    Garcia-Martin, Juan Antonio; Bayegan, Amir H; Dotu, Ivan; Clote, Peter

    2016-10-19

RNA inverse folding is the problem of finding one or more sequences that fold into a user-specified target structure s₀, i.e. whose minimum free energy secondary structure is identical to the target s₀. Here we consider the ensemble of all RNA sequences that have low free energy with respect to a given target s₀. We introduce the program RNAdualPF, which computes the dual partition function Z*, defined as the sum of Boltzmann factors exp(−E(a, s₀)/RT) over all RNA nucleotide sequences a compatible with the target structure s₀. Using RNAdualPF, we efficiently sample RNA sequences that approximately fold into s₀, where additionally the user can specify IUPAC sequence constraints at certain positions, and whether to include dangles (energy terms for stacked, single-stranded nucleotides). Moreover, since we also compute the dual partition function Z*(k) over all sequences having GC-content k, the user can require that all sampled sequences have a precise, specified GC-content. Using Z*, we compute the dual expected energy ⟨E*⟩, and use it to show that natural RNAs from the Rfam 12.0 database have higher minimum free energy than expected, thus suggesting that functional RNAs are under evolutionary pressure to be only marginally thermodynamically stable. We show that C. elegans precursor microRNA (pre-miRNA) is significantly non-robust with respect to mutations, by comparing the robustness of each wild-type pre-miRNA sequence with 2000 [resp. 500] sequences of the same GC-content generated by RNAdualPF, which approximately [resp. exactly] fold into the wild-type target structure. We confirm and strengthen earlier findings that precursor microRNAs and bacterial small noncoding RNAs display plasticity, a measure of structural diversity. We describe RNAdualPF, which rapidly computes the dual partition function Z* and samples sequences having low energy with respect to a target structure, allowing sequence constraints and specified GC-content.

  17. Visualizing Influential Observations in Dependent Data

    KAUST Repository

    Genton, Marc G.

    2010-01-01

We introduce the hair-plot to visualize influential observations in dependent data. It consists of all trajectories of the value of an estimator when each observation is modified in turn by an additive perturbation. We define two measures of influence: the local influence, which describes the rate of departure from the original estimate due to a small perturbation of each observation; and the asymptotic influence, which indicates the influence on the original estimate of the most extreme contamination for each observation. The cases of estimators defined as quadratic forms or ratios of quadratic forms are investigated in detail. Sample autocovariances, covariograms, and variograms belong to the first case. Sample autocorrelations, correlograms, and indices of spatial autocorrelation such as Moran's I belong to the second case. We illustrate our approach on various datasets from time series analysis and spatial statistics. This article has supplementary material online. © 2010 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  18. IRAS bright galaxy sample. II. The sample and luminosity function

    International Nuclear Information System (INIS)

Soifer, B.T.; Sanders, D.B.; Neugebauer, G.; Madore, B.F.; Danielson, G.E.

    1987-01-01

A statistically complete sample of 324 of the brightest infrared galaxies discovered at 60 microns in the IRAS all-sky survey is described. The results show that far-infrared emission is a significant luminosity component in the local universe, representing 25 percent of the luminosity emitted by stars in the same volume. Above 10¹¹ solar luminosities, the infrared luminous galaxies are the dominant population of objects in the universe, being as numerous as the Seyfert galaxies and more numerous than quasars at higher luminosities. The infrared luminosity appears to be independent of the optical luminosity of galaxies. Most infrared bright galaxies appear to require much of the interstellar matter to be contributing to the observed infrared luminosity. Approximately 60-80 percent of the far-infrared luminosity of the local universe can be attributed, directly or indirectly, to recent or ongoing star formation. 67 references

  19. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Science.gov (United States)

    Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M

    2013-01-01

Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  20. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

    Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To improve on these further, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm by 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function can achieve better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
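
    The ITA solver named in this record belongs to the iterative shrinkage-thresholding family (together with SISTA and FISTA). The following is a generic sketch of that family on a toy real-valued least-squares problem, not the paper's algorithm: a gradient step on the data-fit term followed by soft-thresholding.

```python
def soft_threshold(v, lam):
    """Proximal step for the l1 penalty: shrink v toward zero by lam."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def ista(A, y, lam=0.1, step=0.5, iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1:
    alternate a gradient step on the quadratic term with soft-thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

    For an identity measurement matrix the iteration converges to the soft-thresholded data, the known closed-form minimizer; coefficients below the threshold are driven exactly to zero, which is the mechanism that promotes sparsity in CS-MRI reconstruction.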

  1. Symptom presentation and classroom functioning in a nonclinical sample of children with social phobia.

    Science.gov (United States)

    Bernstein, Gail A; Bernat, Debra H; Davis, Andrew A; Layne, Ann E

    2008-01-01

    This study investigates symptom presentation and school functioning in a nonclinical sample of children with social phobia (SP). Forty-five children with SP were identified via school-wide screenings and follow-up diagnostic interviews. Analyses examined types and intensity of fears, number of social situations avoided, interpersonal relationships, and classroom functioning. To identify characteristics unique to social phobic children, children with SP (n = 45) were compared to anxious children without SP (n = 56) on the above variables. Comorbidity in children with SP and factors associated with SP severity were also evaluated. Compared to anxious children without SP, children with SP feared and avoided a significantly greater number of social situations. In addition, they were significantly more likely to have trouble with making friends and to prefer being alone rather than with peers. All children with SP met criteria for at least one comorbid disorder. Significant factors explaining child-reported severity of SP were number of social situations avoided and intensity of fears. Greater severity of SP was significantly associated with poorer social skills, poorer leadership skills, greater attention difficulties, and greater learning problems in the classroom. It is important to understand the symptom presentation of SP so that children with SP are identified early and effective interventions are instituted. This is especially critical given the impact of SP on school functioning. Published 2007 Wiley-Liss, Inc.

  2. A Method against Interrupted-Sampling Repeater Jamming Based on Energy Function Detection and Band-Pass Filtering

    Directory of Open Access Journals (Sweden)

    Hui Yuan

    2017-01-01

    Full Text Available Interrupted-sampling repeater jamming (ISRJ) is a new kind of coherent jamming against large time-bandwidth linear frequency modulation (LFM) signals. Many jamming modes, such as lifelike multiple false targets and dense false targets, can be produced by setting different parameters. Based on the “storage-repeater-storage-repeater” characteristic of ISRJ and the differences in the time-frequency-energy domain between the ISRJ signal and the target echo signal, a new method based on energy function detection and band-pass filtering is proposed to suppress the ISRJ. The method consists of two parts: extracting the signal segments free of ISRJ and constructing a band-pass filtering function with low sidelobes. Simulation results show that the method is effective against ISRJ with different parameters.
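
    The first part of the method described above, locating jamming-free signal segments, rests on a sliding-window energy function: stretches where the window energy stays low contain only the target echo and can be retained for filtering. A minimal sketch of that detection step (the window length, threshold, and signal values are illustrative, not from the paper):

```python
def energy_function(signal, win):
    """Sliding-window energy E[k] = sum of squared samples over a
    window of length win starting at sample k."""
    return [sum(s * s for s in signal[k:k + win])
            for k in range(len(signal) - win + 1)]

def quiet_segments(energy, threshold):
    """Window start indices whose energy falls below the threshold,
    i.e. candidate ISRJ-free stretches to retain."""
    return [k for k, e in enumerate(energy) if e < threshold]

# A burst of high-amplitude samples (the repeated jamming) stands out
# against the low-level echo in the energy trace.
signal = [0.1, 0.1, 3.0, 4.0, 0.1, 0.1]
energy = energy_function(signal, 2)
quiet = quiet_segments(energy, 1.0)
```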

  3. Evaluation of common methods for sampling invertebrate pollinator assemblages: net sampling out-perform pan traps.

    Directory of Open Access Journals (Sweden)

    Tony J Popic

    Full Text Available Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

  4. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
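
    The abstract notes that simpler Bayesian formulations are special cases of the proposed model. One such single-group special case has a closed form: under a uniform prior on the defect fraction, observing n randomly selected samples that are all acceptable yields a Beta(1, n + 1) posterior. A sketch of that special case (not the paper's two-group judgmental model):

```python
def prob_mostly_acceptable(n, p0):
    """Posterior P(defect fraction <= p0) after n random samples are all
    acceptable, under a uniform Beta(1, 1) prior: the posterior is
    Beta(1, n + 1), whose CDF is 1 - (1 - p0)**(n + 1)."""
    return 1.0 - (1.0 - p0) ** (n + 1)

def required_sample_size(p0=0.05, confidence=0.95):
    """Smallest n giving posterior probability >= confidence that at
    least a fraction (1 - p0) of the population is acceptable."""
    n = 0
    while prob_mostly_acceptable(n, p0) < confidence:
        n += 1
    return n
```

    For example, reaching 95% posterior confidence that at least 95% of items are acceptable requires 58 all-acceptable samples under this prior; the paper's two-group model refines this by letting judgmentally sampled high-risk items reduce the random-sampling burden.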

  5. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided: uniform, log-uniform (base 10 or natural), normal, lognormal (base 10 or natural), exponential, Bernoulli, and user-defined continuous distributions. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions whose usual definition has infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included.
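
    The record describes the standard pattern of a linear congruential generator for uniforms plus transformation functions for the other distributions. A minimal sketch of that pattern follows; the LCG constants and the two example distributions are illustrative, not those of BWIP-RANDOM-SAMPLING:

```python
import math

class LCG:
    """Linear congruential generator: state <- (a*state + c) mod m,
    returning uniforms in [0, 1). The constants are the common 32-bit
    Numerical Recipes choice, used here for illustration only."""
    def __init__(self, seed=1):
        self.state = seed

    def uniform(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

def exponential(u, rate):
    """Inverse-CDF transform of a uniform draw: F^-1(u) = -ln(1 - u) / rate."""
    return -math.log(1.0 - u) / rate

def log_uniform(u, lo, hi):
    """Log-uniform (base 10) on [lo, hi] via the inverse CDF."""
    return 10.0 ** (math.log10(lo) + u * (math.log10(hi) - math.log10(lo)))

rng = LCG(seed=42)
draws = [exponential(rng.uniform(), rate=2.0) for _ in range(10000)]
# The mean of Exp(rate=2) is 0.5, so the sample mean should sit near 0.5.
```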

  6. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    Science.gov (United States)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the explicit microscopic simulation with a Markov State Model avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  7. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  8. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  9. A Multidimensional Examination of the Acculturation and Psychological Functioning of a Sample of Immigrant Chinese Mothers in the US

    Science.gov (United States)

    Tahseen, Madiha; Cheah, Charissa S. L.

    2012-01-01

    The present research used the cluster analysis method to examine the acculturation of immigrant Chinese mothers (ICMs), and the demographic characteristics and psychological functioning associated with each acculturation style. The sample was comprised of 83 first-generation ICMs of preschool children residing in Maryland, United States (US).…

  10. Nonuniform sampling by quantiles

    Science.gov (United States)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic; however, higher-dimensional schedules are similar to within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license.
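
    The core step described above, dividing a weighting function into regions of equal probability, can be sketched as follows for a 1D grid. This illustrates the quantile idea only, not the QSched implementation; the exponential weighting and grid size are invented for the example:

```python
import math

def quantile_schedule(weights, n_samples):
    """Quantile-directed sampling of a Nyquist grid: normalize the
    weighting function into a CDF, split probability into n_samples
    equal regions, and take the first grid index whose CDF reaches
    each region's midpoint. Deterministic for a fixed 1D weighting."""
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    schedule, i = [], 0
    for k in range(n_samples):
        target = (k + 0.5) / n_samples      # midpoint of region k
        while cdf[i] < target:
            i += 1
        schedule.append(i)
    return sorted(set(schedule))            # duplicate indices collapse

# An exponentially decaying weight concentrates samples at early
# evolution times, where the NMR signal is strongest.
weights = [math.exp(-k / 16.0) for k in range(64)]
schedule = quantile_schedule(weights, 16)
```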

  11. On Invertible Sampling and Adaptive Security

    DEFF Research Database (Denmark)

    Ishai, Yuval; Kumarasubramanian, Abishek; Orlandi, Claudio

    2011-01-01

    …functionalities was left open. We provide the first convincing evidence that the answer to this question is negative, namely that some (randomized) functionalities cannot be realized with adaptive security. We obtain this result by studying the following related invertible sampling problem: given an efficient sampling algorithm A, obtain another sampling algorithm B such that the output of B is computationally indistinguishable from the output of A, but B can be efficiently inverted (even if A cannot). This invertible sampling problem is independently motivated by other cryptographic applications. We show, under strong but well studied assumptions, that there exist efficient sampling algorithms A for which invertible sampling as above is impossible. At the same time, we show that a general feasibility result for adaptively secure MPC implies that invertible sampling is possible for every A, thereby…

  12. Noise texture and signal detectability in propagation-based x-ray phase-contrast tomography

    International Nuclear Information System (INIS)

    Chou, Cheng-Ying; Anastasio, Mark A.

    2010-01-01

    Purpose: X-ray phase-contrast tomography (PCT) is a rapidly emerging imaging modality for reconstructing estimates of an object's three-dimensional x-ray refractive index distribution. Unlike conventional x-ray computed tomography methods, the statistical properties of the reconstructed images in PCT remain unexplored. The purpose of this work is to quantitatively investigate noise propagation in PCT image reconstruction. Methods: The authors derived explicit expressions for the autocovariance of the reconstructed absorption and refractive index images to characterize noise texture and understand how the noise properties are influenced by the imaging geometry. Concepts from statistical detection theory were employed to understand how the imaging geometry-dependent statistical properties affect the signal detection performance in a signal-known-exactly/background-known-exactly task. Results: The analytical formulas for the phase and absorption autocovariance functions were implemented numerically and compared to the corresponding empirical values, and excellent agreement was found. They observed that the reconstructed refractive images are highly spatially correlated, while the absorption images are not. The numerical results confirm that the strength of the covariance is scaled by the detector spacing. Signal detection studies were conducted, employing a numerical observer. The detection performance was found to monotonically increase as the detector-plane spacing was increased. Conclusions: The authors have conducted the first quantitative investigation of noise propagation in PCT image reconstruction. The reconstructed refractive images were found to be highly spatially correlated, while absorption images were not. This is due to the presence of a Fourier space singularity in the reconstruction formula for the refraction images. The statistical analysis may facilitate the use of task-based image quality measures to further develop and optimize this emerging imaging modality.

  13. Noise texture and signal detectability in propagation-based x-ray phase-contrast tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Cheng-Ying; Anastasio, Mark A. [Department of Bio-Industrial Mechatronics Engineering, National Taiwan University, 1, Section 4, Roosevelt Road, Taipei, Taiwan 106, Taiwan (China); Department of Biomedical Engineering, Medical Imaging Research Center, Illinois Institute of Technology, 3440 S. Dearborn Street, E1-116, Chicago, Illinois 60616 (United States)

    2010-01-15

    Purpose: X-ray phase-contrast tomography (PCT) is a rapidly emerging imaging modality for reconstructing estimates of an object's three-dimensional x-ray refractive index distribution. Unlike conventional x-ray computed tomography methods, the statistical properties of the reconstructed images in PCT remain unexplored. The purpose of this work is to quantitatively investigate noise propagation in PCT image reconstruction. Methods: The authors derived explicit expressions for the autocovariance of the reconstructed absorption and refractive index images to characterize noise texture and understand how the noise properties are influenced by the imaging geometry. Concepts from statistical detection theory were employed to understand how the imaging geometry-dependent statistical properties affect the signal detection performance in a signal-known-exactly/background-known-exactly task. Results: The analytical formulas for the phase and absorption autocovariance functions were implemented numerically and compared to the corresponding empirical values, and excellent agreement was found. They observed that the reconstructed refractive images are highly spatially correlated, while the absorption images are not. The numerical results confirm that the strength of the covariance is scaled by the detector spacing. Signal detection studies were conducted, employing a numerical observer. The detection performance was found to monotonically increase as the detector-plane spacing was increased. Conclusions: The authors have conducted the first quantitative investigation of noise propagation in PCT image reconstruction. The reconstructed refractive images were found to be highly spatially correlated, while absorption images were not. This is due to the presence of a Fourier space singularity in the reconstruction formula for the refraction images. The statistical analysis may facilitate the use of task-based image quality measures to further develop and optimize this emerging imaging modality.

  14. Unbiased Sampling and Meshing of Isosurfaces

    KAUST Repository

    Yan, Dongming

    2014-05-07

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c, of a function, F, is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is that of treating the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and integrating/sampling from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
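
    The height-function idea in this record can be illustrated in a single grid cell: with trilinear interpolation, fixing (x, y) makes F linear in z, so the isosurface height solves in closed form. The sketch below samples (x, y) uniformly, which omits the paper's slope restriction and area weighting needed for truly unbiased surface samples; it shows the closed-form height solve only:

```python
import random

def bilinear(v00, v10, v01, v11, x, y):
    """Bilinear interpolation of four corner values over a unit face."""
    return (v00 * (1 - x) * (1 - y) + v10 * x * (1 - y)
            + v01 * (1 - x) * y + v11 * x * y)

def sample_isosurface_z(corners, c, n_trials=10000, seed=0):
    """Points on the isosurface F(x,y,z) = c of the trilinear interpolant
    in a unit cell, treating the surface as a height function z = g(x, y).
    corners = (four bottom-face values, four top-face values)."""
    b00, b10, b01, b11, t00, t10, t01, t11 = corners
    rng = random.Random(seed)
    points = []
    for _ in range(n_trials):
        x, y = rng.random(), rng.random()
        f0 = bilinear(b00, b10, b01, b11, x, y)   # F at z = 0
        f1 = bilinear(t00, t10, t01, t11, x, y)   # F at z = 1
        if f0 == f1:
            continue                              # no unique height here
        z = (c - f0) / (f1 - f0)                  # F linear in z => closed form
        if 0.0 <= z <= 1.0:
            points.append((x, y, z))
    return points

# Corner values 0 on the bottom face and 1 on the top face make the
# c = 0.5 isosurface the flat plane z = 0.5.
points = sample_isosurface_z((0, 0, 0, 0, 1, 1, 1, 1), 0.5)
```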

  15. Unbiased Sampling and Meshing of Isosurfaces

    KAUST Repository

    Yan, Dongming; Wallner, Johannes; Wonka, Peter

    2014-01-01

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c, of a function, F, is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is that of treating the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and integrating/sampling from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.

  16. Sex assessment from carpals bones: discriminant function analysis in a contemporary Mexican sample.

    Science.gov (United States)

    Mastrangelo, Paola; De Luca, Stefano; Sánchez-Mejorada, Gabriela

    2011-06-15

    Sex assessment is one of the first essential steps in human identification, in both medico-legal cases and bio-archaeological contexts. Fragmentary human remains compromised by different types of burial or physical insults may frustrate the use of the traditional sex estimation methods, such as the analysis of the skull and pelvis. Currently, the application of discriminant functions to sex unidentified skeletal remains is steadily increasing. However, several studies have demonstrated that, due to variation in size and patterns of sexual dimorphism, discriminant functions are population-specific. In this study, in order to improve sex assessment from skeletal remains and to establish population-specific discriminant functions, the diagnostic values of the carpal bones were considered. A sample of 136 individuals (78 males, 58 females) of known sex and age was analyzed. They belong to a contemporary identified collection from the Laboratory of Physical Anthropology, Faculty of Medicine, UNAM (Universidad Nacional Autónoma de México, Mexico City). The age of the individuals ranged between 25 and 85 years. Between four and nine measurements of each carpal bone were taken. Independent t-tests confirm that all carpals are sexually dimorphic. Univariate measurements produce accuracy levels that range from 61.8% to 90.8%. Classification accuracies ranged between 81.3% and 92.3% in the multivariate stepwise discriminant analysis. In addition, intra- and inter-observer error tests were performed. These indicated that replication of measurements was satisfactory for the same observer over time and between observers. These results suggest that carpal bones can be used for assessing sex in both forensic and bio-archaeological identification procedures and that bone dimensions are population specific. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
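
    As a simplified illustration of the univariate case underlying such discriminant functions, a two-group sectioning rule classifies a measurement by the midpoint between the group means. The stepwise multivariate analysis in the study is more elaborate, and the measurements below are invented for illustration, not data from the Mexican sample:

```python
def fit_discriminant(group_a, group_b, label_a="male", label_b="female"):
    """Univariate two-group discriminant with a sectioning point midway
    between the group means (assumes roughly equal group variances)."""
    mean = lambda xs: sum(xs) / len(xs)
    m_a, m_b = mean(group_a), mean(group_b)
    cutoff = (m_a + m_b) / 2.0
    high, low = (label_a, label_b) if m_a > m_b else (label_b, label_a)

    def classify(x):
        return high if x > cutoff else low

    return classify, cutoff

# Hypothetical carpal length measurements (mm), for illustration only.
males = [18.2, 19.1, 17.8, 18.9, 19.4]
females = [15.9, 16.4, 15.2, 16.8, 16.1]
classify, cutoff = fit_discriminant(males, females)
```

    Classification accuracy is then the fraction of known-sex individuals the rule assigns correctly; on this toy data the rule separates the groups completely, whereas real carpal measurements reach the 61.8-92.3% accuracies reported above.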

  17. Sample Reuse in Statistical Remodeling.

    Science.gov (United States)

    1987-08-01

    as the jackknife and bootstrap, is an expansion of the functional, T(Fn), or of its distribution function or both. Frangos and Schucany (1987a) used...accelerated bootstrap. In the same report Frangos and Schucany demonstrated the small sample superiority of that approach over the proposals that take...higher order terms of an Edgeworth expansion into account. In a second report Frangos and Schucany (1987b) examined the small sample performance of

  18. Sensitive determination of polycyclic aromatic hydrocarbons in water samples by HPLC coupled with SPE based on graphene functionalized with triethoxysilane.

    Science.gov (United States)

    Huang, Ke-Jing; Li, Jing; Liu, Yan-Ming; Wang, Lan

    2013-02-01

    The graphene functionalized with (3-aminopropyl) triethoxysilane was synthesized by a simple hydrothermal reaction and applied as SPE sorbents to extract trace polycyclic aromatic hydrocarbons (PAHs) from environmental water samples. These sorbents possess high adsorption capacity and extraction efficiency due to strong adsorption ability of carbon materials and large specific surface area of nanoparticles, and only 10 mg of sorbents are required to extract PAHs from 100 mL water samples. Several condition parameters, such as eluent and its volume, adsorbent amount, sample volume, sample pH, and sample flow rate, were optimized to achieve good sensitivity and precision. Under the optimized extraction conditions, the method showed good linearity in the range of 1-100 μg/L, repeatability of the extraction (the RSDs were between 1.8 and 2.9%, n = 6), and satisfactory detection limits of 0.029-0.1 μg/L. The recoveries of PAHs spiked in environmental water samples ranged from 84.6 to 109.5%. All these results demonstrated that this new SPE technique was a viable alternative to conventional enrichment techniques for the extraction and analysis of PAHs in complex samples. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Hypersexual behavior in an online sample of males: associations with personal distress and functional impairment.

    Science.gov (United States)

    Spenhoff, Miriam; Kruger, Tillmann H C; Hartmann, Uwe; Kobs, Julia

    2013-12-01

    The population of individuals reporting hypersexual behavior is heterogeneous. Prior research has implicated the importance of personal distress and functional impairment, as both may serve as indicators of problem severity and relevance. Still, little is known about associations with distress and impairment following hypersexuality. The purpose of this study was to investigate personal distress and functional impairment in a community sample of male self-identified "sex addicts" and to explore the associations with related variables. Three hundred forty-nine men completed an online survey that included questions about personal distress, functional impairment, motivation for behavior change, type of hypersexual behaviors, time spent on sexual behavior, and progression of sexual urges. The survey included the Sexual Addiction Screening Test-Revised (SAST-R) core. Specific survey questions about personal distress and functional impairment in six life areas were used to assess these variables. Chi-square and P-values were calculated to explore the interrelations among them. There were 75.3% (N = 253) who reported feeling distressed due to hypersexual behavior. Functional impairment in at least one life area was specified by 77.4% (N = 270), and most participants (56.2%) reported impairment regarding partner relationships. Personal distress and functional impairment in three areas were associated with a strong motivation for behavior change. Distress was associated with online pornography use, masturbation, and/or sexual contact with changing partners. The progression of sexual urges was related to distress, while time spent on sexual behavior was not. There were 92.9% of the distressed participants who scored above the SAST-R core scale cut-off, but also 59.0% of the participants with little or no distress scored in this range. Results underline the particular role of problems in social or intimate relationships in association with hypersexuality. Clustering

  20. Does the Social Functioning Scale reflect real-life social functioning? An experience sampling study in patients with a non-affective psychotic disorder and healthy control individuals.

    Science.gov (United States)

    Schneider, M; Reininghaus, U; van Nierop, M; Janssens, M; Myin-Germeys, I

    2017-12-01

    The ecological validity of retrospective measures of social functioning is currently unknown in patients with schizophrenia. In the present study, patients with a diagnosis of non-affective psychosis were compared with controls on two measures of social functioning: the Social Functioning Scale (SFS) and daily-life measures collected with the Experience Sampling Methodology (ESM). The associations between both measures were examined in each group of participants to test for the ecological validity of the SFS. A total of 126 participants with a non-affective psychotic disorder and 109 controls completed the SFS and a 6-day momentary ESM protocol assessing various aspects of social functioning. Multiple linear and multilevel regression analyses were performed to test for group differences in social functioning level and examine associations between the two assessment techniques. Lower social functioning was observed in patients compared with controls on retrospective and momentary measures. The SFS interpersonal domain (social engagement/withdrawal and interpersonal behaviour dimensions) was associated with the percentage of time spent alone and negative appraisal of social interactions. The SFS activity domain (pro-social and recreational activities dimensions) was negatively associated with time spent in leisure activities. The SFS showed some degree of ecological validity at assessing broad aspects of social functioning. Low scores on the SFS social engagement/withdrawal and interpersonal behaviour dimensions captured social isolation and social avoidance in daily life, but not lack of interest in socializing. Ecological validity of the SFS activity domain was low. ESM offers a rich alternative to classical assessment techniques of social functioning.

  1. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
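    The attributes part of this problem has a simple exact form when measurement error is ignored: a falsification is detected if the random sample contains at least one falsified item, so the detection probability is hypergeometric. The sketch below is illustrative only (the inventory figures and function names are hypothetical, not from the paper) and omits the measurement uncertainties that the simulation study folds in:

```python
from math import comb

def detection_probability(N, D, n):
    """Exact probability that a random sample of n items, drawn without
    replacement from N items of which D are falsified, contains at
    least one falsified item (attributes sampling, no measurement error)."""
    return 1.0 - comb(N - D, n) / comb(N, n)

def smallest_sample(N, D, target=0.95):
    """Smallest sample size n achieving the required detection probability."""
    for n in range(1, N + 1):
        if detection_probability(N, D, n) >= target:
            return n
    return N

# Hypothetical inventory: 500 items, 20 falsified, 95% detection required.
n_req = smallest_sample(500, 20, 0.95)
```

    The paper's point is that layering conservative approximations on top of this kind of calculation can inflate the required sample size; the simulation replaces the approximations, not the underlying combinatorics.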

  2. Flow injection preconcentration system using a new functionalized resin for determination of cadmium and nickel in tobacco samples

    International Nuclear Information System (INIS)

    Lemos, Valfredo Azevedo; Novaes, Cleber Galvao; Lima, Adriana da Silva; Vieira, Daniel Rodrigues

    2008-01-01

    A solid-phase extraction method combined with flow injection (FI) on-line flame atomic absorption spectrometry (FAAS) for the determination of cadmium and nickel in tobacco samples is presented. The 2-aminothiophenol functionalized Amberlite XAD-4 (AT-XAD) resin was synthesized by covalent coupling of the ligand with the copolymer through a methylene group. A minicolumn packed with AT-XAD was connected into the automated on-line preconcentration system. Elution of metal ions from the minicolumn can be made with 0.50 mol L⁻¹ hydrochloric acid solution. With a consumption of 21.0 mL of sample solution, detection limits (3 s) of 0.3 (Cd) and 0.8 μg L⁻¹ (Ni) were achieved at a sample throughput of 18 h⁻¹. Enrichment factors (EF) of 99 (cadmium) and 43 (nickel) were obtained by comparing the slopes of the linear portions of the calibration curves before and after preconcentration. The contents of Cd and Ni in a certified reference material (NIST 1570a, spinach leaves) determined by the present method were in good agreement with the certified values. The developed procedure was also successfully applied to the determination of Cd and Ni in local tobacco samples
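    The enrichment factor quoted above is the ratio of the calibration slopes with and without preconcentration. A minimal sketch (with hypothetical calibration readings chosen to land near EF ≈ 99 for cadmium; `slope` is an illustrative helper, not from the paper):

```python
def slope(x, y):
    """Least-squares slope of y on x for a linear calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical FAAS calibration: concentration (ug/L) versus absorbance,
# measured directly and after on-line preconcentration.
conc = [0.0, 2.0, 4.0, 8.0, 16.0]
abs_direct = [0.000, 0.004, 0.008, 0.016, 0.032]
abs_preconc = [0.00, 0.40, 0.79, 1.59, 3.17]

# Enrichment factor: ratio of the two calibration slopes.
ef = slope(conc, abs_preconc) / slope(conc, abs_direct)
```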

  3. A Differential Item Functional Analysis by Age of Perceived Interpersonal Discrimination in a Multi-racial/ethnic Sample of Adults.

    Science.gov (United States)

    Owens, Sherry; Kristjansson, Alfgeir L; Hunte, Haslyn E R

    2015-11-05

    We investigated whether individual items on the nine-item Williams Perceived Everyday Discrimination Scale (EDS) functioned differently by age (ethnic group. Overall, Asian and Hispanic respondents reported less discrimination than Whites; on the other hand, African Americans and Black Caribbeans reported more discrimination than Whites. Regardless of race/ethnicity, the younger respondents (aged ethnicity, the results were mixed for 19 out of 45 tests of DIF (40%). No differences in item function were observed among Black Caribbeans. "Being called names or insulted" and others acting as "if they are afraid" of the respondents were the only two items that did not exhibit differential item functioning by age across all racial/ethnic groups. Overall, our findings suggest that the EDS should be used with caution in multi-age, multi-racial/ethnic samples.

  4. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated-curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.

  5. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    International Nuclear Information System (INIS)

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated-curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned. (author)
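    The harmonic filtering evaluated in this study is straightforward to reproduce: transform the periodic curve, zero every Fourier coefficient above the chosen harmonic, and transform back. A minimal sketch with a synthetic, hypothetical time-activity curve (not data from the study):

```python
import numpy as np

def fourier_filter(curve, n_harmonics):
    """Keep the mean (DC term) plus the first n_harmonics of a periodic
    curve and zero out all higher harmonics."""
    spec = np.fft.rfft(curve)
    spec[n_harmonics + 1:] = 0.0
    return np.fft.irfft(spec, n=len(curve))

# Hypothetical noisy left-ventricular time-activity curve (one cycle).
t = np.linspace(0.0, 1.0, 64, endpoint=False)
clean = 1.0 - 0.4 * np.cos(2 * np.pi * t) - 0.1 * np.cos(4 * np.pi * t)
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0.0, 0.05, t.size)

smoothed = fourier_filter(noisy, 3)   # preserve three harmonics (3H)
```

    Keeping only two or three harmonics removes most of the broadband noise while the low-order systolic/diastolic shape survives, which is consistent with the compromise the authors report.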

  6. A comprehensive evaluation of potential lung function associated genes in the SpiroMeta general population sample.

    Directory of Open Access Journals (Sweden)

    Ma'en Obeidat

    Full Text Available Lung function measures are heritable traits that predict population morbidity and mortality and are essential for the diagnosis of chronic obstructive pulmonary disease (COPD). Variations in many genes have been reported to affect these traits, but attempts at replication have provided conflicting results. Recently, we undertook a meta-analysis of Genome Wide Association Study (GWAS) results for lung function measures in 20,288 individuals from the general population (the SpiroMeta consortium). To comprehensively analyse previously reported genetic associations with lung function measures, and to investigate whether single nucleotide polymorphisms (SNPs) in these genomic regions are associated with lung function in a large population sample. We analysed association for SNPs tagging 130 genes and 48 intergenic regions (±10 kb), after conducting a systematic review of the literature in the PubMed database for genetic association studies reporting lung function associations. The analysis included 16,936 genotyped and imputed SNPs. No loci showed overall significant association for FEV1 or FEV1/FVC traits using a carefully defined significance threshold of 1.3×10⁻⁵. The most significant loci associated with FEV1 include SNPs tagging MACROD2 (P = 6.81×10⁻⁵), CNTN5 (P = 4.37×10⁻⁴), and TRPV4 (P = 1.58×10⁻³). Among ever-smokers, SERPINA1 showed the most significant association with FEV1 (P = 8.41×10⁻⁵), followed by PDE4D (P = 1.22×10⁻⁴). The strongest associations with the FEV1/FVC ratio were observed with ABCC1 (P = 4.38×10⁻⁴) and ESR1 (P = 5.42×10⁻⁴) among ever-smokers. Polymorphisms spanning previously associated lung function genes did not show strong evidence for association with lung function measures in the SpiroMeta consortium population. Common SERPINA1 polymorphisms may affect FEV1 among smokers in the general population.

  7. Factors Affecting Cognitive Function in Older Adults: A Turkish Sample

    OpenAIRE

    Akdag, Beyza; Telci, Emine Aslan; Cavlak, Ugur

    2013-01-01

    Background: The purpose of this study was to determine the influential factors of cognitive function in older adults. Methods: In this study, 377 older adults (mean age: 74.71 ± 6.15 years) were examined. The Hodkinson Abbreviated Mental Test (HAMT) was used to describe cognitive function of the individuals. The Centers for Disease Control (CDC) Health-Related Quality of Life (HRQOL-4) survey tool was used to measure the quality of life. Possible influential factors of cognitive function w...

  8. Ventral Striatum Functional Connectivity as a Predictor of Adolescent Depressive Disorder in a Longitudinal Community-Based Sample.

    Science.gov (United States)

    Pan, Pedro Mario; Sato, João R; Salum, Giovanni A; Rohde, Luis A; Gadelha, Ary; Zugman, Andre; Mari, Jair; Jackowski, Andrea; Picon, Felipe; Miguel, Eurípedes C; Pine, Daniel S; Leibenluft, Ellen; Bressan, Rodrigo A; Stringaris, Argyris

    2017-11-01

    Previous studies have implicated aberrant reward processing in the pathogenesis of adolescent depression. However, no study has used functional connectivity within a distributed reward network, assessed using resting-state functional MRI (fMRI), to predict the onset of depression in adolescents. This study used reward network-based functional connectivity at baseline to predict depressive disorder at follow-up in a community sample of adolescents. A total of 637 children 6-12 years old underwent resting-state fMRI. Discovery and replication analyses tested intrinsic functional connectivity (iFC) among nodes of a putative reward network. Logistic regression tested whether striatal node strength, a measure of reward-related iFC, predicted onset of a depressive disorder at 3-year follow-up. Further analyses investigated the specificity of this prediction. Increased left ventral striatum node strength predicted increased risk for future depressive disorder (odds ratio=1.54, 95% CI=1.09-2.18), even after excluding participants who had depressive disorders at baseline (odds ratio=1.52, 95% CI=1.05-2.20). Among 11 reward-network nodes, only the left ventral striatum significantly predicted depression. Striatal node strength did not predict other common adolescent psychopathology, such as anxiety, attention deficit hyperactivity disorder, and substance use. Aberrant ventral striatum functional connectivity specifically predicts future risk for depressive disorder. This finding further emphasizes the need to understand how brain reward networks contribute to youth depression.

  9. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
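    The under-estimation problem the report targets is easy to demonstrate: with only a handful of samples, the naive empirical central-95% interval is systematically narrower than the true one. A hedged sketch (standard-normal toy example; the function name is illustrative, and none of the report's actual bounding methods are implemented here):

```python
import random

random.seed(1)

def naive_central95(samples):
    """Central-95% interval read directly off the sorted samples; with
    few samples this systematically under-covers the true interval."""
    s = sorted(samples)
    return s[int(0.025 * len(s))], s[int(0.975 * len(s))]

# True distribution: standard normal, whose central 95% spans 2 * 1.96.
n, trials = 10, 2000
total_width = 0.0
for _ in range(trials):
    lo, hi = naive_central95([random.gauss(0.0, 1.0) for _ in range(n)])
    total_width += hi - lo
mean_width = total_width / trials   # average estimated interval width
```

    For n = 10 the naive interval is essentially the sample range, whose expected width (about 3.1 for a standard normal) falls well short of the true 3.92; the report's methods are designed to bound rather than under-shoot such quantities.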

  10. Combined short scale roughness and surface dielectric function gradient effects on the determination of tip-sample force in atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Gusso, André, E-mail: gusso@metal.eeimvr.uff.br [Departamento de Ciências Exatas-EEIMVR, Universidade Federal Fluminense, Volta Redonda, RJ 27255-125 (Brazil)

    2013-11-11

    The contribution of tip roughness to the van der Waals force between an atomic force microscopy probe tip and the sample is calculated using the multilayer effective medium model, which allows us to consider the relevant case of roughness characterized by correlation length and amplitude in the nanometer scale. The effect of the surface dielectric function gradient is incorporated in the tip-sample force model. It is concluded that for rms roughness in the few-nanometer range, the effect of short-scale tip roughness is quite significant.

  11. Improvement in quality of life and sexual functioning in a comorbid sample after the unified protocol transdiagnostic group treatment.

    Science.gov (United States)

    de Ornelas Maia, Ana Claudia Corrêa; Sanford, Jenny; Boettcher, Hannah; Nardi, Antonio E; Barlow, David

    2017-10-01

    Patients with multiple mental disorders often experience sexual dysfunction and reduced quality of life. The unified protocol (UP) is a transdiagnostic treatment for emotional disorders that has the potential to improve quality of life and sexual functioning via improved emotion management. The present study evaluates changes in quality of life and sexual functioning in a highly comorbid sample treated with the UP in a group format. Forty-eight patients were randomly assigned to either a UP active-treatment group or a medication-only control group. Treatment was delivered in 14 sessions over the course of 4 months. Symptoms of anxiety and depression were assessed using the Beck Anxiety Inventory and Beck Depression Inventory. Sexual functioning was assessed by the Arizona Sexual Experience Scale (ASEX), and quality of life was assessed by the World Health Organization Quality of Life-BREF scale (WHOQOL-BREF). Quality of life, anxiety and depression all significantly improved among participants treated with the UP. Some improvement in sexual functioning was also noted. The results support the efficacy of the UP in improving quality of life and sexual functioning in comorbid patients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Essays in financial economics and econometrics

    Science.gov (United States)

    La Spada, Gabriele

    Chapter 1 (my job market paper) asks the following question: Do asset managers reach for yield because of competitive pressures in a low-rate environment? I propose a tournament model of money market funds (MMFs) to study this issue. I show that funds with different costs of default respond differently to changes in interest rates, and that it is important to distinguish the role of risk-free rates from that of risk premia. An increase in the risk premium leads funds with lower default costs to increase risk-taking, while funds with higher default costs reduce risk-taking. Without changes in the premium, low risk-free rates reduce risk-taking. My empirical analysis shows that these predictions are consistent with the risk-taking of MMFs during the 2006--2008 period. Chapter 2, co-authored with Fabrizio Lillo and published in Studies in Nonlinear Dynamics and Econometrics (2014), studies the effect of round-off error (or discretization) on stationary Gaussian long-memory processes. For large lags, the autocovariance is rescaled by a factor smaller than one, and we compute this factor exactly. Hence, the discretized process has the same Hurst exponent as the underlying one. We show that in the presence of round-off error, two common estimators of the Hurst exponent, the local Whittle (LW) estimator and the detrended fluctuation analysis (DFA), are severely negatively biased in finite samples. We derive conditions for consistency and asymptotic normality of the LW estimator applied to discretized processes and compute the asymptotic properties of the DFA for generic long-memory processes that encompass discretized processes. Chapter 3, co-authored with Fabrizio Lillo, studies the effect of round-off error on integrated Gaussian processes with possibly correlated increments. We derive the variance and kurtosis of the realized increment process in the limit of both "small" and "large" round-off errors, and its autocovariance for large lags. We propose novel estimators for the
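    The rescaling of the large-lag autocovariance under round-off can be illustrated with a sample autocovariance function. The sketch below uses an AR(1) series as a convenient stand-in for the Gaussian long-memory processes treated in the chapter, and a deliberately coarse rounding grid; all parameters are illustrative:

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance gamma_hat(h) for h = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    n = x.size
    return np.array([np.dot(xc[:n - h], xc[h:]) / n
                     for h in range(max_lag + 1)])

# AR(1) stand-in for a stationary Gaussian process (phi, n illustrative).
rng = np.random.default_rng(42)
n, phi = 200_000, 0.9
eps = rng.normal(0.0, 1.0, n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

grid = 5.0                               # deliberately coarse round-off grid
x_rounded = np.round(x / grid) * grid

acv = sample_autocovariance(x, 10)
acv_rounded = sample_autocovariance(x_rounded, 10)
ratio = acv_rounded[1:] / acv[1:]        # damping factor at positive lags
```

    On this toy example the lag-0 term (the variance) is inflated by the round-off noise, while the positive-lag autocovariances are damped by a roughly constant factor below one, mirroring the rescaling result described above.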

  13. Sample heterogeneity in unipolar depression as assessed by functional connectivity analyses is dominated by general disease effects.

    Science.gov (United States)

    Feder, Stephan; Sundermann, Benedikt; Wersching, Heike; Teuber, Anja; Kugel, Harald; Teismann, Henning; Heindel, Walter; Berger, Klaus; Pfleiderer, Bettina

    2017-11-01

    Combinations of resting-state fMRI and machine-learning techniques are increasingly employed to develop diagnostic models for mental disorders. However, little is known about the neurobiological heterogeneity of depression, and diagnostic machine learning has mainly been tested in homogeneous samples. Our main objective was to explore the inherent structure of a diverse unipolar depression sample. The secondary objective was to assess whether such information can improve diagnostic classification. We analyzed data from 360 patients with unipolar depression and 360 non-depressed population controls, who were subdivided into two independent subsets. Cluster analyses (unsupervised learning) of functional connectivity were used to generate hypotheses about potential patient subgroups from the first subset. The relationship of clusters with demographical and clinical measures was assessed. Subsequently, diagnostic classifiers (supervised learning), which incorporated information about these putative depression subgroups, were trained. Exploratory cluster analyses revealed two weakly separable subgroups of depressed patients. These subgroups differed in the average duration of depression and in the proportion of patients with concurrently severe depression and anxiety symptoms. The diagnostic classification models performed at chance level. It remains unresolved whether the subgroups represent distinct biological subtypes, variability of continuous clinical variables, or in part an overfitting of sparsely structured data. Functional connectivity in unipolar depression is associated with general disease effects. Cluster analyses provide hypotheses about potential depression subtypes. Diagnostic models did not benefit from this additional information regarding heterogeneity. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Position-dependent correlation function from the SDSS-III Baryon Oscillation Spectroscopic Survey Data Release 10 CMASS sample

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wagner, Christian; Sánchez, Ariel G.; Schmidt, Fabian; Komatsu, Eiichiro

    2015-01-01

    We report on the first measurement of the three-point function with the position-dependent correlation function from the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 10 CMASS sample. This new observable measures the correlation between two-point functions of galaxy pairs within different subvolumes, ξ̂(r, r_L), where r_L is the location of a subvolume, and the corresponding mean overdensities, δ̄(r_L). This correlation, which we call the 'integrated three-point function', iζ(r) ≡ ⟨ξ̂(r, r_L) δ̄(r_L)⟩, measures a three-point function of two short- and one long-wavelength modes, and is generated by nonlinear gravitational evolution and possibly also by the physics of inflation. The iζ(r) measured from the BOSS data lies within the scatter of those from the mock galaxy catalogs in redshift space, yielding a ten-percent-level determination of the amplitude of iζ(r). The tree-level perturbation theory in redshift space predicts how this amplitude depends on the linear and quadratic nonlinear galaxy bias parameters (b₁ and b₂), as well as on the amplitude and linear growth rate of matter fluctuations (σ₈ and f). Combining iζ(r) with the constraints on b₁σ₈ and fσ₈ from the global two-point correlation function and that on σ₈ from the weak lensing signal of BOSS galaxies, we measure b₂ = 0.41 ± 0.41 (68% C.L.) assuming standard perturbation theory at the tree level and the local bias model
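    The idea behind the position-dependent correlation function can be caricatured in one dimension: split the field into subvolumes, measure a two-point amplitude and the mean overdensity in each, and correlate the two across subvolumes. The sketch below is a deliberately crude toy (white noise with a quadratic local bias, and the zero-lag variance standing in for the two-point function); it is not the BOSS estimator:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 1D "density" field: Gaussian noise with a quadratic local bias,
# giving a skewed, non-Gaussian field (parameters illustrative).
g = rng.normal(0.0, 1.0, 512 * 256)
delta = g + 0.5 * (g**2 - 1.0)

# Split into subvolumes; in each, measure the mean overdensity and a
# crude two-point amplitude (the zero-lag variance).
sub = delta.reshape(512, 256)
mean_over = sub.mean(axis=1)
two_point = sub.var(axis=1)

# Correlate the position-dependent two-point amplitude with the local
# mean overdensity across subvolumes.
izeta = np.corrcoef(mean_over, two_point)[0, 1]
```

    For a skewed, locally biased field the overdense subvolumes carry a larger small-scale two-point amplitude, so the correlation comes out positive, which is the qualitative signal that the integrated three-point function quantifies.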

  15. Modelling the autocovariance of the power spectrum of a solar-type oscillator

    DEFF Research Database (Denmark)

    Campante, T.L.; Karoff, Christoffer

    2010-01-01

    tool in the analysis of the more than 1000 solar-type stars expected to be observed as part of the Kepler Asteroseismic Investigation (KAI). We apply the aforementioned procedure to simulations of the Sun. Assuming different apparent magnitudes, we address the issues of how accurately and how precisely...

  16. The ocean sampling day consortium

    DEFF Research Database (Denmark)

    Kopf, Anna; Bicak, Mesude; Kottmann, Renzo

    2015-01-01

    Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate...... the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our...

  17. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using different components of the far field patterns. Only inner products are involved in the computation, so the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound on the proposed indicator functionals for sampling points inside the scatterers, while for sampling points outside the scatterers we show that the indicator functionals decay like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.

  18. Female sexual self-schema after interpersonal trauma: relationship to psychiatric and cognitive functioning in a clinical treatment-seeking sample.

    Science.gov (United States)

    Blain, Leah M; Galovski, Tara E; Peterson, Zoë D

    2011-04-01

    This study assessed the relationship between sexual self-schema and posttraumatic functioning in a clinical sample of 112 female sexual assault survivors. Contrary to hypotheses, posttraumatic stress disorder and depressive symptom severity were unrelated to the valence of sexual self-schema. Yet, negative posttraumatic cognitions were related to sexual self-schemas. Specifically, less positive self-views were associated with more negative schema (r = -.35). In a multivariate analysis, the measure of negative views of the world and others was associated with more positive schema. Results indicate that intervening to improve survivors' postassault appraisals of the self may help to reduce the impact of interpersonal trauma on women's sexual functioning. Copyright © 2011 International Society for Traumatic Stress Studies.

  19. Cerebral Small Vessel Disease: Cognition, Mood, Daily Functioning, and Imaging Findings from a Small Pilot Sample

    Directory of Open Access Journals (Sweden)

    John G. Baker

    2012-04-01

    Full Text Available Cerebral small vessel disease, a leading cause of cognitive decline, is considered a relatively homogeneous disease process, and it can co-occur with Alzheimer's disease. Clinical reports of magnetic resonance imaging (MRI)/computed tomography and single photon emission computed tomography (SPECT) imaging and neuropsychology testing for a small pilot sample of 14 patients are presented to illustrate disease characteristics through findings from structural and functional imaging and cognitive assessment. Participants showed some decreases in executive functioning, attention, processing speed, and memory retrieval, consistent with previous literature. An older subgroup showed lower age-corrected scores at a single time point compared to younger participants. Performance on a computer-administered cognitive measure showed a slight overall decline over a period of 8–28 months. For a case study with mild neuropsychology findings, the MRI report was normal while the SPECT report identified perfusion abnormalities. Future research can test whether advances in imaging analysis allow for identification of cerebral small vessel disease before changes are detected in cognition.

  20. Flame Atomic Absorption Determination of Gold Ion in Aqueous Samples after Preconcentration Using 9-Acridinylamine Functionalized γ-Alumina Nanoparticles

    Directory of Open Access Journals (Sweden)

    Mohammad Karimi

    2013-01-01

    Full Text Available A simple and sensitive solid phase extraction utilizing 9-acridinylamine functionalized alumina nanoparticles was developed, and their potential use for preconcentration and subsequent determination of gold by flame atomic absorption spectrometry (FAAS) was investigated. A number of parameters, namely, type, concentration, and volume of eluent, pH of the sample solution, flow rate of extraction, and volume of the sample, were evaluated. The effect of a variety of ions on preconcentration and recovery was also investigated. Gold ions were found to be recovered quantitatively at pH 3.0, with 0.1 mol L−1 thiourea in 2 mol L−1 H2SO4 as eluent. The limit of detection (LOD), defined as five times the standard deviation of the blank, was determined to be lower than 13.0 ppb. Under optimum conditions, the accuracy and precision (RSD %) of the method were >98.0% and <1.5%, respectively. To gauge its applicability to real samples, the proposed method was successfully applied to the determination of gold concentration in waste water samples and one soil standard material, and satisfactory results were obtained.
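    The detection limit quoted above follows directly from its definition: five times the standard deviation of replicate blank readings, converted to concentration through the calibration slope. A minimal sketch with hypothetical blank readings and a hypothetical slope chosen to land near the reported 13.0 ppb:

```python
import statistics

# Hypothetical replicate blank absorbance readings (FAAS).
blank = [0.0021, 0.0018, 0.0024, 0.0019, 0.0022,
         0.0017, 0.0020, 0.0023, 0.0018, 0.0021]
slope = 8.5e-05   # hypothetical calibration slope, absorbance per ppb

# LOD: five times the standard deviation of the blank, expressed in
# concentration units via the calibration slope.
s_blank = statistics.stdev(blank)
lod_ppb = 5.0 * s_blank / slope
```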

  1. Do neurotransmitters sampled by brain microdialysis reflect functional release?

    NARCIS (Netherlands)

    Westerink, BHC; Timmerman, W

    1999-01-01

    Brain microdialysis is an invasive sampling technique and will always cause damage to nervous tissue. For proper interpretation of the results, possible sources of interference need to be identified. The present review discusses the possible artefacts of the microdialysis technique and evaluates

  2. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy

  3. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
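    The Hermite case can be sketched with numpy's probabilists' Hermite basis: draw Monte Carlo samples from the natural (standard normal) input distribution, build the polynomial design matrix, and recover the PC coefficients. For brevity this illustration solves an overdetermined least-squares problem instead of the paper's ℓ1-minimization and uses no coherence-optimal sampling; the coefficient values are made up:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)

# Sparse 1-D PC expansion u(x) = sum_k c_k He_k(x) with x ~ N(0, 1);
# the coefficient values are made up for illustration.
c_true = np.zeros(9)
c_true[[0, 2, 5]] = [1.0, -0.7, 0.3]

# Monte Carlo samples from the natural (Gaussian) input distribution.
x = rng.normal(0.0, 1.0, 200)
u = He.hermeval(x, c_true)          # model evaluations at the samples

# Design matrix of probabilists' Hermite polynomials up to degree 8;
# plain least squares stands in for the paper's l1-minimization.
A = He.hermevander(x, 8)
c_hat, *_ = np.linalg.lstsq(A, u, rcond=None)
```

    With noiseless evaluations and more samples than basis functions the recovery is exact; the paper's regime of interest is the opposite one, where sparsity and the choice of sampling distribution must compensate for having fewer samples than coefficients.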

  4. Sample vial inserts: A better approach for sampling heterogeneous slurry samples in the SRS Defense Waste Processing Facility

    International Nuclear Information System (INIS)

    Coleman, C.J.; Goode, S.R.

    1996-01-01

    A convenient and effective new approach for analyzing DWPF samples involves the use of inserts with volumes of 1.5--3 ml placed in the neck of 14 ml sample vials. The inserts have rims that conform to the rim of the vials so that they sit straight and stable in the vial. The DWPF tank sampling system fills the pre-weighed insert rather than the entire vial, so the vial functions only as the insert holder. The shielded cell operator then removes the vial cap and decants the insert containing the sample into a plastic bottle, crucible, etc., for analysis. Inert materials such as Teflon, plastic, and zirconium are used for the insert, so it is unnecessary to separate the insert from the sample for most analyses. The key advantage of using inserts rather than filling sample vials is that they provide a convenient and almost foolproof way of obtaining and handling small volumes of slurry samples in a shielded cell without corrupting the sample. Since the insert allows the entire sample to be analyzed, this approach eliminates the errors inherent in subsampling the heterogeneous slurries that comprise DWPF samples. Slurry samples can then be analyzed with confidence. Analysis times are dramatically reduced by eliminating the drying and vitrification steps normally used to produce a homogeneous solid sample. Direct dissolution and elemental analysis of slurry samples are achieved in 8 hours or less, compared with 40 hours for analysis of vitrified slurry samples. Comparison of samples taken in inserts versus full vials indicates that the insert does not significantly affect sample composition.

  5. Selective extraction of emerging contaminants from water samples by dispersive liquid-liquid microextraction using functionalized ionic liquids.

    Science.gov (United States)

    Yao, Cong; Li, Tianhao; Twu, Pamela; Pitner, William R; Anderson, Jared L

    2011-03-25

    Functionalized ionic liquids containing the tris(pentafluoroethyl)trifluorophosphate (FAP) anion were used as extraction solvents in dispersive liquid-liquid microextraction (DLLME) for the extraction of 14 emerging contaminants from water samples. The extraction efficiencies and selectivities were compared to those of an in situ IL DLLME method which uses an in situ metathesis reaction to convert 1-butyl-3-methylimidazolium chloride (BMIM-Cl) to 1-butyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide (BMIM-NTf(2)). Compounds containing tertiary amine functionality were extracted with high selectivity and sensitivity by the 1-(6-amino-hexyl)-1-methylpyrrolidinium tris(pentafluoroethyl)trifluorophosphate (HNH(2)MPL-FAP) IL compared to other FAP-based ILs and the BMIM-NTf(2) IL. On the other hand, polar or acidic compounds without amine groups exhibited higher enrichment factors using the BMIM-NTf(2) IL. The detection limits for the studied analytes varied from 0.1 to 55.1 μg/L using the traditional IL DLLME method with the HNH(2)MPL-FAP IL as extraction solvent, and from 0.1 to 55.8 μg/L using the in situ IL DLLME method with BMIM-Cl+LiNTf(2) as extraction solvent. A 93-fold decrease in the detection limit of caffeine was observed when using the HNH(2)MPL-FAP IL compared to that obtained using the in situ IL DLLME method. Real water samples including tap water and creek water were analyzed with both IL DLLME methods and yielded recoveries ranging from 91% to 110%. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the overall power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted methods and illustrate them with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
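    Under independence, the "reject all endpoints" power is the product of the per-endpoint powers, which is why the naive largest-single-endpoint sample size undersizes the trial. A minimal sketch using a standard large-sample TOST power approximation; all margins and standard deviations below are hypothetical, not the study's values.

```python
import math
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def tost_power(n, delta, sigma, alpha=0.05):
    """Large-sample approximate power of TOST for a parallel design,
    true difference 0, equivalence margin +/- delta, per-arm size n."""
    z = nd.inv_cdf(1 - alpha)
    se = sigma * sqrt(2.0 / n)
    return max(0.0, 2 * nd.cdf(delta / se - z) - 1)

endpoints = [   # (margin, sd) for two AUC/Cmax-like endpoints (hypothetical)
    (0.223, 0.30),
    (0.223, 0.35),
]

def n_for(power_fn, target=0.80):
    n = 2
    while power_fn(n) < target:
        n += 1
    return n

# Naive: size each endpoint separately, take the maximum.
naive = max(n_for(lambda n, d=d, s=s: tost_power(n, d, s)) for d, s in endpoints)

# Independent endpoints: overall power is the product of individual powers.
joint = n_for(lambda n: math.prod(tost_power(n, d, s) for d, s in endpoints))

print(naive, joint)
```

    The joint sample size is never smaller than the naive one, since the product of powers is bounded above by the smallest individual power; positive correlation between endpoints, handled exactly in the paper, shrinks this gap.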

  7. Prevalence of psychological trauma and association with current health and functioning in a sample of HIV-infected and HIV-uninfected Tanzanian adults.

    Directory of Open Access Journals (Sweden)

    Brian W Pence

    Full Text Available In high income nations, traumatic life experiences such as childhood sexual abuse are much more common in people living with HIV/AIDS (PLWHA) than the general population, and trauma is associated with worse current health and functioning. Virtually no data exist on the prevalence or consequences of trauma for PLWHA in low income nations. We recruited four cohorts of Tanzanian patients in established medical care for HIV infection (n = 228), individuals newly testing positive for HIV (n = 267), individuals testing negative for HIV at the same sites (n = 182), and a random sample of community-dwelling adults (n = 249). We assessed lifetime prevalence of traumatic experiences, recent stressful life events, and current mental health and health-related physical functioning. Those with established HIV infection reported a greater number of childhood and lifetime traumatic experiences (2.1 and 3.0 respectively) than the community cohort (1.8 and 2.3). Those with established HIV infection reported greater post-traumatic stress disorder (PTSD) symptomatology and worse current health-related physical functioning. Each additional lifetime traumatic experience was associated with increased PTSD symptomatology and worse functioning. This study is the first to our knowledge in an HIV population from a low income nation to report the prevalence of a range of potentially traumatic life experiences compared to a matched community sample and to show that trauma history is associated with poorer health-related physical functioning. Our findings underscore the importance of considering psychosocial characteristics when planning to meet the health needs of PLWHA in low income countries.

  8. Prevalence of psychological trauma and association with current health and functioning in a sample of HIV-infected and HIV-uninfected Tanzanian adults.

    Science.gov (United States)

    Pence, Brian W; Shirey, Kristen; Whetten, Kathryn; Agala, Bernard; Itemba, Dafrosa; Adams, Julie; Whetten, Rachel; Yao, Jia; Shao, John

    2012-01-01

    In high income nations, traumatic life experiences such as childhood sexual abuse are much more common in people living with HIV/AIDS (PLWHA) than the general population, and trauma is associated with worse current health and functioning. Virtually no data exist on the prevalence or consequences of trauma for PLWHA in low income nations. We recruited four cohorts of Tanzanian patients in established medical care for HIV infection (n = 228), individuals newly testing positive for HIV (n = 267), individuals testing negative for HIV at the same sites (n = 182), and a random sample of community-dwelling adults (n = 249). We assessed lifetime prevalence of traumatic experiences, recent stressful life events, and current mental health and health-related physical functioning. Those with established HIV infection reported a greater number of childhood and lifetime traumatic experiences (2.1 and 3.0 respectively) than the community cohort (1.8 and 2.3). Those with established HIV infection reported greater post-traumatic stress disorder (PTSD) symptomatology and worse current health-related physical functioning. Each additional lifetime traumatic experience was associated with increased PTSD symptomatology and worse functioning. This study is the first to our knowledge in an HIV population from a low income nation to report the prevalence of a range of potentially traumatic life experiences compared to a matched community sample and to show that trauma history is associated with poorer health-related physical functioning. Our findings underscore the importance of considering psychosocial characteristics when planning to meet the health needs of PLWHA in low income countries.

  9. Uncertainty evaluation for IIR (infinite impulse response) filtering using a state-space approach

    International Nuclear Information System (INIS)

    Link, Alfred; Elster, Clemens

    2009-01-01

    A novel method is proposed for evaluating the uncertainty associated with the output of a discrete-time IIR filter when the input signal is corrupted by additive noise and the filter coefficients are uncertain. This task arises, for instance, when the noise-corrupted output of a measurement system is compensated by a digital filter which has been designed on the basis of the characteristics of the measurement system. We assume that the noise is either stationary or uncorrelated, and we presume knowledge about its autocovariance function or its time-dependent variances, respectively. Uncertainty evaluation is considered in line with the 'Guide to the Expression of Uncertainty in Measurement'. A state-space representation is used to derive a calculation scheme which allows the uncertainties to be evaluated in an easy way and also enables real-time applications. The proposed procedure is illustrated by an example
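    The covariance propagation underlying such a state-space evaluation can be sketched for the simplest case the authors mention: uncorrelated (white) input noise and exactly known filter coefficients. The second-order filter below is hypothetical, and the paper's full method additionally handles stationary (colored) noise and uncertain coefficients.

```python
import numpy as np
from scipy.signal import tf2ss, lfilter

# Hypothetical stable 2nd-order low-pass IIR filter (illustrative coefficients).
b = [0.0675, 0.1349, 0.0675]
a = [1.0, -1.1430, 0.4128]
A, B, C, D = tf2ss(b, a)        # state-space realization of the filter

sigma_u = 0.1                   # std of the uncorrelated input noise
n = 200

# Propagate the state covariance P_k and the output variance var(y_k):
#   x_{k+1} = A x_k + B u_k,   y_k = C x_k + D u_k
P = np.zeros_like(A)
var_y = np.empty(n)
for k in range(n):
    var_y[k] = (C @ P @ C.T + sigma_u**2 * D @ D.T).item()
    P = A @ P @ A.T + sigma_u**2 * B @ B.T

u_y_std = np.sqrt(var_y)        # standard uncertainty of the filter output
print(u_y_std[0], u_y_std[-1])
```

    The recursion has constant cost per sample, which is what makes the state-space formulation suitable for the real-time applications mentioned in the abstract; the steady-state value equals sigma_u times the l2 norm of the filter's impulse response.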

  10. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations on solar-type stars has revealed a need for automated analysis tools. The reason for this is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open...... are calculated in a consistent way. Here we present a set of automated asteroseismic analysis tools. The main engine of this set of tools is an algorithm for modelling the autocovariance spectra of the stellar acoustic spectra, allowing us to measure not only the frequency of maximum power and the large......, radius, luminosity, effective temperature, surface gravity and age based on grid modeling. All the tools take into account the window function of the observations, which means that they work equally well for space-based photometry observations from e.g. the NASA Kepler satellite and ground-based velocity...

  11. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  12. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  13. GET electronics samples data analysis

    International Nuclear Information System (INIS)

    Giovinazzo, J.; Goigoux, T.; Anvar, S.; Baron, P.; Blank, B.; Delagnes, E.; Grinyer, G.F.; Pancin, J.; Pedroza, J.L.; Pibernat, J.; Pollacco, E.; Rebii, A.

    2016-01-01

    The General Electronics for TPCs (GET) has been developed to equip a generation of time projection chamber detectors for nuclear physics, and may also be used for a wider range of detector types. The goal of this paper is to propose first analysis procedures to be applied on raw data samples from the GET system, in order to correct for systematic effects observed on test measurements. We also present a method to estimate the response function of the GET system channels. The response function is required in analysis where the input signal needs to be reconstructed, in terms of time distribution, from the registered output samples.

  14. Aerobot Sampling and Handling System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Honeybee Robotics proposes to: ?Derive and document the functional and technical requirements for Aerobot surface sampling and sample handling across a range of...

  15. Mixed functional monomers-based monolithic adsorbent for the effective extraction of sulfonylurea herbicides in water and soil samples.

    Science.gov (United States)

    Pei, Miao; Zhu, Xiangyu; Huang, Xiaojia

    2018-01-05

    Effective extraction is a key step in the determination of sulfonylurea herbicides (SUHs) in complicated samples. According to the chemical properties of SUHs, a new monolithic adsorbent utilizing acrylamidophenylboronic acid and vinylimidazole as mixed functional monomers was synthesized. The new adsorbent was employed as the extraction phase for multiple monolithic fiber solid-phase microextraction (MMF-SPME) of SUHs, and the extracted SUHs were determined by high-performance liquid chromatography with diode array detection (HPLC-DAD). The results demonstrate that the prepared adsorbent can extract SUHs from environmental waters and soil effectively through multiple interactions such as boronate affinity, dipole-dipole and π-π interactions. Under the optimized extraction conditions, the limits of detection for target SUHs in environmental water and soil samples were 0.018-0.17 μg/L and 0.14-1.23 μg/kg, respectively. At the same time, the developed method also displayed several analytical merits, including wide linear dynamic ranges, good method reproducibility, satisfactory sensitivity and low consumption of organic solvent. Finally, the developed method was successfully applied to monitor trace SUHs in environmental water and soil samples. The recoveries at three fortified concentrations were in the range of 70.6-119% with RSD below 11% in all cases. The obtained results demonstrate the excellent practical applicability of the developed MMF-SPME-HPLC-DAD method for monitoring SUHs in water and soil samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of the optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the python code model is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
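    The difference between deterministic and random discretization of an IMF can be illustrated with a simplified sketch: placing stars at the inverse-CDF of equally spaced probability midpoints of a single-slope Salpeter law. This midpoint stand-in is NOT the GalIMF optimal-sampling algorithm (which fixes one star per unit integral of the IMF), and all numbers are illustrative.

```python
import numpy as np

# Single-slope Salpeter IMF xi(m) ~ m**-2.35 on [0.08, 120] solar masses.
alpha, m_lo, m_hi, N = 2.35, 0.08, 120.0, 10_000
p = 1.0 - alpha     # exponent of the power-law CDF

def inv_cdf(u):
    """Inverse CDF of the truncated power-law IMF."""
    return (m_lo**p + u * (m_hi**p - m_lo**p)) ** (1.0 / p)

# Deterministic discretization: equally spaced probability midpoints.
u_det = (np.arange(N) + 0.5) / N
m_det = inv_cdf(u_det)

# Random discretization: the usual Monte Carlo draw from the same IMF.
rng = np.random.default_rng(5)
m_rnd = inv_cdf(rng.uniform(size=N))

# Deterministic sampling removes the star-to-star scatter in observables
# such as the mass of the most massive star, as the abstract argues.
print(m_det.max(), m_rnd.max())
```

    Rerunning the random draw with different seeds scatters m_rnd.max() over a wide range, while m_det.max() is fixed by N alone; this is the qualitative point behind preferring deterministic over random sampling.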

  17. NaCl samples for optical luminescence dosimetry

    International Nuclear Information System (INIS)

    Catli, S.

    2005-01-01

    Optically stimulated luminescence (OSL) has been used widely for luminescence dosimetry and dating. In many cases, it has been pointed out that the decay of the OSL does not generally follow a simple exponential function. In this study the infra-red stimulated luminescence (IRSL) intensity from NaCl samples was experimentally measured. The decay curves for this sample were fitted to several functions, and the best agreement was found with the function y = a + b exp(-cx). IRSL decay curves from NaCl were obtained using different β-doses and their dose response was investigated.
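    Fitting the saturating-exponential decay model mentioned above to a measured curve is a short nonlinear least-squares exercise. The sketch below fits y = a + b·exp(-c·t) to synthetic data; the parameter values and noise level are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b, c):
    """Saturating-exponential IRSL decay model y = a + b*exp(-c*t)."""
    return a + b * np.exp(-c * t)

# Synthetic decay curve standing in for a measured IRSL signal.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 50.0, 200)
y = decay(t, 120.0, 900.0, 0.15) + rng.normal(0.0, 5.0, t.size)

popt, pcov = curve_fit(decay, t, y, p0=(100.0, 800.0, 0.1))
perr = np.sqrt(np.diag(pcov))      # 1-sigma parameter uncertainties
print(np.round(popt, 2), np.round(perr, 3))
```

    The constant term a captures the non-decaying background that makes a pure exponential fit inadequate, which is consistent with the abstract's observation that the decay is not simply exponential.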

  18. Posttraumatic stress and its relationship to physical health functioning in a sample of Iraq and Afghanistan War veterans seeking postdeployment VA health care.

    Science.gov (United States)

    Jakupcak, Matthew; Luterek, Jane; Hunt, Stephen; Conybeare, Daniel; McFall, Miles

    2008-05-01

    The relationship between posttraumatic stress and physical health functioning was examined in a sample of Iraq and Afghanistan War veterans seeking postdeployment VA care. Iraq and Afghanistan War veterans (N = 108) who presented for treatment to a specialty postdeployment care clinic completed self-report questionnaires that assessed symptoms of posttraumatic stress disorder (PTSD), chemical exposure, combat exposure, and physical health functioning. As predicted, PTSD symptom severity was significantly associated with poorer health functioning, even after accounting for demographic factors, combat and chemical exposure, and health risk behaviors. These results highlight the unique influence of PTSD on the physical health in treatment seeking Iraq and Afghanistan War veterans.

  19. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    Full Text Available The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
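    The crude Monte Carlo baseline against which such sampling schemes are compared can be sketched as follows, for a hypothetical linear limit state with a known exact failure probability. This is plain Monte Carlo with Gaussian samples, not the Asymptotic Sampling extrapolation itself.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
x = rng.standard_normal((n, 2))     # the Gaussian design the AS method consumes

def g(x):
    # Hypothetical linear limit state g = beta*sqrt(2) - x1 - x2 with beta = 3;
    # since x1 + x2 ~ N(0, 2), the exact failure probability is Phi(-3).
    return 3.0 * np.sqrt(2.0) - x[:, 0] - x[:, 1]

p_f = np.mean(g(x) <= 0.0)          # failure defined as g(X) <= 0
print(p_f)
```

    Replacing the `standard_normal` call with an optimized LHS design or a transformed Sobol' sequence changes only the sample generation step, which is exactly the comparison the article carries out inside the AS algorithm.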

  20. Sampling soils for 137Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137 Cs using 25-, 500-, 2500-, and 12 500-cm 3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137 Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of the soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137 Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, while CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of the 137 Cs concentration data were usually larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots out of as many as 30 collected need be assayed for 137 Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137 Cs concentration decreased dramatically, but decreased very little with additional labor.

  1. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin; Vranesic, Melin; Lodge, Martin A.; Gulaldi, Nedim C. M.; Szabo, Zsolt, E-mail: zszabo@jhmi.edu [Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins School of Medicine, Baltimore, Maryland 21287 (United States)

    2015-11-15

    Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery derived input function (AD-IF) and used as a gold standard for ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptakes on the quantitative accuracy of ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent

  2. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
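    The core difficulty, estimators that depend on exponentially rare events, and the importance-sampling remedy can be illustrated in a much simpler static setting: estimating a small Gaussian tail probability by exponential tilting, which for a Gaussian amounts to a mean shift. This is a toy analogue, not the trajectory-space algorithms studied in the paper.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
a, n = 4.0, 100_000

# Naive Monte Carlo: P(X > 4) is about 3e-5, so nearly all samples are wasted.
x = rng.standard_normal(n)
p_naive = np.mean(x > a)

# Exponential tilting: tilting a Gaussian by theta = a shifts its mean to a.
# Sample from N(a, 1) and reweight by the likelihood ratio
#   dP/dQ = exp(-a*x + a^2/2).
z = rng.normal(a, 1.0, n)
w = np.exp(-a * z + 0.5 * a * a)
p_is = np.mean((z > a) * w)

p_exact = 0.5 * erfc(a / sqrt(2.0))     # exact Gaussian tail probability
print(p_naive, p_is, p_exact)
```

    The tilted estimator concentrates samples where the rare event happens and corrects with the likelihood ratio, the same role played by the guiding functions the authors introduce for trajectory ensembles.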

  3. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  4. FUZZY ACCEPTANCE SAMPLING AND CHARACTERISTIC CURVES

    Directory of Open Access Journals (Sweden)

    Ebru Turanoğlu

    2012-02-01

    Full Text Available Acceptance sampling is primarily used for the inspection of incoming or outgoing lots. Acceptance sampling refers to the application of specific sampling plans to a designated lot or sequence of lots. The parameters of acceptance sampling plans are sample sizes and acceptance numbers. In some cases, it may not be possible to define acceptance sampling parameters as crisp values. These parameters can be expressed by linguistic variables. The fuzzy set theory can be successfully used to cope with the vagueness in these linguistic expressions for acceptance sampling. In this paper, the main distributions of acceptance sampling plans are handled with fuzzy parameters and their acceptance probability functions are derived. Then the characteristic curves of acceptance sampling are examined under fuzziness. Illustrative examples are given.
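    With crisp parameters, the acceptance probability (OC) function of a single sampling plan is a binomial sum; a fuzzy, interval-valued sample size then turns each point of the OC curve into a band. A minimal sketch with hypothetical plan parameters (not values from the paper):

```python
from math import comb

def accept_prob(p, n, c):
    """Operating characteristic: probability of accepting a lot with defect
    fraction p under a single sampling plan (sample n items, accept if at
    most c are defective), using the binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 80, 2        # hypothetical crisp plan
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p={p:.2f}  Pa={accept_prob(p, n, c):.3f}")

# A fuzzy (interval-valued) sample size n in [75, 85] gives a band of OC
# values at each p; Pa is decreasing in n, so the band endpoints are easy.
n_lo, n_hi = 75, 85
p = 0.03
band = (accept_prob(p, n_hi, c), accept_prob(p, n_lo, c))
print(band)
```

    The paper derives such acceptance probability functions with fully fuzzy parameters (membership functions rather than plain intervals); the interval band above is the simplest alpha-cut of that idea.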

  5. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening out insignificant random variables and ranking the important ones using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables.
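    A minimal sketch of the screening idea, using one simple mean-response-based measure (the sample correlation between each input and the output) together with an acceptance limit derived from the test of the hypothesis of zero correlation. The five-variable model is hypothetical, and the paper's actual CDF-based measures are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000

# Hypothetical model: y depends strongly on x0, moderately on x1,
# and not at all on x2..x4.
X = rng.normal(size=(N, 5))
y = X[:, 0]**2 + 5.0 * X[:, 0] + X[:, 1] + rng.normal(0.0, 0.1, N)

# Mean-response-based measure: sample correlation between each input and y.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(5)])

# Acceptance limit from the test of the hypothesis r = 0: under the null,
# r is approximately N(0, 1/N), so |r| < z_{0.975}/sqrt(N) is treated as
# statistically indistinguishable from zero.
limit = 1.96 / np.sqrt(N)
significant = np.abs(r) > limit
print(np.round(r, 3), significant)
```

    Note that the same N samples serve all five variables at once, which is the scaling property the abstract highlights: the cost does not grow with the number of random variables.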

  6. Statistical aspects of evolution under natural selection, with implications for the advantage of sexual reproduction.

    Science.gov (United States)

    Crouch, Daniel J M

    2017-10-27

    The prevalence of sexual reproduction remains mysterious, as it poses clear evolutionary drawbacks compared to reproducing asexually. Several possible explanations exist, with one of the most likely being that finite population size causes linkage disequilibria to randomly generate and impede the progress of natural selection, and that these are eroded by recombination via sexual reproduction. Previous investigations have either analysed this phenomenon in detail for small numbers of loci, or performed population simulations for many loci. Here we present a quantitative genetic model for fitness, based on the Price Equation, in order to examine the theoretical consequences of randomly generated linkage disequilibria when there are many loci. In addition, most previous work has been concerned with the long-term consequences of deleterious linkage disequilibria for population fitness. The expected change in mean fitness between consecutive generations, a measure of short-term evolutionary success, is shown under random environmental influences to be related to the autocovariance in mean fitness between the generations, capturing the effects of stochastic forces such as genetic drift. Interaction between genetic drift and natural selection, due to randomly generated linkage disequilibria, is demonstrated to be one possible source of mean fitness autocovariance. This suggests a possible role for sexual reproduction in reducing the negative effects of genetic drift, thereby improving the short-term efficacy of natural selection. Copyright © 2017 Elsevier Ltd. All rights reserved.
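    The sample autocovariance at the centre of this argument is straightforward to estimate from a simulated series. The sketch below checks the standard biased estimator against the known autocovariance of an AR(1) process standing in for mean-fitness fluctuations; the AR(1) model is an illustrative stand-in, not the paper's quantitative genetic model.

```python
import numpy as np

def sample_autocov(x, h):
    """Biased (1/n) sample autocovariance of series x at lag h."""
    n = len(x)
    xc = x - x.mean()
    return float(np.dot(xc[:n - h], xc[h:]) / n)

# Stand-in series: an AR(1) process x_t = phi*x_{t-1} + e_t, whose
# theoretical autocovariance is gamma(h) = sigma^2 * phi^h / (1 - phi^2).
rng = np.random.default_rng(4)
phi, sigma, n = 0.6, 1.0, 200_000
e = rng.normal(0.0, sigma, n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

for h in range(4):
    est = sample_autocov(x, h)
    theory = sigma**2 * phi**h / (1 - phi**2)
    print(h, round(est, 3), round(theory, 3))
```

    A positive lag-1 autocovariance in mean fitness, as the estimator would report for a series like this, is the signature of the drift-selection interaction the abstract describes.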

  7. The Association of Domestic Violence and Social Resources With Functioning in an Adult Trauma-Affected Sample Living in Kurdistan, Northern Iraq.

    Science.gov (United States)

    Kane, Jeremy C; Hall, Brian J; Bolton, Paul; Murray, Laura K; Mohammed Amin Ahmed, Ahmed; Bass, Judith K

    2016-03-27

    Domestic violence (DV) and other experienced trauma types increase the risk for impaired functioning. Access to social resources may provide a buffer to existing risks and allow individuals to continue and build functioning. This cross-sectional study investigated the direct effects of DV and access to social resources (perceived social support, social integration, and frequency of social contact), as well as their potential interactive effects, on daily functioning among 894 male and female trauma survivors who attended primary care clinics in Kurdistan, Iraq in 2009 and 2010. Experiencing DV was not associated with functioning for males (p=.15) or females (p=.60), suggesting that in the context of a trauma-affected sample, the experience of DV may not significantly increase the risk for functional impairment. Greater amounts of perceived social support were associated with less functional impairment among males (p<.01) and females (p<.05); social integration was associated with less functional impairment among males only (p<.01); and frequency of social contact was associated with less functional impairment among females only (p<.05), indicating that the association between social resource type and functioning differed by gender. Social resources had a stronger effect on functioning among men compared to women. Among males who experienced DV, social integration was the only social resource associated with less functional impairment (p<.01); among male trauma survivors who did not experience DV, social support was the only resource associated with less functional impairment (p<.01). Further investigation into these associations is warranted to inform intervention strategies for survivors of DV and other traumas in post-conflict settings. © The Author(s) 2016.

  8. Public-speaking fears in a community sample. Prevalence, impact on functioning, and diagnostic classification.

    Science.gov (United States)

    Stein, M B; Walker, J R; Forde, D R

    1996-02-01

    Recent epidemiologic studies have revealed that social phobia is more prevalent than has been previously believed. An unresolved issue is the extent to which public-speaking fears constitute a recognizable form of social phobia in a community sample and, moreover, to what extent these fears are associated with functional morbidity. To examine the prevalence and impact of public-speaking fears and their relationship to social phobia in a community sample, we conducted a randomized telephone survey of 499 residents of Winnipeg, Manitoba, a medium-sized midwestern metropolitan area. One third of the respondents reported that they had excessive anxiety when they spoke to a large audience. The onset of fears was early (ie, 50%, 75%, and 90% by the ages of 13, 17, and 20 years, respectively). Anxious cognitions about public speaking included the following fears: doing or saying something embarrassing (64%), one's mind going blank (74%), being unable to continue talking (63%), saying foolish things or not making sense (59%), and trembling, shaking, or showing other signs of anxiety (80%). In total, 10% (n = 49) of the respondents reported that public-speaking anxiety had resulted in a marked interference with their work (2%), social life (1%), or education (4%), or had caused them marked distress (8%). Twenty-three persons (5%) had public-speaking anxiety in isolation (ie, without evidence of additional kinds of social fears). These data support the inclusion of severe forms of public-speaking fears within the social phobia construct and, furthermore, suggest that public-speaking anxiety may have a detrimental impact on the lives of many individuals in the community.

  9. The association of domestic violence and social resources with functioning in an adult trauma-affected sample living in Kurdistan, Northern Iraq

    Science.gov (United States)

    Kane, Jeremy C.; Hall, Brian J.; Bolton, Paul; Murray, Laura K.; Ahmed, Ahmed Mohammed Amin; Bass, Judith K.

    2016-01-01

    Ability to function in tasks and activities is an important aspect of daily living. There are factors that increase the risk for impaired functioning, such as experiences of domestic violence (DV) and other trauma types, and factors that provide a buffer to existing risks and allow the individual to continue and build functioning, such as access to social resources. This cross-sectional study investigated the direct effects of DV and access to social resources (perceived social support, social integration, and frequency of social contact), as well as their potential interactive effects, on daily functioning among 894 male and female trauma survivors who attended primary care clinics in Kurdistan, Iraq in 2009 and 2010. Experiencing DV was not associated with functioning for males (p=.15) or females (p=.60), suggesting that in the context of a trauma-affected sample, the experience of DV may not significantly increase the risk for functional impairment. Greater amounts of perceived social support were associated with less functional impairment among males (p<.01) and females (p<.05); social integration was associated with less functional impairment among males only (p<.01); and frequency of social contact was associated with less functional impairment among females only (p<.05), indicating that the association between social resource type and functioning differed by gender. Standardized beta coefficients indicated that social resources had a stronger effect on functioning among men compared to women. Among males who experienced DV, social integration was the only social resource associated with less functional impairment (p<.01); among male trauma survivors who did not experience DV, social support was the only resource associated with less functional impairment (p<.01). 
Further investigation into the association of social resources with functioning and how these differ by gender and DV exposure is warranted to inform intervention strategies for survivors of DV and other

  10. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.
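    For reference, the inverse Weibull CDF F(x) = exp(-(θ/x)^β) contains the special cases named in the abstract: β = 1 gives the inverse exponential and β = 2 the inverse Rayleigh. A small sketch (parameter values are arbitrary):

```python
import math

def inv_weibull_cdf(x, theta=1.0, beta=1.0):
    """CDF of the inverse Weibull distribution, F(x) = exp(-(theta/x)^beta).
    beta=1 gives the inverse exponential, beta=2 the inverse Rayleigh."""
    return math.exp(-((theta / x) ** beta)) if x > 0 else 0.0

# At x = theta the CDF equals exp(-1) for every shape parameter beta:
for beta in (1.0, 2.0, 3.5):
    assert abs(inv_weibull_cdf(1.0, 1.0, beta) - math.exp(-1)) < 1e-12

# Inverse Rayleigh (beta=2) evaluated at twice the scale parameter:
print(round(inv_weibull_cdf(2.0, 1.0, 2.0), 4))
```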

  11. Sensor performance as a function of sampling (d) and optical blur (Fλ)

    NARCIS (Netherlands)

    Bijl, P.; Hogervorst, M.A.

    2009-01-01

    Detector sampling and optical blur are two major factors affecting Target Acquisition (TA) performance with modern EO and IR systems. In order to quantify their relative significance, we simulated five realistic LWIR and MWIR sensors from very under-sampled (detector pitch d >> diffraction blur Fλ)

  12. Comparison of sampling techniques for use in SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.

    1984-01-01

    The Stephen Howe review (reference TR-STH-1) recommended the use of a deterministic generator (DG) sampling technique for sampling the input values to the SYVAC (SYstems Variability Analysis Code) program. This technique was compared with Monte Carlo simple random sampling (MC) by taking a 1000 run case of SYVAC using MC as the reference case. The results show that DG appears relatively inaccurate for most values of consequence when used with 11 sample intervals. If 22 sample intervals are used then DG generates cumulative distribution functions that are statistically similar to the reference distribution. 400 runs of DG or MC are adequate to generate a representative cumulative distribution function. The MC technique appears to perform better than DG for the same number of runs. However, the DG predicts higher doses and in view of the importance of generating data in the high dose region this sampling technique with 22 sample intervals is recommended for use in SYVAC. (author)
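    The DG-versus-MC comparison can be mimicked in miniature: draw a 1000-run Monte Carlo reference, build a deterministic-generator-style sample from the midpoints of k equal-probability intervals, and compare the cumulative distribution functions. The lognormal "dose" model and this particular DG construction are assumptions for illustration, not SYVAC's actual generator:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Reference case: 1000 Monte Carlo simple-random samples of a lognormal "dose".
mc = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

def dg_sample(k, runs):
    """Deterministic-generator sketch: sample at the midpoints of k
    equal-probability intervals, cycled until `runs` values are drawn."""
    mids = (np.arange(k) + 0.5) / k
    u = np.tile(mids, runs // k + 1)[:runs]
    return np.exp(stats.norm.ppf(u))  # lognormal via the inverse CDF

# KS distance between each DG CDF and the Monte Carlo reference CDF:
d11, _ = stats.ks_2samp(mc, dg_sample(11, 400))
d22, _ = stats.ks_2samp(mc, dg_sample(22, 400))
print(d22 <= d11 + 0.05)  # finer stratification tracks the reference at least as well
```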

  13. Executive Cognitive Functioning and Cardiovascular Autonomic Regulation in a Population-Based sample of Working Adults

    Directory of Open Access Journals (Sweden)

    Cecilia Ulrika Dagsdotter Stenfors

    2016-10-01

    Full Text Available Objective: Executive cognitive functioning is essential in private and working life and is sensitive to stress and aging. Cardiovascular (CV) health factors are related to cognitive decline and dementia, but there are relatively few studies of the role of CV autonomic regulation, a key component in stress responses and a risk factor for cardiovascular disease (CVD), in executive processes. An emerging pattern of results from previous studies suggests that different executive processes may be differentially associated with CV autonomic regulation. The aim was thus to study the associations between multiple measures of CV autonomic regulation and measures of different executive cognitive processes. Method: Participants were 119 healthy working adults (79% women) from the Swedish Longitudinal Occupational Survey of Health. Electrocardiogram was sampled for analysis of heart rate variability measures, including the Standard Deviation of NN, here heart beats (SDNN), root of the mean squares of successive differences (RMSSD), high frequency (HF) power band from spectral analyses, and QT variability index (QTVI), a measure of myocardial repolarization patterns. Executive cognitive functioning was measured by 7 neuropsychological tests. The relationships between CV autonomic regulation measures and executive cognitive measures were tested with bivariate and partial correlational analyses, controlling for demographic variables and mental health symptoms. Results: Higher SDNN and RMSSD and lower QTVI were significantly associated with better performance on cognitive tests tapping inhibition, updating, shifting and psychomotor speed. After adjustments for demographic factors, however (age being the greatest confounder), only QTVI was clearly associated with these executive tests. No such associations were seen for working memory capacity. Conclusion: Poorer cardiovascular autonomic regulation in terms of lower SDNN and RMSSD and higher QTVI was associated with poorer

  14. Executive Cognitive Functioning and Cardiovascular Autonomic Regulation in a Population-Based Sample of Working Adults.

    Science.gov (United States)

    Stenfors, Cecilia U D; Hanson, Linda M; Theorell, Töres; Osika, Walter S

    2016-01-01

    Objective: Executive cognitive functioning is essential in private and working life and is sensitive to stress and aging. Cardiovascular (CV) health factors are related to cognitive decline and dementia, but there are relatively few studies of the role of CV autonomic regulation, a key component in stress responses and a risk factor for cardiovascular disease (CVD), in executive processes. An emerging pattern of results from previous studies suggests that different executive processes may be differentially associated with CV autonomic regulation. The aim was thus to study the associations between multiple measures of CV autonomic regulation and measures of different executive cognitive processes. Method: Participants were 119 healthy working adults (79% women), from the Swedish Longitudinal Occupational Survey of Health. Electrocardiogram was sampled for analysis of heart rate variability (HRV) measures, including the Standard Deviation of NN, here heart beats (SDNN), root of the mean squares of successive differences (RMSSD), high frequency (HF) power band from spectral analyses, and QT variability index (QTVI), a measure of myocardial repolarization patterns. Executive cognitive functioning was measured by seven neuropsychological tests. The relationships between CV autonomic regulation measures and executive cognitive measures were tested with bivariate and partial correlational analyses, controlling for demographic variables and mental health symptoms. Results: Higher SDNN and RMSSD and lower QTVI were significantly associated with better performance on cognitive tests tapping inhibition, updating, shifting, and psychomotor speed. After adjustments for demographic factors, however (age being the greatest confounder), only QTVI was clearly associated with these executive tests. No such associations were seen for working memory capacity. Conclusion: Poorer CV autonomic regulation in terms of lower SDNN and RMSSD and higher QTVI was associated with poorer executive
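    SDNN and RMSSD, the two time-domain HRV measures used in this record, are straightforward to compute from a series of NN (normal-to-normal) intervals. The RR values below are made up for illustration:

```python
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of NN (normal-to-normal) intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive NN-interval differences, in ms."""
    d = np.diff(rr_ms)
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative RR-interval series (ms); values are invented.
rr = np.array([800, 810, 790, 805, 795, 820, 780], dtype=float)
print(round(sdnn(rr), 2), round(rmssd(rr), 2))  # -> 13.23 22.55
```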

  15. Open source laboratory sample rotator mixer and shaker

    Directory of Open Access Journals (Sweden)

    Karankumar C. Dhankani

    2017-04-01

    Full Text Available An open-source 3-D printable laboratory sample rotator mixer is developed here in two variants that allow users to opt for the level of functionality, cost saving and associated complexity needed in their laboratories. First, a laboratory sample rotator is designed and demonstrated that can be used for tumbling as well as gentle mixing of samples in a variety of tube sizes by mixing them horizontally, vertically, or any position in between. Changing the mixing angle is fast and convenient and requires no tools. This device is battery powered and can be easily transported to operate in various locations in a lab including desktops, benches, clean hoods, chemical hoods, cold rooms, glove boxes, incubators or biological hoods. Second, an on-board Arduino-based microcontroller is incorporated that adds the functionality of a laboratory sample shaker. These devices can be customized both mechanically and functionally as the user can simply select the operation mode on the switch or alter the code to perform custom experiments. The open source laboratory sample rotator mixer can be built by non-specialists for under US$30 and adding shaking functionality can be done for under $20 more. Thus, these open source devices are technically superior to the proprietary commercial equipment available on the market while saving over 90% of the costs.

  16. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification.

    Science.gov (United States)

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
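    The core building block, a radial basis function interpolator fitted to training embeddings and evaluated at novel points, can be sketched as follows. This uses an isotropic Gaussian kernel with a plain ridge term, not the paper's direction-dependent regularizer or its iterative label estimation:

```python
import numpy as np

def rbf_fit(X, Y, sigma=1.0, reg=1e-8):
    """Fit a Gaussian RBF interpolator mapping samples X (n, d) to
    embedding coordinates Y (n, m); reg is a small ridge term."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + reg * np.eye(len(X)), Y)

def rbf_predict(Xnew, X, W, sigma=1.0):
    """Out-of-sample extension: evaluate the interpolator at novel points."""
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ W

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))   # training samples in the high-dimensional domain
Y = X[:, :2]                   # toy "embedding": first two coordinates
W = rbf_fit(X, Y)
err = float(np.abs(rbf_predict(X, X, W) - Y).max())
print(err < 1e-4)  # the interpolator reproduces the training embeddings
```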

  17. An analysis of workers' tritium concentration in urine samples as a function of time after intake at Korean pressurised heavy water reactors.

    Science.gov (United States)

    Kim, Hee Geun; Kong, Tae Young

    2012-12-01

    In general, internal exposure from tritium at pressurised heavy water reactors (PHWRs) accounts for ∼20-40 % of the total radiation dose. Tritium usually reaches the equilibrium concentration after a few hours inside the body and is then excreted from the body with an effective half-life on the order of 10 d. In this study, tritium metabolism was reviewed using its excretion rate in urine samples of workers at Korean PHWRs. The tritium concentration in workers' urine samples was also measured as a function of time after intake. On the basis of the monitoring results, changes in the tritium concentration inside the body were then analysed.
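    With a single-compartment model and the ~10-day effective half-life quoted above, the expected urine concentration at time t after intake follows simple exponential clearance. The numbers below are illustrative, not the study's measurements:

```python
import math

def tritium_concentration(c0, t_days, half_life_days=10.0):
    """Body tritium concentration a given time after intake, assuming
    single-compartment clearance with a ~10-day effective half-life."""
    return c0 * math.exp(-math.log(2) * t_days / half_life_days)

# After one effective half-life the concentration halves:
print(round(tritium_concentration(100.0, 10.0), 1))   # -> 50.0
# After 30 days (three half-lives) only 12.5% remains:
print(round(tritium_concentration(100.0, 30.0), 1))   # -> 12.5
```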

  18. Analysing designed experiments in distance sampling

    Science.gov (United States)

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
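    A common concrete choice for such a detection function is the half-normal model g(x) = exp(-x²/2σ²), which feeds into the standard line-transect density estimator D = n / (2wL·Pa), where Pa is the average detection probability out to the truncation distance w. The survey numbers below are invented:

```python
import math

def halfnormal_detection(x, sigma):
    """Detection probability at perpendicular distance x (half-normal model)."""
    return math.exp(-x * x / (2 * sigma * sigma))

def density_estimate(n, L, w, sigma):
    """Line-transect density: n detections over total transect length L,
    truncation distance w; Pa computed by midpoint numerical integration."""
    steps = 1000
    pa = sum(halfnormal_detection((i + 0.5) * w / steps, sigma)
             for i in range(steps)) / steps
    return n / (2 * w * L * pa)

# Hypothetical survey: 60 detections, 20 km of transect, w = 100 m, sigma = 50 m.
print(density_estimate(60, 20000.0, 100.0, 50.0))  # animals per square metre
```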

  19. The two-sample problem with induced dependent censorship.

    Science.gov (United States)

    Huang, Y

    1999-12-01

    Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to trial V of the International Breast Cancer Study Group and show that long duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment as compared with the short duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well under moderate sample sizes.

  20. The Relationship Between Trait Procrastination, Internet Use, and Psychological Functioning: Results From a Community Sample of German Adolescents.

    Science.gov (United States)

    Reinecke, Leonard; Meier, Adrian; Beutel, Manfred E; Schemer, Christian; Stark, Birgit; Wölfling, Klaus; Müller, Kai W

    2018-01-01

    Adolescents with a strong tendency for irrational task delay (i.e., high trait procrastination) may be particularly prone to use Internet applications simultaneously to other tasks (e.g., during homework) and in an insufficiently controlled fashion. Both Internet multitasking and insufficiently controlled Internet usage may thus amplify the negative mental health implications that have frequently been associated with trait procrastination. The present study explored this role of Internet multitasking and insufficiently controlled Internet use for the relationship between trait procrastination and impaired psychological functioning in a community sample of N = 818 early and middle adolescents. Results from multiple regression analyses indicate that trait procrastination was positively related to Internet multitasking and insufficiently controlled Internet use. Insufficiently controlled Internet use, but not Internet multitasking, was found to partially statistically mediate the association between trait procrastination and adolescents' psychological functioning (i.e., stress, sleep quality, and relationship satisfaction with parents). The study underlines that adolescents with high levels of trait procrastination may have an increased risk for negative outcomes of insufficiently controlled Internet use.

  1. The Relationship Between Trait Procrastination, Internet Use, and Psychological Functioning: Results From a Community Sample of German Adolescents

    Directory of Open Access Journals (Sweden)

    Leonard Reinecke

    2018-06-01

    Full Text Available Adolescents with a strong tendency for irrational task delay (i.e., high trait procrastination) may be particularly prone to use Internet applications simultaneously to other tasks (e.g., during homework) and in an insufficiently controlled fashion. Both Internet multitasking and insufficiently controlled Internet usage may thus amplify the negative mental health implications that have frequently been associated with trait procrastination. The present study explored this role of Internet multitasking and insufficiently controlled Internet use for the relationship between trait procrastination and impaired psychological functioning in a community sample of N = 818 early and middle adolescents. Results from multiple regression analyses indicate that trait procrastination was positively related to Internet multitasking and insufficiently controlled Internet use. Insufficiently controlled Internet use, but not Internet multitasking, was found to partially statistically mediate the association between trait procrastination and adolescents’ psychological functioning (i.e., stress, sleep quality, and relationship satisfaction with parents). The study underlines that adolescents with high levels of trait procrastination may have an increased risk for negative outcomes of insufficiently controlled Internet use.

  2. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
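    The basis-sampling idea, fitting the spline on far fewer than n basis functions, can be loosely sketched with quantile-placed knots. Unlike the paper's adaptive scheme, this sketch does not use the response values to choose the basis:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(4)
n = 5000
x = np.sort(rng.uniform(0.0, 1.0, n))
truth = np.sin(4 * np.pi * x)
y = truth + rng.normal(0.0, 0.3, n)

# Instead of all n basis functions, fit a spline on a small sampled knot set
# (simple quantile knots here; the paper's scheme adapts knots to the response).
knots = np.quantile(x, np.linspace(0, 1, 22)[1:-1])  # 20 interior knots
spl = LSQUnivariateSpline(x, y, knots)

rmse = float(np.sqrt(np.mean((spl(x) - truth) ** 2)))
print(rmse < 0.1)  # a ~24-coefficient fit recovers the signal well
```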

  3. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  4. No direct association among respiratory function, disease control and family functioning in a sample of Mexican children with intermittent asthma.

    Science.gov (United States)

    Rodriguez-Orozco, Alain Raimundo; Núñez-Tapia, Rosa María; Ramírez-Silva, Armando; Gómez-Alonso, Carlos

    2013-05-15

    Asthma has been linked to family dysfunction and poor control of the disease. This study was conducted to analyze the interactions between the level of intermittent asthma control, family functioning and respiratory function, and between the quality of life of asthmatic patients and that of their caregivers. Children aged 7 to 15 years with intermittent asthma were included. The Asthma Control Test questionnaire, the Pediatric Asthma Quality of Life Questionnaire (PAQLQ), and flowmetry were applied to the children, and the Pediatric Asthma Caregiver's Quality of Life Questionnaire (PACQLQ) and the Family Functioning Perception Test (FF-SIL) were applied to their parents. The most affected areas of family functioning in dysfunctional families were adaptability and permeability. A medium-to-high strength of association was found between the emotional function of parents and the emotional function of children, R2=0.552. The most remarkable associations were between parents' limitation of activities and parents' emotional function (r=0.837), parents' limitation of activities and the child's emotional function (r=0.722), parents' emotional role and limitation of activities (r=0.837), parents' emotional role and the emotional functioning of children with asthma (r=0.743), and the limitation of activities of children with asthma and their emotional function (r=0.870). No direct associations were found among respiratory function, disease control and family functioning in Mexican children with intermittent asthma, and the emotional function of parents and children were associated in both groups.

  5. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately
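    The kind of calculation this work enables is the standard two-sample normal-theory sample size on the transformed AUC scale. A sketch with purely illustrative variance and effect-size numbers (not the TrialNet estimates):

```python
import math
from scipy import stats

def n_per_group(sigma, delta, alpha=0.05, power=0.90):
    """Two-sample normal-theory sample size per group to detect a mean
    difference delta on the (transformed) C-peptide AUC scale, given
    residual SD sigma. All numbers passed below are illustrative."""
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return math.ceil(2 * sigma ** 2 * (za + zb) ** 2 / delta ** 2)

print(n_per_group(sigma=0.5, delta=0.2))  # -> 132
```

Larger residual variation (as reported for the 13-17 age group) directly inflates n through the σ² term.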

  6. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet, repeated 2-hour Mixed Meal Tolerance Tests (MMTT were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC of the C-peptide values. The natural log(x, log(x+1 and square-root (√x transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years and adults (18+ years. The sample size needed to detect a given relative (percentage difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1 and √x transformed values in terms of the original units of measurement (pmol/ml. Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab versus masked placebo. These results provide the information needed to

  7. Efficiency of Executive Function: A Two-Generation Cross-Cultural Comparison of Samples From Hong Kong and the United Kingdom.

    Science.gov (United States)

    Ellefson, Michelle R; Ng, Florrie Fei-Yin; Wang, Qian; Hughes, Claire

    2017-05-01

    Although Asian preschoolers acquire executive functions (EFs) earlier than their Western counterparts, little is known about whether this advantage persists into later childhood and adulthood. To address this gap, in the current study we gave four computerized EF tasks (providing measures of inhibition, working memory, cognitive flexibility, and planning) to a large sample (n = 1,427) of 9- to 16-year-olds and their parents. All participants lived in either the United Kingdom or Hong Kong. Our findings highlight the importance of combining developmental and cultural perspectives and show both similarities and contrasts across sites. Specifically, adults' EF performance did not differ between the two sites; age-related changes in executive function for both the children and the parents appeared to be culturally invariant, as did a modest intergenerational correlation. In contrast, school-age children and young adolescents in Hong Kong outperformed their United Kingdom counterparts on all four EF tasks, a difference consistent with previous findings from preschool children.

  8. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    Science.gov (United States)

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
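The decision lines of a binomial SPRT with the boundary proportions and error rates reported in this record have a closed form (Wald's boundaries). A sketch, assuming per-plant presence/absence classification with a tally threshold of 1; sampling continues while the running count of infested plants stays between the two lines:

```python
import math

def sprt_boundaries(p0, p1, alpha, beta, n):
    """Wald SPRT decision lines for a Bernoulli (presence/absence) count:
    after k plants, treat if the infested count crosses the upper line,
    stop sampling (don't treat) if it crosses the lower line."""
    denom = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
    slope = math.log((1 - p0) / (1 - p1)) / denom
    h_upper = math.log((1 - beta) / alpha) / denom   # cross -> treat
    h_lower = math.log((1 - alpha) / beta) / denom   # cross -> don't treat
    return [(k, slope * k - h_lower, slope * k + h_upper)
            for k in range(1, n + 1)]

# lower/upper boundaries 0.05 and 0.15, alpha = beta = 0.1 (from the record)
lines = sprt_boundaries(0.05, 0.15, 0.1, 0.1, 50)
```

With these parameters the common slope is about 0.092 infested plants per plant sampled, so no terminal decision is possible before roughly the tenth plant.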

  9. Galaxy redshift surveys with sparse sampling

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-01-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, the physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A ''sparse sampling'' method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with the precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
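The claim that regularly-spaced sampling produces no window-function effect can be illustrated in a 1D toy model: the Fourier transform of a regular comb mask is non-zero only at exact harmonics of the comb frequency, so modes below that frequency are not mixed with one another. A sketch; the grid size and spacing are arbitrary:

```python
import numpy as np

n, spacing = 1024, 8
mask = (np.arange(n) % spacing == 0).astype(float)   # regularly spaced tiles
window = np.abs(np.fft.fft(mask)) ** 2               # window-function power

# power is confined to exact multiples of the comb frequency n/spacing,
# so wavenumbers below n/spacing see no window-function convolution
harmonics = np.flatnonzero(window > 1e-6 * window[0])
```

An irregular mask would instead spread `window` over all wavenumbers, which is the calculable window-function effect the record describes for realistic surveys.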

  10. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  11. Research on self-absorption corrections for laboratory γ spectral analysis of soil samples

    International Nuclear Information System (INIS)

    Tian Zining; Jia Mingyan; Li Huibin; Cheng Ziwei; Ju Lingjun; Shen Maoquan; Yang Xiaoyan; Yan Ling; Fen Tiancheng

    2010-01-01

    Based on the calibration results of point sources, the dimensions of the HPGe crystal were characterized. Linear attenuation coefficients and detection efficiencies of various kinds of samples were calculated, and the function F(μ) of the φ75 mm × 25 mm sample geometry was established. A standard surface source was used to simulate sources at different heights in the soil sample, and the function ε(h), which relates detection efficiency to the height of the surface source, was determined. The detection efficiency of a calibration source can then be obtained by integration. The F(μ) functions established for soil samples are consistent with the results of the MCNP calculation code. Several φ75 mm × 25 mm soil samples were measured with the HPGe spectrometer, and the function F(μ) was used to correct for self-absorption. F(μ) functions for soil samples of various dimensions can be calculated with the MCNP code, and the corresponding self-absorption corrections can be made. To verify the calculated results, φ75 mm × 75 mm soil samples were measured. Several φ75 mm × 25 mm soil samples from an atmospheric nuclear test site were also measured with the HPGe spectrometer, and F(μ) was used to correct for self-absorption. The technical method used to correct soil samples from an unknown area is also given. The surface-source correction method greatly improves the measurement accuracy of the gamma spectra, and it will be widely applied in environmental radioactivity investigations. (authors)
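As a hedged illustration of the kind of correction F(μ) performs (not the paper's fitted function, which was built from calibration sources and MCNP for the specific detector geometry), the first-order self-absorption factor for a homogeneous slab of thickness d viewed face-on is (1 − e^(−μd))/(μd):

```python
import math

def self_absorption_factor(mu, d):
    """First-order self-absorption factor for a homogeneous slab of
    thickness d (cm) with linear attenuation coefficient mu (1/cm),
    viewed face-on: the ratio of the detected rate to the rate an
    absorption-free sample of the same activity would give."""
    x = mu * d
    return 1.0 if x == 0 else (1.0 - math.exp(-x)) / x
```

Dividing a measured count rate by this factor (or, equivalently, multiplying the efficiency by it during calibration) removes the bias from attenuation inside the sample; the factor tends to 1 for thin or weakly absorbing samples.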

  12. Bessel beam CARS of axially structured samples

    Science.gov (United States)

    Heuke, Sandro; Zheng, Juanjuan; Akimov, Denis; Heintzmann, Rainer; Schmitt, Michael; Popp, Jürgen

    2015-06-01

    We report on a Bessel beam CARS approach for axial profiling of multi-layer structures. This study presents an experimental implementation for the generation of CARS by Bessel beam excitation using only passive optical elements. Furthermore, an analytical expression is provided describing the anti-Stokes field generated by a homogeneous sample. Based on the concept of coherent transfer functions, the underlying resolving power for axially structured geometries is investigated. It is found that, through the non-linearity of the CARS process in combination with the folded illumination geometry, continuous phase-matching is achieved from homogeneous samples up to spatial sample frequencies of twice the pump electric field wavenumber. The experimental and analytical findings are modeled by implementation of the Debye integral and a scalar Green function approach. Finally, the goal of reconstructing an axially layered sample is demonstrated on the basis of the numerically simulated modulus and phase of the anti-Stokes far-field radiation pattern.

  13. Using ecological momentary assessment to investigate short-term variations in sexual functioning in a sample of peri-menopausal women from Iran.

    Directory of Open Access Journals (Sweden)

    Amir H Pakpour

    Full Text Available The investigation of short-term changes in female sexual functioning has received little attention so far. The aims of the study were to gain empirical knowledge on within-subject and within- and across-variable fluctuations in women's sexual functioning over time, and more specifically to investigate the stability of women's self-reported sexual functioning and the moderating effects of contextual and interpersonal factors. A convenience sample of 206 women was recruited across eight health care clinics in Rasht, Iran. Ecological momentary assessment was used to examine fluctuations of sexual functioning over a six-week period. A shortened version of the Female Sexual Function Index (FSFI) was applied to assess sexual functioning. Self-constructed questions were included to assess relationship satisfaction, partner's sexual performance and stress levels. Mixed linear two-level model analyses revealed a link between orgasm and relationship satisfaction (Beta = 0.125, P = 0.074), with this link varying significantly between women. Analyses further revealed a significant negative association between stress and all six domains of women's sexual functioning. Women not only reported differing levels of stress over the course of the assessment period, but also differed from each other in how much stress they experienced and how much this influenced their sexual response. Orgasm and sexual satisfaction were both significantly associated with all other domains of sexual function (P<0.001). Finally, a link between partner performance and all domains of women's sexual functioning (P<0.001) could be detected. Except for lubrication (P = 0.717), relationship satisfaction had a significant effect on all domains of the sexual response (P<0.001). Overall, our findings support the new group of criteria introduced in the DSM-5, called "associated features", such as partner factors and relationship factors. 
Consideration of these criteria is important and necessary for

  14. Influence of population versus convenience sampling on sample characteristics in studies of cognitive aging.

    Science.gov (United States)

    Brodaty, Henry; Mothakunnel, Annu; de Vel-Palumbo, Melissa; Ames, David; Ellis, Kathryn A; Reppermund, Simone; Kochan, Nicole A; Savage, Greg; Trollor, Julian N; Crawford, John; Sachdev, Perminder S

    2014-01-01

    We examined whether differences in findings of studies examining mild cognitive impairment (MCI) were associated with recruitment methods by comparing sample characteristics in two contemporaneous Australian studies, using population-based and convenience sampling. The Sydney Memory and Aging Study invited participants randomly from the electoral roll in defined geographic areas in Sydney. The Australian Imaging, Biomarkers and Lifestyle Study of Ageing recruited cognitively normal (CN) individuals via media appeals and MCI participants via referrals from clinicians in Melbourne and Perth. Demographic and cognitive variables were harmonized, and similar diagnostic criteria were applied to both samples retrospectively. CN participants recruited via convenience sampling were younger, better educated, more likely to be married and have a family history of dementia, and performed better cognitively than those recruited via population-based sampling. MCI participants recruited via population-based sampling had better memory performance and were less likely to carry the apolipoprotein E ε4 allele than clinically referred participants but did not differ on other demographic variables. A convenience sample of normal controls is thus likely to be younger and better functioning, and a clinically referred MCI group likely to perform worse, than a purportedly random sample. Sampling bias should be considered when interpreting findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
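The core idea of this record, a smooth transfer function estimated from input/output data in the presence of autocorrelated noise, can be sketched with a plain Nadaraya-Watson kernel estimator on simulated data. This omits the joint ARMA estimation and the efficiency gains the paper describes; the transfer function, noise parameters and bandwidth below are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-2.0, 2.0, n)                 # 'input' series
f_true = lambda z: z * np.sin(z)              # assumed smooth transfer function
eps = np.zeros(n)                             # AR(1) noise, phi = 0.6
for t in range(1, n):
    eps[t] = 0.6 * eps[t - 1] + rng.normal(scale=0.3)
y = f_true(x) + eps                           # 'output' series

def nw_estimate(x0, x, y, h=0.2):
    """Nadaraya-Watson kernel regression at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return float(np.sum(w * y) / np.sum(w))

grid = np.linspace(-1.5, 1.5, 31)
f_hat = np.array([nw_estimate(g, x, y) for g in grid])
mean_abs_err = float(np.mean(np.abs(f_hat - f_true(grid))))
```

The paper's point is that ignoring the ARMA structure, as this sketch does, leaves the estimator consistent but less efficient; pre-whitening with the fitted noise model tightens the estimate in finite samples.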

  16. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    Directory of Open Access Journals (Sweden)

    Peter Feist

    2015-02-01

    Full Text Available Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed.

  17. Proteomic Challenges: Sample Preparation Techniques for Microgram-Quantity Protein Analysis from Biological Samples

    Science.gov (United States)

    Feist, Peter; Hummon, Amanda B.

    2015-01-01

    Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed. PMID:25664860

  18. Proteomic challenges: sample preparation techniques for microgram-quantity protein analysis from biological samples.

    Science.gov (United States)

    Feist, Peter; Hummon, Amanda B

    2015-02-05

    Proteins regulate many cellular functions and analyzing the presence and abundance of proteins in biological samples are central focuses in proteomics. The discovery and validation of biomarkers, pathways, and drug targets for various diseases can be accomplished using mass spectrometry-based proteomics. However, with mass-limited samples like tumor biopsies, it can be challenging to obtain sufficient amounts of proteins to generate high-quality mass spectrometric data. Techniques developed for macroscale quantities recover sufficient amounts of protein from milligram quantities of starting material, but sample losses become crippling with these techniques when only microgram amounts of material are available. To combat this challenge, proteomicists have developed micro-scale techniques that are compatible with decreased sample size (100 μg or lower) and still enable excellent proteome coverage. Extraction, contaminant removal, protein quantitation, and sample handling techniques for the microgram protein range are reviewed here, with an emphasis on liquid chromatography and bottom-up mass spectrometry-compatible techniques. Also, a range of biological specimens, including mammalian tissues and model cell culture systems, are discussed.

  19. The depth distribution functions of the natural abundances of carbon isotopes in Alfisols thoroughly sampled by thin-layer sampling, and their relation to the dynamics of organic matter in these soils

    International Nuclear Information System (INIS)

    Becker-Heidmann, P.

    1989-01-01

    The aim of this study was to gain fundamental insight into the relationship between the depth distributions of the natural abundances of the ¹³C and ¹⁴C isotopes and the dynamics of the organic matter in Alfisols. For this purpose, six Alfisols were investigated: four forest soils from Northern Germany, two of them developed in loess and two in glacial loam; one West German loess soil used for fruit-growing; and one agricultural granite-gneiss soil from the semiarid part of India. The soil was sampled as successive horizontal layers of 2 cm depth from an area of 0.5 to 1 m² in size, starting from the organic horizon down to the C horizon or the lower part of the Bt. This kind of complete thin-layer-wise sampling was applied here for the first time. The carbon content and the natural abundances of the ¹³C and ¹⁴C isotopes of each sample were determined. The δ¹³C value was measured by mass spectrometry; a vacuum preparation line with an electronically controlled cooling unit was constructed for this purpose. For the determination of the ¹⁴C content, the sample carbon was converted into benzene, and its activity was measured by liquid scintillation spectrometry. From the combination of the depth distribution functions of the ¹⁴C activity and the δ¹³C value, and with the aid of additional analyses such as the C/N ratio and particle size distribution, a conclusive interpretation of the dynamics of the organic matter in the investigated Alfisols is given. (orig./BBR)

  20. The influence of polycyclic aromatic hydrocarbons on lung function in a representative sample of the Canadian population.

    Science.gov (United States)

    Cakmak, Sabit; Hebbern, Chris; Cakmak, Jasmine D; Dales, Robert E

    2017-09-01

    We investigated the associations between exposure to polycyclic aromatic hydrocarbons (PAHs) and selected respiratory physiologic measures in cycles 2 and 3 of the Canadian Health Measures Survey, a nationally representative population sample. Using generalized linear mixed models, we tested the association between selected PAH metabolites and 1-second forced expiratory volume (FEV₁), forced vital capacity (FVC), and the ratio between the two (FEV₁/FVC) in 3531 people from 6 to 79 years of age. An interquartile change in urinary PAH metabolite concentration was associated with significant decrements in FEV₁ and FVC for eight PAHs: 2-hydroxynaphthalene, 1- and 2-hydroxyphenanthrene, 2-, 3-, and 9-hydroxyfluorene, and 3- and 4-hydroxyphenanthrene. Exposure to PAHs may negatively affect lung function in the Canadian population. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  1. Approximate determination of efficiency for activity measurements of cylindrical samples

    Energy Technology Data Exchange (ETDEWEB)

    Helbig, W [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany); Bothe, M [Nuclear Engineering and Analytics Rossendorf, Inc. (VKTA), Dresden (Germany)

    1997-03-01

    Some calibration samples are necessary with the same geometrical parameters but of different materials, each containing a known, homogeneously distributed activity A. Their densities are measured; their mass absorption coefficients may be unknown. These calibration samples are positioned in the counting geometry, for instance directly on the detector. The efficiency function ε(E) for each sample is obtained by measuring the gamma spectra and evaluating all usable gamma energy peaks. From these ε(E) the commonly valid ε_geom(E) is deduced. For this purpose the functions ε_μ(E) for these samples have to be established. (orig.)
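A common way to turn such peak-wise efficiency measurements into a usable ε(E) curve is a polynomial fit of ln ε against ln E. A sketch on synthetic data; the peak energies and the efficiency shape below are illustrative, not taken from this record:

```python
import numpy as np

# assumed peak energies (keV) and measured full-energy-peak efficiencies
E = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])
eff = 0.9 * E ** -0.85 * np.exp(-20.0 / E)     # synthetic, roughly HPGe-shaped

# classic log-log polynomial efficiency curve: ln(eff) = poly(ln E)
coeffs = np.polyfit(np.log(E), np.log(eff), deg=2)
eps_fit = lambda energy: np.exp(np.polyval(coeffs, np.log(energy)))

rel_err = float(np.max(np.abs(eps_fit(E) / eff - 1)))
```

The fitted `eps_fit` can then be evaluated at any gamma energy between the calibration peaks; extrapolating below the lowest peak is unreliable because the efficiency turns over there.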

  2. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF
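The reconstruction idea, extracting principal components from a measured database and then solving for a new sample's coefficients from only a few measured directions, can be sketched on synthetic 1D curves standing in for BRDF slices. This omits the paper's BRDF-space mapping and its optimized direction sets; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
basis = np.stack([np.sin((i + 1) * np.pi * t) for i in range(5)])

# stand-in "measured database": 100 curves, each a random mix of 5 modes
D = rng.normal(size=(100, 5)) @ basis
mean = D.mean(axis=0)
_, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
pcs = Vt[:5]                                   # descriptive principal components

# reconstruct a held-out curve from only 10 sparse "measurements"
new = rng.normal(size=5) @ basis
idx = rng.choice(200, size=10, replace=False)  # the few sampled directions
coef, *_ = np.linalg.lstsq(pcs[:, idx].T, (new - mean)[idx], rcond=None)
recon = mean + coef @ pcs
max_err = float(np.max(np.abs(recon - new)))
```

Because the database here is exactly low-rank, a handful of samples recovers the curve essentially exactly; for real BRDF data the residual depends on how much variance the retained components capture, which is what motivates optimizing the sampling directions.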

  3. Use of robotic systems for radiochemical sample changing and for analytical sample preparation

    International Nuclear Information System (INIS)

    Delmastro, J.R.; Hartenstein, S.D.; Wade, M.A.

    1989-01-01

    Two uses of the Perkin-Elmer (PE) robotic system will be presented. In the first, a PE robot functions as an automatic sample changer for up to five low energy photon spectrometry (LEPS) detectors operated with a Nuclear Data ND 6700 system. The entire system, including the robot, is controlled by an IBM PC-AT using software written in compiled BASIC. Problems associated with the development of the system and modifications to the robot will be presented. In the second, an evaluation study was performed to assess the abilities of the PE robotic system for performing complex analytical sample preparation procedures. For this study, a robotic system based upon the PE robot and auxiliary devices was constructed and programmed to perform the preparation of final product samples (UO₃) for accountability and impurity specification analyses. These procedures require sample dissolution, dilution, and liquid-liquid extraction steps. The results of an in-depth evaluation of all system components will be presented

  4. Latin hypercube sampling with inequality constraints

    International Nuclear Information System (INIS)

    Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.

    2010-01-01

    In some studies requiring predictive and CPU-time-consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness capabilities. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists in doing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
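The building block cLHS relies on is that permuting the values within one column of an LHS changes the inter-variable ordering while leaving each one-dimensional stratification intact; the constraint-honoring search over such permutations is the paper's contribution and is not reproduced here. A sketch of that invariance:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d: one point per equal-probability stratum in
    each coordinate, with the stratum order independently permuted."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(42)
X = latin_hypercube(8, 2, rng)

# permuting values within a column is the degree of freedom cLHS searches
# over: it reorders pairings between variables but keeps the stratification
X[:, 1] = rng.permutation(X[:, 1])
strata = np.sort(np.floor(X * 8.0), axis=0)   # stratum index of each point
```

After any such permutation, each column still contains exactly one point per stratum, so the sample remains a valid LHS while the permutation can be chosen to satisfy the desired inequality constraints.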

  5. Optically stimulated luminescence emission spectra from feldspars as a function of sample temperature

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.

    1997-01-01

    samples have been measured at various sample temperatures. A small but consistent shift of the peak emission wavelength to shorter wavelengths at higher temperatures is observed. However, the magnitude of this shift is sufficiently small that it will not affect measurements of the thermal activation energy. A systematic difference is observed between the thermal activation energies measured when using different emission wavelengths. In particular, the thermal activation energy of the emission at 400 nm is typically 0.11 eV, while that at 570 nm from the same samples is 0.03-0.05 eV. Several possible

  6. Performance test of SAUNA xenon mobile sampling system

    International Nuclear Information System (INIS)

    Hu Dan; Yang Bin; Yang Weigeng; Jia Huaimao; Wang Shilian; Li Qi; Zhao Yungang; Fan Yuanqing; Chen Zhanying; Chang Yinzhong; Liu Shujiang; Zhang Xinjun; Wang Jun

    2011-01-01

    In this article, the structure and basic functions of the SAUNA noble gas xenon mobile sampling system are introduced. The sampling capability of this system is about 2.2 mL per day, as determined from a 684-h operation. The system can be conveniently transported to designated locations to collect xenon samples for routine or emergency environmental monitoring. (authors)

  7. Quantitative analysis of light elements in thick samples by PIGE

    International Nuclear Information System (INIS)

    Mateus, R.; Jesus, A.P.; Ribeiro, J.P.

    2004-01-01

    PIGE analysis of thick and intermediate samples is usually performed with the help of standards, but this method gives good results only when the standard is very similar to the sample to be analysed. In this work, we present an alternative method for PIGE analysis of light elements in thick samples. This method is based on a code that integrates the nuclear reaction excitation function along the depth of the sample. For the integration procedure the sample is divided into sublayers, defined by the energy steps that were used to measure the excitation function accurately. This excitation function is used as input. Within each sublayer the stopping power cross-sections may be assumed constant. With these two conditions, calculating the contribution of each sublayer to the total yield becomes an easy task. This work presents results for the analysis of lithium, boron, fluorine and sodium in thick samples. For this purpose, excitation functions of the reactions ⁷Li(p,p′γ)⁷Li, ¹⁹F(p,p′γ)¹⁹F, ¹⁰B(p,αγ)⁷Be and ²³Na(p,p′γ)²³Na were employed. Calculated γ-ray yields were compared, at several proton energy values, with experimental yields for thick samples made of inorganic compounds of the referred elements. The agreement is better than 7.5%. Taking into consideration the experimental uncertainty of the measured yields and the errors related to the stopping power values used, this agreement shows that effects such as beam energy straggling, ignored in the calculation, play only a minor role.
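The sublayer integration described here amounts to a thick-target yield proportional to ∫ σ(E)/S(E) dE from the beam energy down the slowing-down path, with σ/S treated as constant within each sublayer defined by the tabulated energy steps. A sketch with made-up σ(E) and S(E) (trapezoid rule over the tabulated grid):

```python
import numpy as np

def thick_target_yield(E0, E_tab, sigma_tab, stopping):
    """Relative thick-target gamma yield: sum sublayer contributions
    sigma(E)/S(E) * dE over the tabulated energy steps below the beam
    energy E0 (trapezoid rule; atomic density and solid angle omitted)."""
    E = E_tab[E_tab <= E0]
    integrand = np.interp(E, E_tab, sigma_tab) / stopping(E)
    return np.trapz(integrand, E)

# illustrative (not measured) excitation function and stopping power
E_tab = np.linspace(0.5, 2.4, 200)            # proton energy, MeV
sigma_tab = E_tab ** 2                         # made-up smooth cross-section
stopping = lambda E: 1.0 + 0.5 * E             # made-up S(E)

y = thick_target_yield(2.0, E_tab, sigma_tab, stopping)
```

Dividing a measured yield by this integral, evaluated with the sample's stopping power, gives the concentration of the light element, which is how the standardless analysis in this record works.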

  8. Microstructure of Thin Films

    Science.gov (United States)

    1990-02-07

    Proceedings, Thin Film Technologies II, 652, 256-263 (1986). B. Schmitt, J.P. Borgogno, G. Albrand and E. Pelletier, "In situ and air index measurements...", SPIE Proceedings, "Optical Components and Systems", 805, 128 (1987). ... autocovariance lengths of less than 0.5 μm indicate that ...

  9. Sexual satisfaction and distress in sexual functioning in a sample of the BDSM community: a comparison study between BDSM and non-BDSM contexts.

    Science.gov (United States)

    Pascoal, Patrícia Monteiro; Cardoso, Daniel; Henriques, Rui

    2015-04-01

    Little attention has been paid to distress in sexual functioning or the sexual satisfaction of people who practice BDSM (Bondage and Discipline, Domination and Submission, Sadism and Masochism). The purpose of this study was to describe sociodemographic characteristics and BDSM practices and to compare BDSM practitioners' sexual outcomes in BDSM and non-BDSM contexts. A convenience sample of 68 respondents completed an online survey that used a participatory research framework. Cronbach's alpha and average inter-item correlations assessed scale reliability, and the Wilcoxon paired-samples test compared the total scores between BDSM and non-BDSM contexts separately for men and women. Open-ended questions about BDSM sexual practices were coded using a preexisting thematic tree. We used self-reported demographic factors, including age at the onset of BDSM interest, age at first BDSM experience, and favorite and most frequent BDSM practices. The Global Measure of Sexual Satisfaction measured the amount of sexual distress, including low desire, arousal, maintaining arousal, premature orgasm, and anorgasmia. The participants had an average age of 33.15 years, were highly educated, and waited 6 years after becoming interested in BDSM to act on their interests. The practices in which the participants most frequently engaged did not coincide with the practices in which they were most interested and were overwhelmingly conducted at home. Comparing genders on distress in sexual functioning in BDSM and non-BDSM contexts, we found that, with the exception of maintaining arousal, distress was statistically the same in both contexts for women. For men, distress in sexual functioning, with the exception of premature orgasm and anorgasmia, was statistically significantly lower in the BDSM context. 
There were no differences in sexual satisfaction between BDSM and non-BDSM contexts for men or women.

  10. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
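The Heckman estimator this record builds on can be sketched on simulated data. To keep the sketch short, the selection-equation index is taken as known (the classical two-stage estimator fits a probit for it first), and all coefficients and the sample size below are assumed:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
nd = NormalDist()
n = 20000

x = rng.normal(size=n)                      # regressor in the outcome equation
w = rng.normal(size=n)                      # selection index (taken as known)
u = rng.normal(size=n)                      # selection-equation error
e = 0.7 * u + rng.normal(scale=(1 - 0.49) ** 0.5, size=n)  # corr(e, u) = 0.7
y = 1.0 + 2.0 * x + e
observed = (w + u) > 0                      # y is seen only when selected

xs, ws, ys = x[observed], w[observed], y[observed]
ones = np.ones(xs.size)

# naive OLS on the selected sample: intercept absorbs E[e | selected] > 0
naive, *_ = np.linalg.lstsq(np.column_stack([ones, xs]), ys, rcond=None)

# Heckman-style correction: add the inverse Mills ratio lambda(w) as regressor
lam = np.array([nd.pdf(v) / nd.cdf(v) for v in ws])
corrected, *_ = np.linalg.lstsq(np.column_stack([ones, xs, lam]), ys, rcond=None)
```

The corrected fit recovers the true intercept and slope, and the coefficient on the Mills-ratio term estimates the error correlation scale; the record's point is that this correction inherits the fragility of the assumed normal model, motivating the robustified version.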

  11. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  12. Active Fault Diagnosis in Sampled-data Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2015-01-01

    The focus in this paper is on active fault diagnosis (AFD) in closed-loop sampled-data systems. Applying the same AFD architecture as for continuous-time systems does not directly result in the same set of closed-loop matrix transfer functions. For continuous-time systems, the LFT (linear fractional transformation) structure in the connection between the parametric faults and the matrix transfer function (also known as the fault signature matrix) applied for AFD is not directly preserved for sampled-data systems. As a consequence, the AFD methods cannot be applied directly to sampled-data systems. Two methods are considered in this paper to handle the fault signature matrix for sampled-data systems such that standard AFD methods can be applied. The first method is based on a discretization of the system such that the LFT structure is preserved, resulting in the same LFT structure in the fault...

  13. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  14. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  15. Sample preparation and EFTEM of Meat Samples for Nanoparticle Analysis in Food

    International Nuclear Information System (INIS)

    Lari, L; Dudkiewicz, A

    2014-01-01

    Nanoparticles are used in industry for personal care products and the preparation of food. In the latter application, their functions include the prevention of microbes' growth and the increase of the food's nutritional value and sensory quality. EU regulations require a risk assessment of the nanoparticles used in foods and food contact materials before the products can reach the market. However, the limited availability of validated analytical methodologies for detection and characterisation of nanoparticles in food hampers appropriate risk assessment. As part of research on the evaluation of methods for screening and quantification of Ag nanoparticles in meat, we have tested a new TEM sample preparation alternative to resin embedding and cryo-sectioning. Energy-filtered TEM analysis was applied to evaluate the thickness and uniformity of thin meat layers acquired at increasing input of the sample, demonstrating that the protocols used ensured good stability under the electron beam, reliable sample concentration and reproducibility.

  16. Sample preparation and EFTEM of Meat Samples for Nanoparticle Analysis in Food

    Science.gov (United States)

    Lari, L.; Dudkiewicz, A.

    2014-06-01

    Nanoparticles are used in industry for personal care products and the preparation of food. In the latter application, their functions include the prevention of microbes' growth and the increase of the food's nutritional value and sensory quality. EU regulations require a risk assessment of the nanoparticles used in foods and food contact materials before the products can reach the market. However, the limited availability of validated analytical methodologies for detection and characterisation of nanoparticles in food hampers appropriate risk assessment. As part of research on the evaluation of methods for screening and quantification of Ag nanoparticles in meat, we have tested a new TEM sample preparation alternative to resin embedding and cryo-sectioning. Energy-filtered TEM analysis was applied to evaluate the thickness and uniformity of thin meat layers acquired at increasing input of the sample, demonstrating that the protocols used ensured good stability under the electron beam, reliable sample concentration and reproducibility.

  17. The quantitative regional cerebral blood flow measurement with autoradiography method using 123I-IMP SPECT. Evaluation of arterialized venous blood sampling as a substitute for arterial blood sampling

    International Nuclear Information System (INIS)

    Ohnishi, Takashi; Yano, Takao; Nakano, Shinichi; Jinnouchi, Seishi; Nagamachi, Shigeki; Flores, L. II; Nakahara, Hiroshi; Watanabe, Katsushi.

    1996-01-01

    The purpose of this study is to validate calibration of a standard input function in the autoradiography (ARG) method by one-point venous blood sampling as a substitute for one-point arterial blood sampling. Ten and 20 minutes after intravenous constant infusion of 123I-IMP, arterialized venous blood sampling from a dorsal hand vein was performed on 15 patients with ischemic cerebrovascular disease, and arterial blood sampling from the radial artery was performed 10 min after 123I-IMP infusion. The mean difference rates of the integrated input function between the standard input function calibrated by arterial blood sampling at 10 min and that calibrated by venous blood sampling were 4.1±3% and 9.3±5.4% at 10 and 20 min after 123I-IMP infusion, respectively. The ratio of venous blood radioactivity to arterial blood radioactivity at 10 min after 123I-IMP infusion was 0.96±0.02. There was an excellent correlation between ARG-method CBF values obtained by arterial blood sampling at 10 min and those obtained by arterialized venous blood sampling at 10 min. In conclusion, arterialized venous blood sampling from a dorsal hand vein can substitute for arterial sampling. The optimal time for arterialized venous blood sampling is 10 min after 123I-IMP infusion. (author)

  18. Improved explosive collection and detection with rationally assembled surface sampling materials

    Energy Technology Data Exchange (ETDEWEB)

    Chouyyok, Wilaiwan; Bays, J. Timothy; Gerasimenko, Aleksandr A.; Cinson, Anthony D.; Ewing, Robert G.; Atkinson, David A.; Addleman, R. Shane

    2016-01-01

    Sampling and detection of trace explosives is a key analytical process in modern transportation safety. In this work we have explored some of the fundamental analytical processes for collection and detection of trace-level explosives on surfaces with the most widely utilized system, thermal desorption ion mobility spectrometry (IMS). The performance of the standard muslin swipe material was compared with chemically modified fiberglass cloth. The fiberglass surface was modified to include phenyl functional groups. When compared to standard muslin, the phenyl-functionalized fiberglass sampling material showed better analyte release from the sampling material as well as improved response and repeatability over multiple uses of the same swipe. The improved sample release of the functionalized fiberglass swipes resulted in a significant increase in sensitivity. Various physical and chemical properties were systematically explored to determine optimal performance. The results herein have relevance to improving the detection of other explosive compounds and potentially to a wide range of other chemical sampling and field detection challenges.

  19. Interactive Sample Book (ISB)

    DEFF Research Database (Denmark)

    Heimdal, Elisabeth Jacobsen; Lenau, Torben Anker; Guglielmi, Michel

    2009-01-01

    ... supervisor Torben A. Lenau. Inspiration to use smart materials: interactive textiles are still quite an unknown phenomenon to many, and it is thus often difficult to communicate what potential lies within these materials. This is why the ISB project was started as a practice-based research project... and senses in relation to integrated decoration and function, primarily for indoor applications. The result of the project will be a number of interactive textiles, gathered in an interactive sample book (ISB), in a similar way to the sample books of wallpapers one can take home from the shop and choose... from. In other words, it is a kind of display material which can illustrate in a simple manner how different techniques and smart materials work. The sample book should display a number of possibilities where sensor technology, smart materials and textiles are mixed to such an extent that the textile...

  20. Digital Microfluidics Sample Analyzer

    Science.gov (United States)

    Pollack, Michael G.; Srinivasan, Vijay; Eckhardt, Allen; Paik, Philip Y.; Sudarsan, Arjun; Shenderov, Alex; Hua, Zhishan; Pamula, Vamsee K.

    2010-01-01

    Three innovations address the needs of the medical world with regard to microfluidic manipulation and testing of physiological samples in ways that can benefit point-of-care needs for patients such as premature infants, for which drawing of blood for continuous tests can be life-threatening in their own right, and for expedited results. A chip with sample injection elements, reservoirs (and waste), droplet formation structures, fluidic pathways, mixing areas, and optical detection sites, was fabricated to test the various components of the microfluidic platform, both individually and in integrated fashion. The droplet control system permits a user to control droplet microactuator system functions, such as droplet operations and detector operations. Also, the programming system allows a user to develop software routines for controlling droplet microactuator system functions, such as droplet operations and detector operations. A chip is incorporated into the system with a controller, a detector, input and output devices, and software. A novel filler fluid formulation is used for the transport of droplets with high protein concentrations. Novel assemblies for detection of photons from an on-chip droplet are present, as well as novel systems for conducting various assays, such as immunoassays and PCR (polymerase chain reaction). The lab-on-a-chip (a.k.a., lab-on-a-printed-circuit board) processes physiological samples and comprises a system for automated, multi-analyte measurements using sub-microliter samples of human serum. The invention also relates to a diagnostic chip and system including the chip that performs many of the routine operations of a central lab-based chemistry analyzer, integrating, for example, colorimetric assays (e.g., for proteins), chemiluminescence/fluorescence assays (e.g., for enzymes, electrolytes, and gases), and/or conductometric assays (e.g., for hematocrit on plasma and whole blood) on a single chip platform.

  1. The cognition battery of the NIH toolbox for assessment of neurological and behavioral function: validation in an adult sample.

    Science.gov (United States)

    Weintraub, Sandra; Dikmen, Sureyya S; Heaton, Robert K; Tulsky, David S; Zelazo, Philip David; Slotkin, Jerry; Carlozzi, Noelle E; Bauer, Patricia J; Wallner-Allen, Kathleen; Fox, Nathan; Havlik, Richard; Beaumont, Jennifer L; Mungas, Dan; Manly, Jennifer J; Moy, Claudia; Conway, Kevin; Edwards, Emmeline; Nowinski, Cindy J; Gershon, Richard

    2014-07-01

    This study introduces a special series on validity studies of the Cognition Battery (CB) from the U.S. National Institutes of Health Toolbox for the Assessment of Neurological and Behavioral Function (NIHTB) (Gershon, Wagster et al., 2013) in an adult sample. This first study in the series describes the sample, each of the seven instruments in the NIHTB-CB briefly, and the general approach to data analysis. Data are provided on test-retest reliability and practice effects, and raw scores (mean, standard deviation, range) are presented for each instrument and the gold standard instruments used to measure construct validity. Accompanying papers provide details on each instrument, including information about instrument development, psychometric properties, age and education effects on performance, and convergent and discriminant construct validity. One study in the series is devoted to a factor analysis of the NIHTB-CB in adults and another describes the psychometric properties of three composite scores derived from the individual measures representing fluid and crystallized abilities and their combination. The NIHTB-CB is designed to provide a brief, comprehensive, common set of measures to allow comparisons among disparate studies and to improve scientific communication.

  2. BRDF of Salt Pan Regolith Samples

    Science.gov (United States)

    Georgiev, Georgi T.; Gatebe, Charles K.; Butler, James J.; King, Michael D.

    2008-01-01

    Laboratory Bi-directional Reflectance Distribution Function (BRDF) measurements of salt pan regolith samples are presented in this study in an effort to understand the role of spatial and spectral variability of the natural biome. The samples were obtained from Etosha Pan, Namibia (19.20 deg S, 15.93 deg E, alt. 1100 m). It is shown how the BRDF depends on the measurement geometry (incident and scatter angles) and on the sample particle sizes. As a demonstration of the application of the results, airborne BRDF measurements acquired with NASA's Cloud Absorption Radiometer (CAR) over the same general site where the regolith samples were collected are compared with the laboratory results. Good agreement between laboratory-measured and field-measured BRDF is reported.

  3. Data Transformation Functions for Expanded Search Spaces in Geographic Sample Supervised Segment Generation

    OpenAIRE

    Christoff Fourie; Elisabeth Schoepfer

    2014-01-01

    Sample supervised image analysis, in particular sample supervised segment generation, shows promise as a methodological avenue applicable within Geographic Object-Based Image Analysis (GEOBIA). Segmentation is acknowledged as a constituent component within typically expansive image analysis processes. A general extension to the basic formulation of an empirical discrepancy measure directed segmentation algorithm parameter tuning approach is proposed. An expanded search landscape is defined, c...

  4. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle transport random-walk system that treats the emission point as a sampling station is built. Then, an adaptive sampling scheme is derived for a better solution with the achieved information. The main advantage of the adaptive scheme is to choose the most suitable sampling number from the emission-point station to obtain the minimum value of the total cost in the process of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define the importance function from the particle state and to ensure that the sampling number of the emission particle is proportional to the importance function. The numerical results show that the adaptive scheme with the emission point as a station could overcome the difficulty of underestimating the result to some degree, and the adaptive importance sampling method gives satisfactory results as well. (authors)
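
    The importance-sampling principle invoked in this record (make the sampling density proportional to an importance function, then reweight each draw) can be sketched generically. The integrand and proposal below are illustrative toy choices, not the paper's transport model:

```python
import math
import random

def importance_sample(f, proposal_pdf, proposal_draw, n=100_000, seed=7):
    """Estimate the integral of f over [0, 1] by drawing from a proposal
    density and reweighting each sample by 1 / proposal_pdf(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = proposal_draw(rng)
        total += f(x) / proposal_pdf(x)   # unbiased weighted contribution
    return total / n

# Toy example: estimate I = integral of x^2 on [0, 1] (exactly 1/3) with a
# proposal p(x) = 2x that concentrates samples where the integrand is large.
estimate = importance_sample(
    f=lambda x: x ** 2,
    proposal_pdf=lambda x: 2.0 * x,
    proposal_draw=lambda rng: math.sqrt(rng.random()),  # inverse CDF of p(x) = 2x
)
```

    Because the proposal tracks the integrand, the weighted samples have much lower variance than uniform draws would give, which is the same mechanism the abstract describes for emission-point sampling.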

  5. Multi-frequency direct sampling method in inverse scattering problem

    Science.gov (United States)

    Kang, Sangwoo; Lambert, Marc; Park, Won-Kwang

    2017-10-01

    We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
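
    The order-zero Bessel behavior of the imaging function can be probed numerically. This stdlib-only sketch evaluates J0 through its standard integral representation (the function name is ours, not the paper's):

```python
import math

def bessel_j0(x, n=2000):
    """J0(x) = (1/pi) * integral over [0, pi] of cos(x * sin(t)) dt,
    evaluated with the composite midpoint rule (very accurate here,
    since the integrand's odd derivatives vanish at both endpoints)."""
    h = math.pi / n
    s = sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n))
    return s * h / math.pi
```

    J0 equals 1 at the origin and oscillates with an envelope decaying like sqrt(2 / (pi * x)), which matches the reported picture of an indicator that peaks at the target and decays away from it.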

  6. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    Science.gov (United States)

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.
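
    The abstract does not spell out the CSM-based statistics themselves, but their kernel-density building block can be sketched with the standard library alone. The Silverman rule-of-thumb bandwidth is our default choice here, not necessarily the paper's:

```python
import math

def gaussian_kde(samples, bandwidth=None):
    """Return a Gaussian kernel density estimate of the sample's PDF."""
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
        bandwidth = 1.06 * sd * n ** -0.2   # Silverman's rule of thumb
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                   for xi in samples) / norm
    return pdf
```

    A smoothed PDF like this can then be compared between conditions with a shape distance, which is the role KDE plays in the statistics described above; it stays usable at small sample sizes where raw skewness and kurtosis estimates become noisy.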

  7. Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, including the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  8. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
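
    The block-level step described here (fit PSNR as a quadratic in bit-depth, then zero its derivative) reduces to a one-line optimum. The coefficients and bit-depth bounds below are hypothetical placeholders for values a fit would supply, not figures from the paper:

```python
def optimal_bit_depth(a, b, c, d_min=2, d_max=12):
    """With PSNR(d) = a*d**2 + b*d + c and a < 0 (concave fit),
    setting dPSNR/dd = 2*a*d + b = 0 gives d* = -b / (2*a)."""
    if a >= 0:
        raise ValueError("quadratic PSNR model must be concave (a < 0)")
    d_star = -b / (2.0 * a)
    # clamp to the admissible integer bit-depth range
    return min(d_max, max(d_min, round(d_star)))

# e.g. a fitted model PSNR(d) = -0.4*d**2 + 6.4*d + 18 peaks at d* = 8 bits
```

    The clamping step reflects that, in practice, the unconstrained stationary point must be rounded and kept inside the hardware's supported quantization range.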

  9. Volume Ray Casting with Peak Finding and Differential Sampling

    KAUST Repository

    Knoll, A.

    2009-11-01

    Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions. © 2009 IEEE.

  10. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    Science.gov (United States)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases of less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or a relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth, well-sampled, symmetric Line Spread Functions, they are very sensitive to asymmetry and high-spatial-frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect, and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for

  11. Waste sampling and characterization facility (WSCF)

    International Nuclear Information System (INIS)

    1994-10-01

    The Waste Sampling and Characterization Facility (WSCF) complex consists of the main structure (WSCF) and four support structures located in the 600 Area of the Hanford site, east of the 200 West Area and south of the Hanford Meteorology Station. WSCF is to be used for low-level sample analysis, less than 2 mRem. The laboratory features state-of-the-art analytical and low-level radiological counting equipment for gaseous, soil, and liquid sample analysis. In particular, this facility is to be used to perform Resource Conservation and Recovery Act (RCRA) of 1976 and Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 sample analysis in accordance with U.S. Environmental Protection Agency protocols, room air and stack monitoring sample analysis, waste water treatment process support, and contractor laboratory quality assurance checks. The samples to be analyzed contain very low concentrations of radioisotopes. The main reason that WSCF is considered a nuclear facility is the storage of samples at the facility. This Maintenance Implementation Plan has been developed for maintenance functions associated with the WSCF

  12. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

    In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and its reliability function under the General Entropy loss function for Type II censored samples. The proposed estimators are compared with the corresponding Bayes estimators obtained under the Squared Error loss function and with maximum likelihood estimators in terms of their simulated risks (average loss over the sample space).
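
    For reference, the General Entropy loss function admits a standard closed-form Bayes estimator, theta_hat = (E[theta**(-c)])**(-1/c), where the expectation is taken over the posterior. A Monte Carlo version over posterior draws is only a few lines; the function name and the demo draws are illustrative, not from the paper:

```python
def gelf_bayes_estimate(posterior_draws, c):
    """Bayes estimate under general entropy loss: (E[theta**(-c)])**(-1/c),
    with the expectation approximated by an average over posterior draws.
    Setting c = -1 recovers the posterior mean (the squared-error estimator)."""
    if c == 0:
        raise ValueError("the shape parameter c must be nonzero")
    n = len(posterior_draws)
    moment = sum(t ** (-c) for t in posterior_draws) / n
    return moment ** (-1.0 / c)
```

    In practice the posterior draws would come from the Type II censored likelihood and chosen prior; the sketch only shows how the loss function turns those draws into a point estimate.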

  13. SKATE: a docking program that decouples systematic sampling from scoring.

    Science.gov (United States)

    Feng, Jianwen A; Marshall, Garland R

    2010-11-15

    SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CCDC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CCDC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.

  14. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation; thus the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers, while for sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point goes away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude; this further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points go away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  15. Magnetorheological measurements with consideration for the internal magnetic field in samples

    Energy Technology Data Exchange (ETDEWEB)

    Kordonski, W; Gorodkin, S [QED Technologies International, 1040 University Ave., Rochester, NY 14607 (United States)], E-mail: kordonski@qedmrf.com

    2009-02-01

    The magnetically induced yield stress in a sample of a suspension of magnetic particles is associated with the formation of a field-oriented structure, whose strength depends on the degree of particle magnetization. This factor is largely defined by the actual magnetic field strength in the sample. At the same time, it is common practice to present and analyze magnetorheological characteristics as a function of the applied magnetic field. Uncertainty of the influence function in magnetorheology hampers interpretation of data obtained with different measurement configurations. It is shown in this paper that the rheological response of a magnetorheological fluid to the applied magnetic field is defined by the sample's actual (internal) magnetic field intensity, which, in turn, depends on sample geometry and field orientation, all other factors being equal. Utilization of the sample's actual field as the influence function in magnetorheology allows proper interpretation of data obtained with different measuring system configurations. Optimization of the actual internal field is a promising approach in the design of energy-efficient magnetorheological devices.

  16. SIMS analysis using a new novel sample stage

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool samples to -150 deg. C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam is well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons.

  17. SIMS analysis using a new novel sample stage

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool samples to -150 deg. C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam is well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons.

  18. Improved survival prediction from lung function data in a large population sample

    DEFF Research Database (Denmark)

    Miller, M.R.; Pedersen, O.F.; Lange, P.

    2008-01-01

    Studies relating lung function to survival commonly express lung function impairment as a percent of predicted but this retains age, height and sex bias. We have studied alternative methods of expressing forced expiratory volume in 1 s (FEV1) for predicting all cause and airway related lung disease.......1 respectively. Cut levels of lung function were used to categorise impairment and the HR for multivariate prediction of all cause and airway related lung disease mortality were 10 and 2044 respectively for the worst category of FEV1/ht(2) compared to 5 and 194 respectively for the worst category of FEV1PP....... In univariate predictions of all cause mortality the HR for FEV1/ht(2) categories was 2-4 times higher than those for FEV1PP and 3-10 times higher for airway related lung disease mortality. We conclude that FEV1/ht(2) is superior to FEV1PP for predicting survival in a general population and this method...

  19. On the fairness of the main galaxy sample of SDSS

    International Nuclear Information System (INIS)

    Meng Kelai; Pan Jun; Feng Longlong; Ma Bin

    2011-01-01

    Flux-limited and volume-limited galaxy samples are constructed from the Sloan Digital Sky Survey (SDSS) data releases DR4, DR6 and DR7 for statistical analysis. The two-point correlation functions ξ(s), the monopole of the three-point correlation functions ζ_0, the projected two-point correlation function w_p and the pairwise velocity dispersion σ_12 are measured to test if galaxy samples are fair for these statistics. We find that with the increment of sky coverage of subsequent data releases in SDSS, ξ(s) of the flux-limited sample is extremely robust and insensitive to local structures at low redshift. However, for volume-limited samples fainter than L* at large scales s ≳ 10 h^-1 Mpc, the deviation of ξ(s) from different SDSS data releases (DR7, DR6 and DR4) increases with the increment of absolute magnitude. The case of ζ_0(s) is similar to that of ξ(s). In the weakly nonlinear regime, there is no agreement between ζ_0 of different data releases in all luminosity bins. Furthermore, w_p of volume-limited samples of DR7 in luminosity bins fainter than -M_r,0.1 = [18.5, 19.5] are significantly larger, and σ_12 of the two faintest volume-limited samples of DR7 display a very different scale dependence than results from DR4 and DR6. Our findings call for caution in understanding clustering analysis results of SDSS faint galaxy samples and higher order statistics of SDSS volume-limited samples in the weakly nonlinear regime. The first zero-crossing points of ξ(s) from volume-limited samples are also investigated and discussed.

  20. Effects of electroconvulsive therapy on amygdala function in major depression - a longitudinal functional magnetic resonance imaging study.

    Science.gov (United States)

    Redlich, R; Bürger, C; Dohm, K; Grotegerd, D; Opel, N; Zaremba, D; Meinert, S; Förster, K; Repple, J; Schnelle, R; Wagenknecht, C; Zavorotnyy, M; Heindel, W; Kugel, H; Gerbaulet, M; Alferink, J; Arolt, V; Zwanzger, P; Dannlowski, U

    2017-09-01

    Electroconvulsive therapy (ECT) is one of the most effective treatments for severe depression. However, little is known regarding the brain functional processes mediating ECT effects. In a non-randomized prospective study, functional magnetic resonance imaging data during the automatic processing of subliminally presented emotional faces were obtained twice, about 6 weeks apart, in patients with major depressive disorder (MDD) before and after treatment with ECT (ECT, n = 24). Additionally, a control sample of MDD patients treated solely with pharmacotherapy (MED, n = 23) and a healthy control sample (HC, n = 22) were obtained. Before therapy, both patient groups equally showed elevated amygdala reactivity to sad faces compared with HC. After treatment, a decrease in amygdala activity to negative stimuli was discerned in both patient samples, indicating a normalization of amygdala function and suggesting mechanisms potentially unspecific for ECT. Moreover, a decrease in amygdala activity to sad faces was associated with symptomatic improvements in the ECT sample (r_Spearman = -0.48, p = 0.044), and by tendency also in the MED sample (r_Spearman = -0.38, p = 0.098). However, we did not find any significant association between pre-treatment amygdala function to emotional stimuli and individual symptom improvement, neither for the ECT sample nor for the MED sample. In sum, the present study provides the first results regarding functional changes in emotion processing due to ECT treatment using a longitudinal design, thus validating and extending our knowledge gained from previous treatment studies. A limitation was that ECT patients received concurrent medication treatment.

  1. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  2. Sinc-function based Network

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1998-01-01

    The purpose of this paper is to describe a neural network (SNN) that is based on Shannon's ideas of reconstructing a real continuous function from its samples. The basic function used in this network is the Sinc function. Two learning algorithms are described. A simple one called IM...

  3. A Unimodal Model for Double Observer Distance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Earl F Becker

    Full Text Available Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the two observers are independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
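
    The two-piece normal idea can be sketched directly. The split-normal form below (a shared apex mu with a different scale on each side) is an assumed parameterization for illustration; the paper's covariate structure is not reproduced here.

```python
import numpy as np

# Assumed split-normal detection curve: one apex at mu, scale s1 below
# the apex and s2 above it; parameter values are illustrative.
def two_piece_normal(x, mu, s1, s2):
    x = np.asarray(x, dtype=float)
    s = np.where(x < mu, s1, s2)
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

x = np.linspace(0.0, 400.0, 401)               # perpendicular distance (m)
g = two_piece_normal(x, mu=60.0, s1=30.0, s2=120.0)
apex = x[np.argmax(g)]                         # single apex, at mu
```

    With covariates acting on s1 and s2 but not on mu, the apex stays put, which is what makes such a form compatible with the point independence assumption.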

  4. Design compliance matrix waste sample container filling system for nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This design compliance matrix document provides specific design related functional characteristics, constraints, and requirements for the container filling system that is part of the nested, fixed-depth sampling system. This document addresses performance, external interfaces, ALARA, Authorization Basis, environmental and design code requirements for the container filling system. The container filling system will interface with the waste stream from the fluidic pumping channels of the nested, fixed-depth sampling system and will fill containers with waste that meet the Resource Conservation and Recovery Act (RCRA) criteria for waste that contains volatile and semi-volatile organic materials. The specifications for the nested, fixed-depth sampling system are described in a Level 2 Specification document (HNF-3483, Rev. 1). The basis for this design compliance matrix document is the Tank Waste Remediation System (TWRS) desk instructions for design Compliance matrix documents (PI-CP-008-00, Rev. 0)

  5. Experimental study of glass sampling devices

    International Nuclear Information System (INIS)

    Jouan, A.; Moncouyoux, J.P.; Meyere, A.

    1992-01-01

    Two high-level liquid waste containment glass sampling systems have been designed and built. The first device fits entirely inside a standard glass storage canister, and may thus be used in facilities not initially designed for this function. It has been tested successfully in the nonradioactive prototype unit at Marcoule. The work primarily covered the design and construction of an articulated arm supporting the sampling vessel, and the mechanisms necessary for filling the vessel and recovering the sample. System actuation and operation are fully automatic, and the resulting sample is representative of the glass melt. Implementation of the device is delicate however, and its reliability is estimated at about 75%. A second device was designed specifically for new vitrification facilities. It is installed directly on the glass melting furnace, and meets process operating and quality control requirements. Tests conducted at the Marcoule prototype vitrification facility demonstrated the feasibility of the system. Special attention was given to the sampling vessel transfer mechanisms, with two filling and controlled sample cooling options

  6. Carboxylic acid-functionalized SBA-15 nanorods for gemcitabine delivery

    International Nuclear Information System (INIS)

    Bahrami, Zohreh; Badiei, Alireza; Ziarani, Ghodsi Mohammadi

    2015-01-01

    The present study deals with the functionalization of mesoporous silica nanoparticles as drug delivery systems. Mono-, di-, and tri-amino-functionalized SBA-15 nanorods were synthesized by a post-grafting method using (3-aminopropyl)triethoxysilane, N-(2-aminoethyl)-3-aminopropyltrimethoxysilane, and 3-[2-(2-aminoethylamino)ethylamino]propyltrimethoxysilane, respectively. The carboxylic acid derivatives of the amino-functionalized samples were obtained using succinic anhydride. The obtained modified materials were investigated as matrices for delivery of the anticancer drug gemcitabine. The prepared samples were characterized by SAXS, N2 adsorption/desorption, SEM, transmission electron microscopy, thermogravimetric analysis, and FTIR and UV spectroscopies. The adsorption and release properties of all samples were studied. It was revealed that the adsorption capacity and release behavior of gemcitabine were highly dependent on the type of the introduced functional groups. The carboxylic acid-modified samples have higher loading content, due to the strong interaction with gemcitabine. The maximum content of deposited drug in the modified SBA-15 nanorods is close to 40 wt%. It was found that the surface functionalization leads to a significant decrease of the drug release rate. The carboxylic acid-functionalized samples have a slower release rate than the amino-functionalized samples.

  7. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows one to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
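
    The allocation step that such a search repeatedly evaluates can be sketched in the univariate case with Neyman allocation under a coefficient-of-variation constraint. This is a simplification of the package's multivariate Bethel allocation, and all population figures below are invented.

```python
import numpy as np

# Univariate Neyman allocation under a CV constraint (illustrative numbers;
# SamplingStrata itself solves a multivariate Bethel allocation).
N = np.array([5000, 3000, 2000])     # stratum population sizes
S = np.array([1.2, 0.8, 2.5])        # stratum std. devs of the target Y
Ybar = 10.0                          # population mean of Y
cv_target = 0.02                     # required coefficient of variation

V_target = (cv_target * Ybar) ** 2   # allowed variance of the estimator
# total n from the precision constraint (finite-population correction included)
n = (N * S).sum() ** 2 / (V_target * N.sum() ** 2 + (N * S ** 2).sum())
n_h = np.ceil(n * N * S / (N * S).sum())  # Neyman: n_h proportional to N_h*S_h

# achieved variance of the stratified mean with this allocation
W = N / N.sum()
V = (W ** 2 * S ** 2 / n_h * (1 - n_h / N)).sum()
```

    The genetic algorithm's fitness for a candidate stratification is essentially the cost of the `n_h` this inner step returns.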

  8. Efficient Sample Tracking With OpenLabFramework

    DEFF Research Database (Denmark)

    List, Markus; Schmidt, Steffen; Trojnar, Jakub

    2014-01-01

    of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones...... and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable...

  9. Using Model, Cover, Copy, Compare, a Token Economy Program, and Discrete Trial Match to Sample Training for Teaching Functional Life Skills for a 13-Year-Old Middle School Student with Moderate Disabilities

    Directory of Open Access Journals (Sweden)

    KATHERINE J. HOOT

    2014-08-01

    Full Text Available The purpose of this study was to evaluate the effectiveness of model, cover, copy, compare (MCCC), a token system, and match to sample for teaching basic functional life skills to a single middle school student with disabilities. MCCC is a student-managed strategy that teaches discrete skills through errorless correction. Match to sample is another strategy that teaches identification and discrimination based on a visual representation of identical information. The effectiveness of MCCC and match to sample was evaluated using a multiple baseline design. The results indicated that MCCC and match to sample were effective in teaching the student his name, phone number, home address, and emergency contact name and phone number. Maintenance of the basic functional life skills was also found, except for the emergency contact name. However, even though maintenance was not conducted on the final set, the emergency phone number was maintained; this is attributed to the length of teaching sessions on the final set. The MCCC and match to sample interventions were both easy to implement in the special education middle school classroom.

  10. The algorithm and program complex for splitting on a parts the records of acoustic waves recorded during the work of plasma actuator flush-mounted in the model plane nozzle with the purpose of analyzing their robust spectral and correlation characteristics

    International Nuclear Information System (INIS)

    Chernousov, A D; Malakhov, D V; Skvortsova, N N

    2014-01-01

    The development of new technologies for reducing aircraft engine noise, including directed noise suppression based on the interaction of plasma disturbances with sound-generating pulsations, is currently an acute problem. One class of devices built on this principle is being developed at GPI RAS: plasma actuators (groups of interconnected gaps arranged around the perimeter of a nozzle) of various shapes and forms. In this paper an algorithm was developed that separates individual impulses in the experimental records acquired during the operation of a plasma actuator flush-mounted in a model plane nozzle. The algorithm can be adjusted manually to a variety of situations (operation of the actuator in a nozzle with or without airflow, adjustment to different frequencies and pulse durations of the actuator). A program complex, developed on the basis of MatLab software, computes robust spectral and autocovariance functions of the acoustic signals recorded during the experiments with the model nozzle and working actuator.
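
    The two processing stages described (splitting the record into pulses, then building a robust spectral estimate from them) can be sketched on a synthetic record. The threshold, window lengths, and segmentation rule below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Synthetic record: 100 Hz Hann-windowed bursts ("actuator pulses") in noise.
rng = np.random.default_rng(3)
fs = 1000.0
pulse = np.sin(2 * np.pi * 100 * np.arange(200) / fs) * np.hanning(200)
record = rng.normal(0.0, 0.05, 5000)
for s in (500, 1500, 2500, 3500):
    record[s:s + 200] += pulse

# (1) split into pulses: a sample is "in a pulse" while a short moving RMS
# exceeds a threshold (window length and threshold are illustrative)
rms = np.sqrt(np.convolve(record ** 2, np.ones(50) / 50, mode="same"))
mask = rms > 0.1
edges = np.flatnonzero(np.diff(mask.astype(int)))
segments = list(zip(edges[::2], edges[1::2]))   # (start, stop) index pairs

# (2) robust spectrum: average the periodograms of equal-length windows cut
# at each detected pulse, instead of one FFT over the whole record
spec = np.mean([np.abs(np.fft.rfft(record[a:a + 200])) ** 2
                for a, _ in segments], axis=0)
peak_bin = np.argmax(spec[1:]) + 1              # skip the DC bin
f_peak_hz = peak_bin * fs / 200                 # should sit near 100 Hz
```

    Averaging per-pulse periodograms keeps one noisy or outlying segment from dominating the estimate, which is the usual sense of "robust" here.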

  11. Guidance for establishment and implementation of a national sample management program in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-01-01

    The role of the National Sample Management Program (NSMP) proposed by the Department of Energy's Office of Environmental Management (EM) is to be a resource for EM programs and for local Field Sample Management Programs (FSMPs). It will be a source of information on sample analysis and data collection within the DOE complex. Therefore the NSMP's primary role is to coordinate and function as a central repository for information collected from the FSMPs. An additional role of the NSMP is to monitor trends in data collected from the FSMPs over time and across sites and laboratories. Tracking these trends will allow identification of potential problems in the sampling and analysis process

  12. Ventilatory Function in Relation to Mining Experience and Smoking in a Random Sample of Miners and Non-miners in a Witwatersrand Town1

    Science.gov (United States)

    Sluis-Cremer, G. K.; Walters, L. G.; Sichel, H. S.

    1967-01-01

    The ventilatory capacity of a random sample of men over the age of 35 years in the town of Carletonville was estimated by the forced expiratory volume and the peak expiratory flow rate. Five hundred and sixty-two persons were working or had worked in gold-mines and 265 had never worked in gold-mines. No difference in ventilatory function was found between the miners and non-miners other than that due to the excess of chronic bronchitis in miners. PMID:6017134

  13. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
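
    For a categorical outcome, the factors listed above combine into the standard formula n = z^2 p(1-p)/d^2. A minimal sketch, with an optional finite-population correction; all numbers are illustrative:

```python
import math

# Sample size for estimating a proportion p to within margin d at the given
# confidence level (z = 1.96 for 95%); optional finite-population correction.
def sample_size_proportion(p, d, z=1.96, population=None):
    n = z ** 2 * p * (1 - p) / d ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)   # finite-population correction
    return math.ceil(n)

n_infinite = sample_size_proportion(p=0.30, d=0.05)            # large population
n_small = sample_size_proportion(p=0.30, d=0.05, population=1000)
```

    Tightening the margin d is the expensive lever: halving it roughly quadruples the required n.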

  14. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  15. The Galaxy mass function up to z =4 in the GOODS-MUSIC sample: into the epoch of formation of massive galaxies

    Science.gov (United States)

    Fontana, A.; Salimbeni, S.; Grazian, A.; Giallongo, E.; Pentericci, L.; Nonino, M.; Fontanot, F.; Menci, N.; Monaco, P.; Cristiani, S.; Vanzella, E.; de Santis, C.; Gallozzi, S.

    2006-12-01

    Aims. The goal of this work is to measure the evolution of the Galaxy Stellar Mass Function and of the resulting Stellar Mass Density up to redshift ≃ 4, in order to study the assembly of massive galaxies in the high redshift Universe. Methods. We have used the GOODS-MUSIC catalog, containing 3000 Ks-selected galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, of which 27% have spectroscopic redshifts and the remaining fraction have accurate photometric redshifts. On this sample we have applied a standard fitting procedure to measure stellar masses. We compute the Galaxy Stellar Mass Function and the resulting Stellar Mass Density up to redshift ≃ 4, taking into proper account the biases and incompleteness effects. Results. Within the well known trend of global decline of the Stellar Mass Density with redshift, we show that the decline of the more massive galaxies may be described by an exponential timescale of ≃ 6 Gyr up to z ≃ 1.5, and proceeds much faster thereafter, with an exponential timescale of ≃ 0.6 Gyr. We also show that there is some evidence for a differential evolution of the Galaxy Stellar Mass Function, with low mass galaxies evolving faster than more massive ones up to z ≃ 1-1.5, and that the Galaxy Stellar Mass Function remains remarkably flat (i.e. with a slope close to the local one) up to z ≃ 1-1.3. Conclusions. The observed behaviour of the Galaxy Stellar Mass Function is consistent with a scenario where about 50% of present-day massive galaxies formed at a vigorous rate in the epoch between redshift 4 and 1.5, followed by a milder evolution until the present-day epoch.

  16. Imbalanced Learning for Functional State Assessment

    Science.gov (United States)

    Li, Feng; McKenzie, Frederick; Li, Jiang; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    This paper presents results of several imbalanced learning techniques applied to operator functional state assessment, where the data are highly imbalanced, i.e., some functional states (majority classes) have many more training samples than other states (minority classes). Conventional machine learning techniques usually tend to classify all data samples into majority classes and perform poorly for minority classes. In this study, we implemented five imbalanced learning techniques, including random under-sampling, random over-sampling, synthetic minority over-sampling technique (SMOTE), borderline-SMOTE and adaptive synthetic sampling (ADASYN), to solve this problem. Experimental results on a benchmark driving test dataset show that accuracies for minority classes could be improved dramatically at the cost of slight performance degradations for majority classes.
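
    Of the five techniques, SMOTE is the easiest to sketch: each synthetic minority sample is an interpolation between a minority point and one of its k nearest minority neighbours. A minimal version on toy data (not the benchmark dataset):

```python
import numpy as np

# Minimal SMOTE-like over-sampling: interpolate between a minority point
# and a random one of its k nearest minority neighbours. Toy data only.
rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5):
    n = len(X_min)
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(n)                      # pick a minority point
        d = np.linalg.norm(X_min - X_min[j], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # k nearest, excluding itself
        nb = X_min[rng.choice(nbrs)]
        lam = rng.random()                       # interpolation factor in [0, 1)
        out[i] = X_min[j] + lam * (nb - X_min[j])
    return out

X_min = rng.normal(size=(20, 3))                 # minority class: 20 samples
X_syn = smote(X_min, n_new=80)                   # over-sample up to 100 total
```

    Production code would normally use a library implementation (e.g. imbalanced-learn) rather than this sketch.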

  17. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    Science.gov (United States)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z) r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial fluctuation signals into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such improvements will be valuable for future measurements of galaxy clustering.
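
    The remedy can be sketched generically: fit a smooth model to the data's redshift histogram and draw the random catalogue from it by inverse-transform sampling. The polynomial model and the toy "survey" below are assumptions for illustration, not the paper's fitting function.

```python
import numpy as np

# Toy survey redshifts and a smooth n(z) model fitted to their histogram.
rng = np.random.default_rng(42)
z_data = rng.normal(0.5, 0.1, 10000)
z_data = z_data[(z_data > 0.2) & (z_data < 0.8)]

counts, bin_edges = np.histogram(z_data, bins=40)
centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
coef = np.polyfit(centers, counts, deg=6)        # assumed smooth model
nz = np.clip(np.polyval(coef, centers), 0, None) + 1e-9

# draw random-catalogue redshifts from the smooth n(z) via the inverse CDF
cdf = np.cumsum(nz) / nz.sum()
z_rand = np.interp(rng.random(50000), cdf, centers)
```

    Drawing from the smooth n(z) rather than the raw histogram keeps the data's own radial fluctuations out of the random catalogue, which is the point the abstract makes.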

  18. The RECONS 10 Parsec Sample

    Science.gov (United States)

    Henry, Todd; Dieterich, Sergio; Finch, C.; Ianna, P. A.; Jao, W.-C.; Riedel, Adric; Subasavage, John; Winters, J.; RECONS Team

    2018-01-01

    The sample of stars, brown dwarfs, and exoplanets known within 10 parsecs of our Solar System as of January 1, 2017 is presented. The current census is comprised of 416 objects made up of 371 stars (including the Sun and white dwarfs) and 45 brown dwarfs. The stars are known to be orbited by 43 planets (eight in our Solar System and 35 exoplanets). There are 309 systems within 10 pc, including 275 with stellar primaries and 34 systems containing only brown dwarfs.Via a long-term astrometric effort at CTIO, the RECONS (REsearch Consortium On Nearby Stars, www.recons.org) team has added 44 stellar systems to the sample, accounting for one of every seven systems known within 10 pc. Overall, the 278 red dwarfs clearly dominate the sample, accounting for 75% of all stars known within 10 pc. The completeness of the sample is assessed, indicating that a few red, brown, and white dwarfs within 10 pc may be discovered, both as primaries and secondaries, although we estimate that 90% of the stellar systems have been identified. The evolution of the 10 pc sample over the past century is outlined to illustrate our growing knowledge of the solar neighborhood.The luminosity and mass functions for stars within 10 pc are described. In contrast to many studies, once all known close multiples are resolved into individual components, the true mass function rises to the end of the stellar main sequence, followed by a precipitous drop in the number of brown dwarfs, which are outnumbered 8.2 to 1 by stars. Of the 275 stellar primaries in the sample, 182 (66%) are single, 75 (27%) have at least one stellar companion, only 8 (3%) have a brown dwarf companion, and 19 (7%) systems are known to harbor planets. Searches for brown dwarf companions to stars in this sample have been quite rigorous, so the brown dwarf companion rate is unlikely to rise significantly. In contrast, searches for exoplanets, particularly terrestrial planets, have been limited. Thus, overall the solar neighborhood is

  19. Study on sampling of continuous linear system based on generalized Fourier transform

    Science.gov (United States)

    Li, Huiguang

    2003-09-01

    In the research of signals and systems, a signal's spectrum and a system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal satisfy neither Riemann nor Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform and the Laplace Transform within a unified framework. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform. Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling, and also have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application example of this paper.
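
    The spectrum-overlap question reduces, in the classical setting, to aliasing: sampling below the Nyquist rate makes distinct frequencies indistinguishable. A standard illustration (this is the textbook criterion, not the paper's GFT-based one):

```python
import numpy as np

# A 9 Hz cosine sampled at 10 Hz (below the required 18 Hz Nyquist rate)
# is indistinguishable from a genuine 1 Hz cosine: the spectra overlap.
f_sig = 9.0
fs = 10.0
t = np.arange(0, 2.0, 1 / fs)                    # 20 samples over 2 s
x_sampled = np.cos(2 * np.pi * f_sig * t)        # what the sampler records
x_alias = np.cos(2 * np.pi * (fs - f_sig) * t)   # a true 1 Hz cosine
```

    The two sampled sequences are identical; any criterion for safe discretization of a continuous system must rule out exactly this kind of overlap in the relevant transform domain.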

  20. In situ sampling cart development engineering task plan

    International Nuclear Information System (INIS)

    DeFord, D.K.

    1995-01-01

    This Engineering Task Plan (ETP) supports the development for facility use of the next generation in situ sampling system for characterization of tank vapors. In situ sampling refers to placing sample collection devices (primarily sorbent tubes) directly into the tank headspace, then drawing tank gases through the collection devices to obtain samples. The current in situ sampling system is functional but was not designed to provide the accurate flow measurement required by today's data quality objectives (DQOs) for vapor characterization. The new system will incorporate modern instrumentation to achieve much tighter control. The next generation system will be referred to in this ETP as the New In Situ System (NISS) or New System. The report describes the current sampling system and the modifications that are required for more accuracy

  1. Relativistic rise measurements with very fine sampling intervals

    International Nuclear Information System (INIS)

    Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.

    1980-01-01

    The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements

  2. Development and optimization of a novel sample preparation method cored on functionalized nanofibers mat-solid-phase extraction for the simultaneous efficient extraction of illegal anionic and cationic dyes in foods.

    Science.gov (United States)

    Qi, Feifei; Jian, Ningge; Qian, Liangliang; Cao, Weixin; Xu, Qian; Li, Jian

    2017-09-01

    A simple and efficient three-step sample preparation method was developed and optimized for the simultaneous analysis of illegal anionic and cationic dyes (acid orange 7, metanil yellow, auramine-O, and chrysoidine) in food samples. A novel solid-phase extraction (SPE) procedure based on nanofibers mat (NFsM) was proposed after solvent extraction and freeze-salting out purification. The preferred SPE sorbent was selected from five functionalized NFsMs by orthogonal experimental design, and the optimization of SPE parameters was achieved through response surface methodology (RSM) based on the Box-Behnken design (BBD). Under the optimal conditions, the target analytes could be completely adsorbed by polypyrrole-functionalized polyacrylonitrile NFsM (PPy/PAN NFsM), and the eluent was directly analyzed by high-performance liquid chromatography-diode array detection (HPLC-DAD). The limits of detection (LODs) were between 0.002 and 0.01 mg kg⁻¹, and satisfactory linearity with correlation coefficients (R > 0.99) for each dye in all samples was achieved. Compared with the Chinese standard method and the published methods, the proposed method was simplified greatly with much lower requirement of sorbent (5.0 mg) and organic solvent (2.8 mL) and higher sample preparation speed (10 min/sample), while higher recovery (83.6-116.5%) and precision (RSDs < 7.1%) were obtained. With this developed method, we have successfully detected illegal ionic dyes in three common representative foods: yellow croaker, soybean products, and chili seasonings. Graphical abstract: Schematic representation of the process of the three-step sample preparation.

  3. USAXS and SAXS from cancer-bearing breast tissue samples

    International Nuclear Information System (INIS)

    Fernandez, M.; Suhonen, H.; Keyrilaeinen, J.; Bravin, A.; Fiedler, S.; Karjalainen-Lindsberg, M.-L.; Leidenius, M.; Smitten, K. von; Suortti, P.

    2008-01-01

    USAXS and SAXS patterns from cancer-bearing human breast tissue samples were recorded at beamline ID02 of the ESRF using a Bonse-Hart camera and a pinhole camera. The samples were classified as being ductal carcinoma, grade II, and ductal carcinoma in situ, partly invasive. The samples included areas of healthy collagen, invaded collagen, necrotic ducts with calcifications, and adipose tissue. The scattering patterns were analyzed in different ways to separate the scattering contribution and the direct beam from the observed rocking curve (RC) of the analyzer. It was found that USAXS from all tissues was weak, and the effects on the analyzer RC were observed only in the low-intensity tails of the patterns. The intrinsic RC was convolved with different model functions for the impulse response of the sample, and the best fit with experiment was obtained by the Pearson VII function. Significantly different distributions for the Pearson exponent m were obtained in benign and malignant regions of the samples. For a comparison with analyzer-based imaging (ABI) or diffraction enhanced imaging (DEI) a 'long-slit' integration of the patterns was performed, and this emphasized the scattering contribution in the tails of the rocking curve

  4. Use of reference samples for more accurate RBS analyses

    International Nuclear Information System (INIS)

    Lanford, W.A.; Pelicon, P.; Zorko, B.; Budnar, M.

    2002-01-01

    While one of the primary assets of RBS analysis is that it is quantitative without use of reference samples, for certain types of analyses the precision of the method can be improved by measuring RBS spectra of unknowns relative to the RBS spectra of a similar known sample. The advantage of such an approach is that one can reduce (or eliminate) the uncertainties that arise from error in the detector solid angle, beam current integration efficiency, scattering cross-section, and stopping powers. We have used this approach extensively to determine the composition (x) of homogeneous thin films of TaN_x using as reference samples films of pure Ta. Our approach is to measure R = (Ta count)_unknown/(Ta count)_standard and use RUMP to determine the function x(R). Once the function x(R) has been determined, this approach makes it easy to analyze many samples quickly. Other analyses for which this approach has proved useful are determination of the composition (x) of WN_x, SiO_xH_y and SiN_xH_y, using W, SiO_2 and amorphous Si as reference samples, respectively

  5. The RBANS Effort Index: base rates in geriatric samples.

    Science.gov (United States)

    Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G

    2011-01-01

    The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.

  6. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    textabstractThe performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate

  7. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. 
Demonstrating representative

  8. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    International Nuclear Information System (INIS)

    Shine, E. P.; Poirier, M. R.

    2013-01-01

    statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. 
Demonstrating representative sampling

  9. Reliability and validity of the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower limb musculoskeletal disorders.

    Science.gov (United States)

    Negahban, Hossein; Hessam, Masumeh; Tabatabaei, Saeid; Salehi, Reza; Sohani, Soheil Mansour; Mehravar, Mohammad

    2014-01-01

    The aim was to culturally translate and validate the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower extremity musculoskeletal disorders (n = 304). This is a prospective methodological study. After a standard forward-backward translation, psychometric properties were assessed in terms of test-retest reliability, internal consistency, construct validity, dimensionality, and ceiling or floor effects. The acceptable level of intraclass correlation coefficient >0.70 and Cronbach's alpha coefficient >0.70 was obtained for the Persian LEFS. Correlations between the Persian LEFS and Short-Form 36 Health Survey (SF-36) subscales of the Physical Health component (rs range = 0.38-0.78) were higher than correlations between the Persian LEFS and SF-36 subscales of the Mental Health component (rs range = 0.15-0.39). A corrected item-total correlation of >0.40 (Spearman's rho) was obtained for all items of the Persian LEFS. Horn's parallel analysis detected a total of two factors. No ceiling or floor effects were detected for the Persian LEFS. The Persian version of the LEFS is a reliable and valid instrument that can be used to measure functional status in Persian-speaking patients with different musculoskeletal disorders of the lower extremity. Implications for Rehabilitation: The Persian lower extremity functional scale (LEFS) is a reliable, internally consistent and valid instrument, with no ceiling or floor effects, to determine functional status of heterogeneous patients with musculoskeletal disorders of the lower extremity. The Persian version of the LEFS can be used in clinical and research settings to measure function in Iranian patients with different musculoskeletal disorders of the lower extremity.

  10. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a method of sampling usually based on an acceptance rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms
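
    The CYBER/FORTRAN procedures themselves are not reproduced in the record, but the underlying acceptance-rejection idea can be sketched in a few lines: draw a proposal from an exponential envelope and accept it with probability equal to the ratio of the target half-normal density to the (scaled) envelope. This is the standard textbook construction, not the report's subregion implementation:

    ```python
    import math
    import random

    def sample_normal(rng):
        """Standard normal draw via acceptance-rejection with an Exp(1) envelope.

        The half-normal density f(x) = sqrt(2/pi) * exp(-x^2/2) satisfies
        f(x) <= M * g(x) with g(x) = exp(-x) and M = sqrt(2e/pi), so the
        acceptance probability is exp(-(x - 1)^2 / 2). A random sign then
        extends the accepted half-normal draw to the full normal.
        """
        while True:
            x = rng.expovariate(1.0)                      # proposal from Exp(1)
            if rng.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
                return x if rng.random() < 0.5 else -x    # random sign

    rng = random.Random(42)
    draws = [sample_normal(rng) for _ in range(100_000)]
    mean = sum(draws) / len(draws)
    var = sum(d * d for d in draws) / len(draws)
    ```

    With enough draws the sample mean and variance approach 0 and 1, the moments of the standard normal distribution.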

  11. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch Model methods for detecting differential item functioning (DIF), observed as a function of sample size. These two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  12. Applying meta-pathway analyses through metagenomics to identify the functional properties of the major bacterial communities of a single spontaneous cocoa bean fermentation process sample.

    Science.gov (United States)

    Illeghems, Koen; Weckx, Stefan; De Vuyst, Luc

    2015-09-01

    A high-resolution functional metagenomic analysis of a representative single sample of a Brazilian spontaneous cocoa bean fermentation process was carried out to gain insight into its bacterial community functioning. By reconstruction of microbial meta-pathways based on metagenomic data, the current knowledge about the metabolic capabilities of bacterial members involved in the cocoa bean fermentation ecosystem was extended. Functional meta-pathway analysis revealed the distribution of the metabolic pathways between the bacterial members involved. The metabolic capabilities of the lactic acid bacteria present were most associated with the heterolactic fermentation and citrate assimilation pathways. The role of Enterobacteriaceae in the conversion of substrates was shown through the use of the mixed-acid fermentation and methylglyoxal detoxification pathways. Furthermore, several other potential functional roles for Enterobacteriaceae were indicated, such as pectinolysis and citrate assimilation. Concerning acetic acid bacteria, metabolic pathways were partially reconstructed, in particular those related to responses toward stress, explaining their metabolic activities during cocoa bean fermentation processes. Further, the in-depth metagenomic analysis unveiled functionalities involved in bacterial competitiveness, such as the occurrence of CRISPRs and potential bacteriocin production. Finally, comparative analysis of the metagenomic data with bacterial genomes of cocoa bean fermentation isolates revealed the applicability of the selected strains as functional starter cultures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Hanford Sampling Quality Management Plan (HSQMP)

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1995-01-01

    This document provides a management tool for evaluating and designing the appropriate elements of a field sampling program. This document provides discussion of the elements of a program and is to be used as a guidance document during the preparation of project and/or function specific documentation. This document does not specify how a sampling program shall be organized. The HSQMP is to be used as a companion document to the Hanford Analytical Services Quality Assurance Plan (HASQAP) DOE/RL-94-55. The generation of this document was enhanced by conducting baseline evaluations of current sampling organizations. Valuable input was received from members of field and Quality Assurance organizations. The HSQMP is expected to be a living document. Revisions will be made as regulations and or Hanford Site conditions warrant changes in the best management practices. Appendices included are: summary of the sampling and analysis work flow process, a user's guide to the Data Quality Objective process, and a self-assessment checklist

  14. The Influences of Employment Status and Daily Stressors on Physiological Functioning in a Sample of Midlife and Older Adults.

    Science.gov (United States)

    Wong, Jen D; Shobo, Yetunde

    2016-06-01

    This study examines the influences of employment status and the moderating role of daily stressors on cortisol levels and responsivity in 182 workers and 253 retirees between 55 and 75 years old from the National Survey of Midlife Development in the United States (MIDUS-II). As a part of the Daily Diary Study, participants completed telephone interviews about their daily experiences across eight evenings and provided saliva samples across 4 days. Multilevel models showed that workers who experienced greater number of non-work related daily stressors significantly exhibited higher cortisol level at 30 min post awakening (b = 0.252, SE = 0.109, p stressors. Findings demonstrate the important consideration of daily stressors in identifying the ways in which social roles influence physiological functioning in midlife and late adulthood. © The Author(s) 2016.

  15. An automated blood sampling system used in positron emission tomography

    International Nuclear Information System (INIS)

    Eriksson, L.; Bohm, C.; Kesselberg, M.

    1988-01-01

    Fast dynamic function studies with positron emission tomography (PET), has the potential to give accurate information of physiological functions of the brain. This capability can be realised if the positron camera system accurately quantitates the tracer uptake in the brain with sufficiently high efficiency and in sufficiently short time intervals. However, in addition, the tracer concentration in blood, as a function of time, must be accurately determined. This paper describes and evaluates an automated blood sampling system. Two different detector units are compared. The use of the automated blood sampling system is demonstrated in studies of cerebral blood flow, in studies of the blood-brain barrier transfer of amino acids and of the cerebral oxygen consumption. 5 refs.; 7 figs

  16. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

    the detected objects (for example, grey value distribution around an object). It also serves for the definition of parameters of the probability function in so-called active segmentation. Conclusion: The method is useful in standardization of images derived from immunohistochemically stained slides, and implemented in the EAMUS™ system http://www.diagnomX.de. It can also be applied for the search of "objects possessing an amplification function", i.e. a rare event with "steering function". A formula to calculate the efficiency and potential error rate of the described sampling procedures is given.

  17. Eating disorder severity and functional impairment: moderating effects of illness duration in a clinical sample.

    Science.gov (United States)

    Davidsen, Annika Helgadóttir; Hoyt, William T; Poulsen, Stig; Waaddegaard, Mette; Lau, Marianne

    2017-09-01

    The aim was to examine duration of illness and body mass index as possible moderators of the relationship between eating disorder severity and functional impairment, as well as psychological distress as a possible mediator of this relationship. The study included 159 patients diagnosed with bulimia nervosa, binge eating disorder or eating disorder not otherwise specified. Regression analysis was applied to assess the effect of the hypothesized moderators and mediators. Eating disorder severity was measured with the Eating Disorder Examination Questionnaire, functional impairment was measured with the Sheehan Disability Scale, and psychological distress was measured with the Symptom Check List-90-R. Duration of illness and body mass index were assessed clinically. Duration of illness significantly moderated the relationship between eating disorder severity and functional impairment; the relationship was strongest for patients with a shorter duration of illness. Psychological distress partly mediated the relationship between eating disorder severity and functional impairment. Duration of illness significantly moderated the relationship between psychological distress and functional impairment; the strongest relationship was seen for patients with a shorter duration of illness. Body mass index was not a significant moderator of the relationship between ED severity and functional impairment. Overall, this study established a link between ED severity, psychological distress and functional impairment indicating that both eating disorder severity and psychological distress are more strongly related to impaired role functioning for patients with more recent onset of an eating disorder. More research in the complex relationship between ED severity and functional impairment is needed.

  18. Magnetic separation techniques in sample preparation for biological analysis: a review.

    Science.gov (United States)

    He, Jincan; Huang, Meiying; Wang, Dongmei; Zhang, Zhuomin; Li, Gongke

    2014-12-01

    Sample preparation is a fundamental and essential step in almost all the analytical procedures, especially for the analysis of complex samples like biological and environmental samples. In past decades, with advantages of superparamagnetic property, good biocompatibility and high binding capacity, functionalized magnetic materials have been widely applied in various processes of sample preparation for biological analysis. In this paper, the recent advancements of magnetic separation techniques based on magnetic materials in the field of sample preparation for biological analysis were reviewed. The strategy of magnetic separation techniques was summarized. The synthesis, stabilization and bio-functionalization of magnetic nanoparticles were reviewed in detail. Characterization of magnetic materials was also summarized. Moreover, the applications of magnetic separation techniques for the enrichment of protein, nucleic acid, cell, bioactive compound and immobilization of enzyme were described. Finally, the existed problems and possible trends of magnetic separation techniques for biological analysis in the future were proposed. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Stimulation of mineral-specific luminescence from multi-mineral samples

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Poolton, N.R.J.

    1995-01-01

    Grains of quartz and potassium-rich feldspar have been mixed in known ratios to produce samples of known mineralogical composition, analogous to those found in natural sedimentary deposits. The variation of the green light stimulated luminescence (GLSL), as a function of sample temperature, was measured for each of these mixtures in order to attempt to isolate a luminescence signal that originates specifically from just one of the components. As the sample is heated from room temperature to 450 degrees C, thermal quenching reduces the signal from the quartz component to near zero, while that from... geological samples.

  20. Respiratory motion sampling in 4DCT reconstruction for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi Yuwei; Liang Jian; Qin Xu; Yan Di [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States); Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, Michigan 48073 (United States)

    2012-04-15

    Purpose: Phase-based and amplitude-based sorting techniques are commonly used in four-dimensional CT (4DCT) reconstruction. However, the effect of these sorting techniques on 4D dose calculation has not been explored. In this study, the authors investigated a candidate 4DCT sorting technique by comparing its 4D dose calculation accuracy with that for phase-based and amplitude-based sorting techniques. Method: An optimization model was formed using the organ motion probability density function (PDF) in the 4D dose convolution. The objective function for optimization was defined as the maximum difference between the expected 4D dose in the organ of interest and the 4D dose calculated using a 4DCT sorted by a candidate sampling method. Sorting samples, as optimization variables, were selected from the respiratory motion PDF assessed during the CT scanning. Breathing curves obtained from patients' 4DCT scanning, as well as 3D dose distributions from treatment planning, were used in the study. Given the objective function, a residual error analysis was performed, and k-means clustering was found to be an effective sampling scheme that improves 4D dose calculation accuracy and is independent of the patient-specific dose distribution. Results: Patient data analysis demonstrated that the k-means sampling was superior to the conventional phase-based and amplitude-based sorting and comparable to the optimal sampling results. For phase-based sorting, the residual error in 4D dose calculations may not be further reduced to an acceptable accuracy after a certain number of phases, while for amplitude-based sorting, k-means sampling, and the optimal sampling, the residual error in 4D dose calculations decreased rapidly as the number of 4DCT phases increased to 6. Conclusion: An innovative phase sorting method (the k-means method) is presented in this study. The method depends only on the tumor motion PDF. It could provide a way to refine phase sorting in 4DCT reconstruction and is effective
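
    The k-means phase-sorting idea — clustering the respiratory amplitude samples and using the cluster centers as representative motion states — can be sketched in one dimension as follows. This is a simplified illustration on a synthetic breathing trace, not the authors' implementation:

    ```python
    import math

    def kmeans_1d(values, k, iters=50):
        """Lloyd's algorithm in 1-D: cluster breathing amplitudes into k states."""
        lo, hi = min(values), max(values)
        # evenly spaced initial centers over the amplitude range
        centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for v in values:
                # assign each amplitude sample to its nearest center
                j = min(range(k), key=lambda i: abs(v - centers[i]))
                clusters[j].append(v)
            # move each center to its cluster mean (keep it if cluster is empty)
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return sorted(centers)

    # synthetic breathing trace: 4 sinusoidal respiratory cycles, 100 samples each
    trace = [math.sin(2 * math.pi * t / 100.0) for t in range(400)]
    phases = kmeans_1d(trace, k=6)   # 6 representative amplitude states
    ```

    Because the sorting depends only on the distribution of the motion samples, the centers concentrate where the breathing PDF has the most mass (near end-inhale and end-exhale for a sinusoidal trace), which is the property the abstract attributes to the method.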

  1. Latex Rubber Gloves as a Sampling Dosimeter Using a Novel Surrogate Sampling Device.

    Science.gov (United States)

    Sankaran, Gayatri; Lopez, Terry; Ries, Steve; Ross, John; Vega, Helen; Eastmond, David A; Krieger, Robert I

    2015-01-01

    Pesticide exposure during harvesting of crops occurs primarily to the workers' hands. When harvesters wear latex rubber gloves for personal safety and hygiene reasons, the gloves accumulate pesticide residues. Hence, characterization of the gloves' properties may be useful for pesticide exposure assessments. Controlled field studies were conducted using latex rubber gloves to define the factors that influence the transfer of pesticides to the glove and that would affect their use as a residue monitoring device. A novel sampling device called the Brinkman Contact Transfer Unit (BCTU) was constructed to study the glove characteristics and residue transfer and accumulation under controlled conditions on turf. The effectiveness of latex rubber gloves as sampling dosimeters was evaluated by measuring the transferable pesticide residues as a function of time. The validation of latex rubber gloves as a residue sampling dosimeter was performed by comparing pesticide transfer and dissipation from the gloves with the turf transferable residues sampled using the validated California (CA) Roller, a standard measure of residue transfer. The observed correlation (Pearson's correlation coefficient R²) between the two methods was .84 for malathion and .96 for fenpropathrin, indicating that the BCTU is a useful, reliable surrogate tool for studying available residue transfer to latex rubber gloves under experimental conditions. Perhaps more importantly, these data demonstrate that latex gloves worn by workers may be useful quantifiable matrices for measuring pesticide exposure.

  2. Hemodynamic and glucometabolic factors fail to predict renal function in a random population sample

    DEFF Research Database (Denmark)

    Pareek, M.; Nielsen, M.; Olesen, Thomas Bastholm

    2015-01-01

    indices of beta-cell function (HOMA-2B), insulin sensitivity (HOMA-2S), and insulin resistance (HOMA-2IR)), traditional cardiovascular risk factors (age, sex, smoking status, body mass index, diabetes mellitus, total serum cholesterol), and later renal function determined as serum cystatin C in 238 men...

  3. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perception (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. 
Also, the performance of SVR depends on both kernel function
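
    As a concrete illustration of the ε-insensitive loss discussed above, the following is a minimal small-sample SVR sketch on synthetic data using scikit-learn. The features, target function, and hyperparameters are all invented for illustration and are not taken from the study.

```python
# Minimal epsilon-insensitive SVR sketch on synthetic small-sample data.
# Features, target function and hyperparameters are illustrative only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))    # small "core-calibrated" training set
y = 0.25 * X[:, 0] + 0.10 * np.sin(3 * X[:, 1]) + rng.normal(0, 0.01, 30)

# epsilon sets the width of the insensitivity tube: residuals below epsilon
# incur no loss, which regularizes the fit against sparse, noisy data.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

X_test = rng.uniform(0, 1, size=(200, 2))
y_test = 0.25 * X_test[:, 0] + 0.10 * np.sin(3 * X_test[:, 1])
mse = float(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"test MSE: {mse:.5f}")
```

    With only 30 training points the tube width and the penalty C together control capacity, which is the SRM trade-off the abstract describes.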

  4. Is Using the Strengths and Difficulties Questionnaire in a Community Sample the Optimal Way to Assess Mental Health Functioning?

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available An important characteristic of a screening tool is its discriminant ability or the measure's accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with the view of identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children's mental health using the SDQ at person level was fair on individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.

  5. Is Using the Strengths and Difficulties Questionnaire in a Community Sample the Optimal Way to Assess Mental Health Functioning?

    Science.gov (United States)

    Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn

    2016-01-01

    An important characteristic of a screening tool is its discriminant ability or the measure's accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with the view of identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children's mental health using the SDQ at person level was fair on individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.
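
    The positive and negative likelihood ratios referred to above can be computed directly from a 2×2 screening table. A minimal sketch with made-up counts (not the study's data):

```python
# Likelihood ratios from a 2x2 screening table; the counts here are
# invented for illustration and are not the study's data.
def likelihood_ratios(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    return sens / (1 - spec), (1 - sens) / spec   # LR+, LR-

lr_pos, lr_neg = likelihood_ratios(tp=40, fp=10, fn=20, tn=229)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # LR+ = 15.93, LR- = 0.35
```

    An LR+ well above 1 indicates clinical utility of a positive screen; an LR− close to 1, as the abstract reports for joint parent/teacher absence ratings, means a negative result barely shifts the odds.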

  6. Quantification of regional cerebral blood flow (rCBF) measurement with one point sampling by sup 123 I-IMP SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Munaka, Masahiro [University of Occupational and Enviromental Health, Kitakyushu (Japan); Iida, Hidehiro; Murakami, Matsutaro

    1992-02-01

    A handy method of quantifying regional cerebral blood flow (rCBF) measurement by {sup 123}I-IMP SPECT was designed. A standard input function was made, and the sampling time used to calibrate this standard input function by one-point sampling was optimized. An average standard input function was obtained from continuous arterial samplings of 12 healthy adults. The best sampling time was that which minimized the difference between the integral of the standard input function calibrated by one-point sampling and the input function obtained by continuous arterial sampling. This time was 8 minutes after an intravenous injection of {sup 123}I-IMP, and the error was estimated to be {+-}4.1%. The rCBF values obtained by this method were evaluated by comparing them with the rCBF values derived from the input function with continuous arterial samplings in 2 healthy adults and a patient with cerebral infarction. A significant correlation (r=0.764, p<0.001) was obtained between the two. (author).

  7. System design description for sampling fuel in K basins

    International Nuclear Information System (INIS)

    Baker, R.B.

    1996-01-01

    This System Design Description provides: (1) statements of the Spent Nuclear Fuel Projects (SNFP) needs requiring sampling of fuel in the K East and K West Basins, (2) the sampling equipment functions and requirements, (3) a general work plan and the design logic being followed to develop the equipment, and (4) a summary description of the design for the sampling equipment. The report summarizes the integrated application of both the subject equipment and the canister sludge sampler in near-term characterization campaigns at K Basins

  8. The Safeguards analysis applied to the RRP. Automatic sampling authentication system

    International Nuclear Information System (INIS)

    Ono, Sawako; Nakashima, Shinichi; Iwamoto, Tomonori

    2004-01-01

    The sampling for analysis from vessels and columns at the Rokkasho Reprocessing Plant (RRP) is performed mostly by the automatic sampling system. Safeguards samples for verification will also be taken using these sampling systems and transferred to the OSL through the pneumatic transfer network owned and controlled by the operator. In order to maintain sample integrity and continuity of knowledge (CoK) throughout sample processing, it is essential to develop and establish authentication measures for the automatic sampling system, including the transfer network. We have developed the Automatic Sampling Authentication System (ASAS) in consultation with the IAEA. This paper describes the structure, function and concept of ASAS. (author)

  9. Where Will All Your Samples Go?

    Science.gov (United States)

    Lehnert, K.

    2017-12-01

    define standards that institutions must comply with to function as a trustworthy sample repository similar to trustworthy digital repositories. The iSamples Research Coordination Network of the EarthCube program aims to address some of these questions in workshops planned for 2018. This panel session offers an opportunity to ignite the discussion.

  10. Effect of Leisure Activities on Inflammation and Cognitive Function in an Aging Sample

    Science.gov (United States)

    Friedman, Elliot; Quinn, Jill; Chen, Ding-Geng (Din); Mapstone, Mark

    2012-01-01

    Cardiovascular disease risk factors (CVDRFs) increase the risk of dementia. The purpose of this study was to examine whether leisure activities (mental, physical, and social activities) modified the effect of CVDRFs on inflammatory markers and cognitive function in middle and old age. A secondary-data analysis study was conducted using data from 405 middle-age participants (40–59 years) and 342 old-age participants (60–84 years) who participated in the Survey of Midlife Development in the United States. CVDRFs were obtained from a combination of self-report medical history and blood-based biomarkers. Three CVDRF groups (≤1, 2, and ≥3 CVDRFs) were identified. More CVDRFs were significantly associated with higher levels of inflammatory markers in both age groups, and associated with lower levels of executive function in the old age group. CVDRFs were not related to the frequency of leisure activities in either age group. After controlling for covariates, higher levels of physical activities were significantly associated with lower levels of inflammatory markers, and higher levels of mental activities were associated with higher levels of cognitive function. In the old age group, physical activities also moderated the effect of CVDRFs on episodic memory, and mental activities moderated the effect of CVDRFs on interleukin-6. Multiple CVDRFs may be associated with poorer cognitive function and higher inflammatory markers, but middle-age and older adults with CVDRFs may not engage in frequent physical and cognitive activities that may be protective. It is important to develop strategies to facilitate engagement in these activities from midlife. PMID:22377120

  11. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    Science.gov (United States)

    2014-06-17

    [Figure residue removed; the panels showed the Wigner distribution, the L-Wigner distribution, and their corresponding auto-correlation functions.] ...bilinear or higher order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with
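
    A minimal sketch of the ingredients named above: the discrete Wigner distribution as the Fourier transform of the instantaneous auto-correlation x[n+m]·x*[n−m], applied to a synthetic linear chirp. The signal and all parameters are illustrative.

```python
# Discrete Wigner-Ville distribution sketch: FFT over the lag m of the
# instantaneous auto-correlation x[n+m] * conj(x[n-m]). Illustrative only.
import numpy as np

def wigner_ville(x):
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)               # largest symmetric lag at time n
        r = np.zeros(N, dtype=complex)      # instantaneous auto-correlation
        for m in range(-L, L + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real           # bin k corresponds to freq k/(2N)
    return W

N = 128
t = np.arange(N)
x = np.exp(2j * np.pi * (0.05 * t + 0.001 * t ** 2))   # linear chirp
W = wigner_ville(x)
n0 = N // 2
f_est = W[n0].argmax() / (2 * N)    # spectral peak -> instantaneous frequency
f_true = 0.05 + 0.002 * n0          # phase derivative at n0
print(f_est, f_true)
```

    Because the lag product doubles the phase, frequency bin k maps to k/(2N); the peak of each time slice tracks the chirp's instantaneous frequency.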

  12. Observer-Based Stabilization of Spacecraft Rendezvous with Variable Sampling and Sensor Nonlinearity

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    Full Text Available This paper addresses the observer-based control problem of spacecraft rendezvous with nonuniform sampling period. The relative dynamic model is based on the classical Clohessy-Wiltshire equation, and sensor nonlinearity and sampling are considered together in a unified framework. The purpose of this paper is to perform an observer-based controller synthesis by using sampled and saturated output measurements, such that the resulting closed-loop system is exponentially stable. A time-dependent Lyapunov functional is developed which depends on time and the upper bound of the sampling period and also does not grow along the input update times. The controller design problem is solved in terms of the linear matrix inequality method, and the obtained results are less conservative than using the traditional Lyapunov functionals. Finally, a numerical simulation example is built to show the validity of the developed sampled-data control strategy.
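
    The Clohessy-Wiltshire relative dynamics underlying the model above can be written in state-space form and discretized over one sampling period. A numpy-only sketch; the mean motion n, the period T, and the small Taylor-series matrix-exponential helper are illustrative assumptions, not values from the paper.

```python
# Clohessy-Wiltshire relative dynamics in state-space form, discretized
# over one sampling period. Mean motion, period and the Taylor-series
# expm helper are illustrative assumptions.
import numpy as np

def expm(M, terms=30, s=8):
    """Small scaling-and-squaring Taylor matrix exponential."""
    Ms = M / 2.0 ** s
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ Ms / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

n = 0.0011                  # rad/s, roughly a low-Earth-orbit mean motion
# State [x, y, z, vx, vy, vz]: x radial, y along-track, z cross-track.
A = np.array([
    [0.0, 0, 0, 1, 0, 0],
    [0.0, 0, 0, 0, 1, 0],
    [0.0, 0, 0, 0, 0, 1],
    [3 * n ** 2, 0, 0, 0, 2 * n, 0],
    [0.0, 0, 0, -2 * n, 0, 0],
    [0.0, 0, -n ** 2, 0, 0, 0],
])
T = 5.0                     # sampling period, seconds
Ad = expm(A * T)            # state transition matrix over one sample
state = Ad @ np.array([100.0, 0, 0, 0, 0, 0])
print(state[:2])            # radial / along-track state after one period
```

    Under nonuniform sampling, T varies between updates, so Ad must be recomputed (or bounded) per interval, which is exactly why the paper's Lyapunov functional depends on the upper bound of the sampling period.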

  13. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling, calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de
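
    The idea of scanning an importance-function parameter and adopting the minimum-variance result can be sketched on a toy integral; the integrand and the density family below are illustrative choices, not the paper's.

```python
# Toy parametric importance-sampling scan: estimate I = integral of 3x^2
# over (0,1), i.e. I = 1, with densities p(x) = theta * x^(theta-1), and
# keep the theta whose run shows minimum estimated variance.
import numpy as np

rng = np.random.default_rng(1)

def is_estimate(theta, n=20000):
    u = rng.random(n)
    x = u ** (1.0 / theta)                    # inverse-CDF sampling from p
    w = 3 * x ** 2 / (theta * x ** (theta - 1))
    return w.mean(), w.var(ddof=1) / n        # estimate, variance of the mean

results = {th: is_estimate(th) for th in (1.0, 2.0, 3.0, 4.0)}
best = min(results, key=lambda th: results[th][1])
print(best, results[best][0])
```

    At θ = 3 the density is proportional to the integrand, so the weights are constant and the variance collapses to zero; the scan correctly selects it, while θ = 1 reduces to crude Monte Carlo.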

  14. Quantitative analysis of tip-sample interaction in non-contact scanning force spectroscopy

    International Nuclear Information System (INIS)

    Palacios-Lidon, Elisa; Colchero, Jaime

    2006-01-01

    Quantitative characterization of tip-sample interaction in scanning force microscopy is fundamental for optimum image acquisition as well as data interpretation. In this work we discuss how to characterize the electrostatic and van der Waals contribution to tip-sample interaction in non-contact scanning force microscopy precisely. The spectroscopic technique presented is based on the simultaneous measurement of cantilever deflection, oscillation amplitude and frequency shift as a function of tip-sample voltage and tip-sample distance as well as on advanced data processing. Data are acquired at a fixed lateral position as interaction images, with the bias voltage as fast scan, and tip-sample distance as slow scan. Due to the quadratic dependence of the electrostatic interaction with tip-sample voltage the van der Waals force can be separated from the electrostatic force. Using appropriate data processing, the van der Waals interaction, the capacitance and the contact potential can be determined as a function of tip-sample distance. The measurement of resonance frequency shift yields very high signal to noise ratio and the absolute calibration of the measured quantities, while the acquisition of cantilever deflection allows the determination of the tip-sample distance

  15. A novel glucose biosensor based on phosphonic acid-functionalized silica nanoparticles for sensitive detection of glucose in real samples

    International Nuclear Information System (INIS)

    Zhao, Wenbo; Fang, Yi; Zhu, Qinshu; Wang, Kuai; Liu, Min; Huang, Xiaohua; Shen, Jian

    2013-01-01

    An effective strategy for preparing an amperometric biosensor using phosphonic acid-functionalized silica nanoparticles (PFSi NPs) as the modifying material is proposed. In this strategy, glucose oxidase (GOD) was selected as a model protein to fabricate a glucose biosensor in the presence of PFSi NPs. The PFSi NPs were first deposited on the surface of a glassy carbon (GC) electrode; GOD was then adsorbed onto the PFSi NP film by drop-coating. The PFSi NPs were characterized by transmission electron microscopy (TEM) and nuclear magnetic resonance (NMR) spectra. The interaction of PFSi NPs with GOD was investigated by circular dichroism (CD) spectroscopy. The results showed that PFSi NPs essentially maintain the native conformation of GOD. The direct electron transfer of GOD on the (PFSi NPs)/GCE electrode exhibited excellent electrocatalytic activity for the oxidation of glucose. The proposed biosensor displayed a fast amperometric response (5 s) to glucose, a good linear current-time relation over a wide range of glucose concentrations from 5.00 × 10⁻⁴ to 1.87 × 10⁻¹ M, and a low detection limit of 2.44 × 10⁻⁵ M (S/N = 3). Moreover, the biosensor can be used to assess the concentration of glucose in many real samples (relative error < 3%). The simple fabrication method and good performance give the PFSi NP-modified GOD biosensor practical value for future applications.
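
    The S/N = 3 detection limit quoted above follows from the calibration slope and the blank noise. A sketch with synthetic calibration data; all concentrations, responses, and the noise level are illustrative numbers, not the paper's.

```python
# Calibration slope and 3-sigma detection limit (the S/N = 3 criterion);
# calibration concentrations, response and noise level are synthetic.
import numpy as np

conc = np.array([5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1])      # M
rng = np.random.default_rng(0)
current = 2.0e-3 * conc + rng.normal(0, 1e-7, conc.size)    # A, linear range
slope, intercept = np.polyfit(conc, current, 1)

sd_blank = 1e-7                      # A, standard deviation of blank signal
lod = 3 * sd_blank / slope           # detection limit at S/N = 3
print(f"slope = {slope:.3e} A/M, LOD = {lod:.2e} M")
```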

  16. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  17. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    Science.gov (United States)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, and they are characterized on the basis of their disparate time scales. Signal coding is then discussed, with a brief description of frequently used codes and their limitations. Finally, several aspects of statistical data processing, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques, are discussed.
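
    The autocovariance analysis mentioned above typically uses the biased (1/N) sample estimator, which guarantees a positive semidefinite autocovariance sequence. A short sketch on a synthetic AR(1) series; the model and its parameters are illustrative.

```python
# Biased (1/N) sample autocovariance, the standard choice in spectral
# processing because it yields a positive semidefinite sequence.
import numpy as np

def autocovariance(x, max_lag):
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # Divide by n (not n - k): biased but positive semidefinite estimator.
    return np.array([xc[:n - k] @ xc[k:] / n for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
phi, n = 0.8, 50000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):                 # AR(1): gamma(k)/gamma(0) = phi**k
    x[t] = phi * x[t - 1] + e[t]

gamma = autocovariance(x, 5)
print(gamma[1] / gamma[0])            # close to phi = 0.8
```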

  18. Restoring a smooth function from its noisy integrals

    Science.gov (United States)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
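
    As a generic illustration of the problem stated above (this is plain penalized least squares, not the paper's bin hierarchy method), a smooth function can be recovered from noisy bin integrals on a fine grid:

```python
# Generic sketch: recover a smooth f from noisy bin integrals by least
# squares on a fine grid with a second-difference smoothness penalty.
# All settings are illustrative; this is NOT the bin hierarchy method.
import numpy as np

M, K = 200, 10                               # grid points, bins
x = np.linspace(0, 1, M)
f_true = np.sin(2 * np.pi * x)
edges = np.linspace(0, 1, K + 1)

dx = x[1] - x[0]
A = np.zeros((K, M))                         # grid values -> bin integrals
for k in range(K):
    A[k, (x >= edges[k]) & (x < edges[k + 1])] = dx
b = A @ f_true + np.random.default_rng(0).normal(0, 1e-3, K)   # noisy data

D = np.diff(np.eye(M), n=2, axis=0)          # second-difference operator
lam = 1e-8                                   # fit vs. smoothness trade-off
f_rec = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

print(np.corrcoef(f_rec, f_true)[0, 1])      # should track the true curve
```

    With a tiny penalty weight the solution essentially reproduces the bin integrals while selecting the smoothest compatible curve; the bin hierarchy method of the paper chooses this trade-off far more systematically.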

  19. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.
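
    The GPR special case mentioned above admits a compact numpy sketch; the RSGPR's sample-selection and robustness machinery is not reproduced here, and the kernel, lengthscale, and noise level are illustrative choices.

```python
# Plain Gaussian process regression (the GPR special case of the model
# discussed above). Kernel, lengthscale and noise level are illustrative.
import numpy as np

def rbf(a, b, ell=0.15):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 25))
y = np.sin(2 * np.pi * X) + rng.normal(0, 0.05, 25)

sigma2 = 0.05 ** 2                                  # observation noise
K = rbf(X, X) + sigma2 * np.eye(len(X))
Xs = np.linspace(0, 1, 101)
Ks = rbf(Xs, X)

mean = Ks @ np.linalg.solve(K, y)                   # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)   # posterior covariance
print(float(np.corrcoef(mean, np.sin(2 * np.pi * Xs))[0, 1]))
```

    The RSGPR replaces the Gaussian likelihood with a robust one and augments this posterior with a selection equation; the conjugate linear algebra above is the part it inherits from GPR.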

  20. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    Science.gov (United States)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace, MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. 
Both batch and individual registrations will be possible

  1. Patient-reported mobility function and engagement in young adults with cerebral palsy: a cross-sectional sample.

    Science.gov (United States)

    Lennon, N; Church, C; Miller, F

    2018-04-01

    To describe self-reported life satisfaction and motor function of young adults with cerebral palsy (CP). A total of 57 young adults with spastic CP classified as levels I (seven), II (25), III (16), IV (nine) by the Gross Motor Function Classification System, followed from childhood by our CP clinic, returned at a mean age of 27 years two months (SD 3 years 4 months). Self-reported life satisfaction and mobility status were measured by the Pediatric Outcomes Data Collection Instrument (PODCI), Patient-Reported Outcomes Measurement Information System (PROMIS), Functional Mobility Scale (FMS) and a project questionnaire. Surgical history and childhood mobility were confirmed from medical records. The Functional Mobility Scale demonstrated limited but stable mobility function from childhood to adulthood. The PROMIS and PODCI revealed limited motor function compared with a non-disabled normative reference (p mobility function using the FMS correlated highly (r = 0.8; p mobility is limited and community independence is not fully achieved in young adults with CP, these participants maintained childhood levels of mobility function into young adulthood, were satisfied with social roles and had minimal reports of pain.

  2. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed to add several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected to conduct listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.

  3. Parametric inference for discretely sampled stochastic differential equations

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A review is given of parametric estimation methods for discretely sampled multivariate diffusion processes. The main focus is on estimating functions and asymptotic results. Maximum likelihood estimation is briefly considered, but the emphasis is on computationally less demanding martingale...

  4. Challenging genosensors in food samples: The case of gluten determination in highly processed samples.

    Science.gov (United States)

    Martín-Fernández, Begoña; de-los-Santos-Álvarez, Noemí; Martín-Clemente, Juan Pedro; Lobo-Castañón, María Jesús; López-Ruiz, Beatriz

    2016-01-01

    Electrochemical genosensors have undergone an enormous development in the last decades, but only very few have achieved a quantification of target content in highly processed food samples. The detection of allergens, and particularly gluten, is challenging because legislation establishes a threshold of 20 ppm for labeling as gluten-free, but most genosensors express their results as DNA concentration or DNA copies. This paper describes the first attempt to correlate the genosensor response and the wheat content in real samples, even in the case of highly processed food samples. A sandwich-based format, comprising a capture probe immobilized onto the screen-printed gold electrode and a signaling probe functionalized with fluorescein isothiocyanate (FITC), both hybridizing with the target, was used. The hybridization event was electrochemically monitored by adding an anti-FITC peroxidase (antiFITC-HRP) and its substrate, tetramethylbenzidine. Binary model mixtures, as a reference material, and real samples have been analyzed. DNA from food was extracted and a fragment encoding the immunodominant peptide of α2-gliadin amplified by a tailored PCR. The sensor was able to selectively detect cereals toxic for celiac patients, such as different varieties of wheat, barley, rye and oats, from non-toxic plants. As low as 0.001% (10 mg/kg) of wheat flour in an inert matrix was reliably detected, which competes directly with the current method of choice for DNA detection, real-time PCR. A good correlation with the official immunoassay was found in highly processed food samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Improvement of fuel sampling device for STACY and TRACY

    International Nuclear Information System (INIS)

    Hirose, Hideyuki; Sakuraba, Koichi; Onodera, Seiji

    1998-05-01

    STACY and TRACY, static and transient experiment facilities in NUCEF, use solution fuel. It is important to analyze the fuel composition accurately (uranium enrichment, uranium concentration, nitric acid molarity, amount of impurities, radioactivity of FP) for their safe operation and for improving experimental accuracy. Both STACY and TRACY have sampling devices to sample fuel solution for that purpose. The previous sampling devices of STACY and TRACY had been designed to dilute the fuel sample with nitric acid; their sampling mechanism poured the fuel sample into the sampling vessel by a piston drive of nitric acid in the burette. It was, however, sometimes found that the sampled fuel solution was diluted by mixing with nitric acid in the burette. Therefore, the sampling mechanism was changed to a fixed-quantity pump drive that does not use nitric acid. The authors confirmed through a function test that the performance of the new sampling device was improved: the uncertainty in uranium concentration measurement using the improved sampling device was 0.14% (coefficient of variation), less than the design value of 0.2%. (author)

  6. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that a preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze if the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. 
The experiments are further
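
    The quantile matching approach described above can be sketched with empirical CDFs on synthetic "model" and "observed" series; the distributions and bias factor below are illustrative, and the calibration-period length maps directly onto the sample-size question the abstract investigates.

```python
# Empirical quantile mapping (eQM-style) sketch: pass model values
# through the model CDF, then through the inverse observed CDF.
# Distributions and the wet-bias factor are illustrative.
import numpy as np

def quantile_map(model_cal, obs_cal, model_out):
    q = np.interp(model_out, np.sort(model_cal),
                  np.linspace(0, 1, len(model_cal)))   # model empirical CDF
    return np.quantile(obs_cal, q)                     # observed inverse CDF

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 10000)            # "observed" precipitation
model = 1.5 * rng.gamma(2.0, 2.0, 10000)    # model output with a wet bias

corrected = quantile_map(model, obs, model)
print(obs.mean(), model.mean(), corrected.mean())
```

    Shortening the calibration sample shrinks the tails of both empirical CDFs, which is one concrete way the sample-size dependence discussed above enters the transfer function.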

  7. 36Cl measurements of Hiroshima concrete samples

    International Nuclear Information System (INIS)

    Matsuhiro, T.; Nagashima, Y.; Seki, R.; Takahashi, T.

    2002-01-01

    36Cl AMS studies are reported. A new sample-preparation procedure has been developed, achieving a tremendous reduction of the sulphur background. The 36Cl contents of two atomic-bombed concrete samples, one from the old Hiroshima Bank and one from the Gokoku Shrine, have been measured as 36Cl to Cl ratios with the Tsukuba AMS system. The 36Cl to Cl ratio of the old Hiroshima Bank sample agrees very well with the result of the γ-ray measurement of 152Eu; however, the ratio is about 20% smaller than the estimate of the DS86 dosimetry system. The result for the Gokoku Shrine sample is also smaller than the depth-profile estimate of the same DS86. It appears that DS86 tends to overestimate; the calculation method and/or the parameters used in the calculation need to be improved. (author)

  8. Use of thermal neutron reflection method for chemical analysis of bulk samples

    International Nuclear Information System (INIS)

    Papp, A.; Csikai, J.

    2014-01-01

    Microscopic, σβ, and macroscopic, Σβ, reflection cross-sections of thermal neutrons averaged over bulk samples as a function of thickness (z) are given. The σβ values are additive even for bulk samples in the z = 0.5–8 cm interval, and so the σβmol(z) function could be given for hydrogenous substances, including some illicit drugs, explosives and hiding materials of ∼1000 cm³ dimensions. The calculated excess counts agree with the measured R(z) values. For the identification of concealed objects and chemical analysis of bulky samples, different neutron methods need to be used simultaneously. - Highlights: • Check of the proposed analytical expression for the description of the flux. • Determination of the reflection cross-sections averaged over bulk samples. • Data rendered to estimate the excess counts for various materials

  9. Use of thermal neutron reflection method for chemical analysis of bulk samples

    Energy Technology Data Exchange (ETDEWEB)

    Papp, A., E-mail: papppa@atomki.hu [Institute of Nuclear Research of the Hungarian Academy of Sciences, (ATOMKI), 4001 Debrecen, Pf. 51 (Hungary); Csikai, J. [Institute of Nuclear Research of the Hungarian Academy of Sciences, (ATOMKI), 4001 Debrecen, Pf. 51 (Hungary); Institute of Experimental Physics, University Debrecen (IEP), 4010 Debrecen-10, Pf. 105 (Hungary)

    2014-09-11

    Microscopic, σ{sub β}, and macroscopic, Σ{sub β}, reflection cross-sections of thermal neutrons averaged over bulk samples as a function of thickness (z) are given. The σ{sub β} values are additive even for bulk samples in the z=0.5–8 cm interval and so the σ{sub βmol}(z) function could be given for hydrogenous substances, including some illicit drugs, explosives and hiding materials of ∼1000 cm{sup 3} dimensions. The calculated excess counts agree with the measured R(z) values. For the identification of concealed objects and chemical analysis of bulky samples, different neutron methods need to be used simultaneously. - Highlights: • Check the proposed analytical expression for the description of the flux. • Determination of the reflection cross-sections averaged over bulk samples. • Data rendered to estimate the excess counts for various materials.

  10. Parent Rated Symptoms of Inattention in Childhood Predict High School Academic Achievement Across Two Culturally and Diagnostically Diverse Samples

    Directory of Open Access Journals (Sweden)

    Astri J. Lundervold

    2017-08-01

    Objective: To investigate parent reports of childhood symptoms of inattention as a predictor of adolescent academic achievement, taking into account the impact of the child’s intellectual functioning, in two diagnostically and culturally diverse samples. Method: Samples: (a) an all-female sample in the U.S. predominated by youth with ADHD (Berkeley Girls with ADHD Longitudinal Study [BGALS], N = 202), and (b) a mixed-sex sample recruited from a Norwegian population-based sample (the Bergen Child Study [BCS], N = 93). Inattention and intellectual function were assessed via the same measures in the two samples; academic achievement scores during and beyond high school and demographic covariates were country-specific. Results: Childhood inattention predicted subsequent academic achievement in both samples, with a somewhat stronger effect in the BGALS sample, which included a large subgroup of children with ADHD. Intellectual function was another strong predictor, but the effect of early inattention remained statistically significant in both samples when intellectual function was covaried. Conclusion: The effect of early indicators of inattention on future academic success was robust across the two samples. These results support the broad application of remediation procedures. Future longitudinal multicenter studies with pre-planned common inclusion criteria should be performed to increase our understanding of the importance of inattention in primary school children for concurrent and prospective functioning.

  11. Test plan for K Basin Sludge Canister and Floor Sampling Device

    International Nuclear Information System (INIS)

    Meling, T.A.

    1995-01-01

    This document provides the test plan and procedure forms for conducting the functional and operational acceptance testing of the K Basin Sludge Canister and Floor Sampling Device(s). These samplers collect sludge from the floor of the 100K Basins and from 100K fuel storage canisters

  12. Characterization of the March 2017 Tank 15 Waste Removal Slurry Sample (Combination of Slurry Samples HTF-15-17-28 and HTF-15-17-29)

    Energy Technology Data Exchange (ETDEWEB)

    Reboul, S. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); King, W. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-05-09

    Two March 2017 Tank 15 slurry samples (HTF-15-17-28 and HTF-15-17-29) were collected during the second bulk waste removal campaign and submitted to SRNL for characterization. At SRNL, the two samples were combined and then characterized by a series of physical, elemental, radiological, and ionic analysis methods. Sludge settling as a function of time was also quantified. The characterization results reported in this document are consistent with expectations based upon waste type, process knowledge, comparisons between alternate analysis techniques, and comparisons with the characterization results obtained for the November 2016 Tank 15 slurry sample (the sample collected during the first bulk waste removal campaign).

  13. Calibrated work function mapping by Kelvin probe force microscopy

    Science.gov (United States)

    Fernández Garrillo, Pablo A.; Grévin, Benjamin; Chevalier, Nicolas; Borowik, Łukasz

    2018-04-01

    We propose and demonstrate the implementation of an alternative work function tip calibration procedure for Kelvin probe force microscopy under ultrahigh vacuum, using monocrystalline metallic materials with known crystallographic orientation as reference samples, instead of the often-used highly oriented pyrolytic graphite calibration sample. The implementation of this protocol allows the acquisition of absolute and reproducible work function values, with improved uncertainty with respect to unprepared highly oriented pyrolytic graphite-based protocols. The developed protocol allows the local investigation of absolute work function values over nanostructured samples and can be used in the characterization of electronic structures and devices, as demonstrated on a nanostructured semiconductor sample presenting Al0.7Ga0.3As and GaAs layers with variable thickness. Additionally, using our protocol we find that the work function of annealed highly oriented pyrolytic graphite is equal to 4.6 ± 0.03 eV.

  14. Monte Carlo parametric importance sampling with particle tracks scaling

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.

    1981-01-01

    A method for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and the others rejected. The proposed method is applied to the finite slab penetration problem. When the exponential transformation is used, our method involves scaling of the generated particle tracks, and is a new application of Morton's method of similar trajectories. The method constitutes a generalization of Spanier's multistage importance sampling method, obtained by properly weighting, over a single stage, the curves he obtains over several stages, and it preserves the statistical correlations between histories. It represents an extension of a theory by Frolov and Chentsov on Monte Carlo calculations of smooth curves to surfaces and to importance sampling calculations. By the proposed method, it seems possible to systematically arrive at minimum-variance results and to avoid the infinite variances and effective biases sometimes observed in this type of calculation. (orig.) [de
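
The strategy of scanning the variance as a function of an importance-sampling parameter and keeping the minimum-variance result can be illustrated on a toy problem, estimating the tail probability P(X > a) for X ~ Exp(1) by sampling from Exp(λ). This is a generic sketch of parametric importance sampling, not of the slab-penetration calculation itself:

```python
import numpy as np

def tail_prob_is(a, lam, n, rng):
    """Importance-sampling estimate of P(X > a) for X ~ Exp(1),
    drawing samples from Exp(lam) instead."""
    x = rng.exponential(1.0 / lam, size=n)
    # Likelihood-ratio weight: target density / sampling density
    w = np.exp(-x) / (lam * np.exp(-lam * x))
    score = w * (x > a)
    return score.mean(), score.var()

a, n = 5.0, 20000
rng = np.random.default_rng(1)
# Scan the parameter over a broad range; adopt the minimum-variance result
lams = np.linspace(0.1, 1.5, 15)
results = [tail_prob_is(a, lam, n, rng) for lam in lams]
best = min(zip(lams, results), key=lambda t: t[1][1])
# best[0] is the chosen lambda; best[1][0] estimates exp(-5)
```

For this target the low-variance region lies at small λ (the proposal pushed toward the tail), and the adopted estimate is close to the exact value e⁻⁵.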

  15. An integrate-over-temperature approach for enhanced sampling.

    Science.gov (United States)

    Gao, Yi Qin

    2008-02-14

    A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations which thus enhances the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled in a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
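
One way to read the multiple-temperature Boltzmann construction is as an effective potential built from a sum of Boltzmann factors, U_eff(x) = -(1/β₀) ln Σₖ nₖ exp(-βₖ U(x)), whose barriers are lower than those of U itself. The sketch below uses arbitrary β values and uniform weights nₖ (illustrative assumptions, not the paper's parameters) on a 1D double well:

```python
import numpy as np

def effective_potential(U, betas, beta0, weights=None):
    """Sum-over-temperatures effective potential (uniform weights by default)."""
    betas = np.asarray(betas, dtype=float)
    if weights is None:
        weights = np.ones_like(betas)
    # Stable log-sum-exp over the temperature ladder
    a = -np.outer(betas, U)                       # shape (K, len(U))
    m = a.max(axis=0)
    return -(m + np.log(np.sum(weights[:, None] * np.exp(a - m), axis=0))) / beta0

x = np.linspace(-2.0, 2.0, 401)
U = (x**2 - 1.0) ** 2                 # double well, barrier height 1 at x = 0
betas = [0.2, 0.4, 0.6, 0.8, 1.0]
Ueff = effective_potential(U, betas, beta0=1.0)
barrier = U[200] - U.min()            # original barrier, = 1.0
barrier_eff = Ueff[200] - Ueff.min()  # noticeably smaller
bias = Ueff - U                       # biased potential for the accelerated run
```

The reduced barrier in U_eff is what allows the simulation to random-walk across a wide energy range.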

  16. A preliminary investigation of sleep quality in functional neurological disorders: Poor sleep appears common, and is associated with functional impairment.

    Science.gov (United States)

    Graham, Christopher D; Kyle, Simon D

    2017-07-15

    Functional neurological disorders (FND) are disabling conditions for which there are few empirically-supported treatments. Disturbed sleep appears to be part of the FND context; however, the clinical importance of sleep disturbance (extent, characteristics and impact) remains largely unknown. We described sleep quality in two samples, and investigated the relationship between sleep and FND-related functional impairment. We included a sample recruited online via patient charities (N=205) and a consecutive clinical sample (N=20). Participants completed validated measures of sleep quality and sleep characteristics (e.g. total sleep time, sleep efficiency), mood, and FND-related functional impairment. Poor sleep was common in both samples (89% in the clinical range), which was characterised by low sleep efficiency (M=65.40%) and low total sleep time (M=6.05h). In regression analysis, sleep quality was negatively associated with FND-related functional impairment, accounting for 16% of the variance and remaining significant after the introduction of mood variables. These preliminary analyses suggest that subjective sleep disturbance (low efficiency, short sleep) is common in FND. Sleep quality was negatively associated with the functional impairment attributed to FND, independent of depression. Therefore, sleep disturbance may be a clinically important feature of FND. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Psychotic Experiences and Neuropsychological Functioning in a Population-based Sample.

    Science.gov (United States)

    Mollon, Josephine; David, Anthony S; Morgan, Craig; Frissa, Souci; Glahn, David; Pilecka, Izabela; Hatch, Stephani L; Hotopf, Matthew; Reichenberg, Abraham

    2016-02-01

    Psychotic experiences in early life are associated with neuropsychological impairment and the risk for later psychiatric disorders. Psychotic experiences are also prevalent in adults, but neuropsychological investigations spanning adulthood are limited, and confounding factors have not been examined rigorously. To characterize neuropsychological functioning in adults with psychotic experiences while adjusting for important sociodemographic characteristics and familial factors and investigating the effect of age. The South East London Community Health (SELCoH) study is a population-based household survey of physical and mental health in individuals 16 years or older conducted from June 1, 2008, to December 31, 2010, in 2 London boroughs. The study included 1698 participants from 1075 households. Data were analyzed from May 6, 2014, to April 22, 2015. Psychotic experiences measured using the Psychosis Screening Questionnaire. Neuropsychological functioning measured using tests assessing verbal knowledge (Wechsler Test of Adult Reading), working memory (Spatial Delayed Response Task), memory (Visual Object Learning Task), and processing speed (digit symbol coding task). A composite IQ score of general cognitive ability was calculated. A total of 1677 participants with a mean (SD) age of 40 (17) years were included in the analysis. Compared with the group without psychotic experiences, the 171 (9.7%) adults with psychotic experiences did not show a statistically significant impairment on mean (SD) measures of IQ (95.25 [16.58] vs 100.45 [14.77]; Cohen d, -0.22; P = .06) or processing speed (40.63 [13.06] vs 42.17 [13.79]; Cohen d, -0.03; P = .73) but were impaired on measures of verbal knowledge (31.36 [15.78] vs 38.83 [12.64]; Cohen d, -0.37; P = .003), working memory (20.97 [4.12] vs 22.51 [3.26]; Cohen d, -0.34; P = .005), and memory (43.80 [8.45] vs 46.53 [7.06]; Cohen d, -0.28; P = .01). Only participants 50 years and older with psychotic

  18. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.

    2004-01-01

    A method previously used for the determination of the average neutron flux within bulky samples has been applied to the measurement of the hydrogen content of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentration. Results obtained by the activation and the thermal neutron reflection methods are compared

  19. Targeted quantification of functional enzyme dynamics in environmental samples for microbially mediated biogeochemical processes: Targeted quantification of functional enzyme dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Li, Minjing [School of Environmental Studies, China University of Geosciences, Wuhan 430074 People' s Republic of China; Gao, Yuqian [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Qian, Wei-Jun [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Shi, Liang [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Liu, Yuanyuan [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Nelson, William C. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Nicora, Carrie D. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Resch, Charles T. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Thompson, Christopher [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Yan, Sen [School of Environmental Studies, China University of Geosciences, Wuhan 430074 People' s Republic of China; Fredrickson, James K. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland, WA 99354 USA; Liu, Chongxuan [Pacific Northwest National Laboratory, Richland, WA 99354 USA; School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055 People' s Republic of China

    2017-07-13

    Microbially mediated biogeochemical processes are catalyzed by enzymes that control the transformation of carbon, nitrogen, and other elements in the environment. The dynamic linkage between enzymes and biogeochemical species transformation has, however, rarely been investigated because of the lack of analytical approaches to efficiently and reliably quantify enzymes and their dynamics in soils and sediments. Herein, we developed a signature peptide-based technique for sensitively quantifying dissimilatory and assimilatory enzymes, using nitrate-reducing enzymes in a hyporheic zone sediment as an example. Moreover, the measured changes in enzyme concentration were found to correlate with the nitrate reduction rate in a way different from that inferred from biogeochemical models based on biomass or functional genes as surrogates for functional enzymes. This phenomenon has important implications for understanding and modeling the dynamics of microbial community functions and biogeochemical processes in the environment. Our results also demonstrate the importance of enzyme quantification for the identification and interrogation of those biogeochemical processes with low metabolite concentrations resulting from faster enzyme-catalyzed consumption of metabolites than their production. The dynamic enzyme behaviors provide a basis for the development of enzyme-based models to describe the relationship between the microbial community and biogeochemical processes.

  20. Sampled-Data Control of Spacecraft Rendezvous with Discontinuous Lyapunov Approach

    Directory of Open Access Journals (Sweden)

    Zhuoshi Li

    2013-01-01

    This paper investigates the sampled-data stabilization problem of spacecraft relative position holding with an improved Lyapunov function approach. The classical Clohessy-Wiltshire equation is adopted to describe the relative dynamic model. The relative position holding problem is converted into an output tracking control problem using sampled signals. A time-dependent discontinuous Lyapunov functional approach is developed, which leads to essentially less conservative results for the stability analysis and controller design of the corresponding closed-loop system. Sufficient conditions for the exponential stability analysis and the existence of the proposed controller are provided. Finally, a simulation result is presented to illustrate the effectiveness of the proposed control scheme.

  1. A novel sol-gel-based amino-functionalized fiber for headspace solid-phase microextraction of phenol and chlorophenols from environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Bagheri, Habib [Department of Chemistry, Sharif University of Technology, P.O. Box 11365-9516, Tehran (Iran, Islamic Republic of)], E-mail: Bagheri@sharif.edu; Babanezhad, Esmaeil; Khalilian, Faezeh [Department of Chemistry, Sharif University of Technology, P.O. Box 11365-9516, Tehran (Iran, Islamic Republic of)

    2008-05-26

    A novel amino-functionalized polymer was synthesized from 3-(trimethoxysilyl)propyl amine (TMSPA) as precursor and hydroxy-terminated polydimethylsiloxane (OH-PDMS) by sol-gel technology and coated on a fused-silica fiber. The synthesis was designed to impart a polar moiety into the coating network. Scanning electron microscopy (SEM) images of this new coating showed the homogeneity and the porous surface structure of the film. The efficiency of the new coating was investigated for headspace solid-phase microextraction (SPME) of some environmentally important chlorophenols from aqueous samples, followed by gas chromatography-mass spectrometry (GC-MS) analysis. The effects of different parameters influencing the extraction efficiency, such as extraction temperature, extraction time, ionic strength and pH, were investigated and optimized. In order to improve the separation efficiency of the phenolic compounds on the chromatography column, all the analytes were derivatized prior to extraction using acetic anhydride under alkaline conditions. The detection limits of the method under optimized conditions were in the range of 0.02-0.05 ng mL{sup -1}. The relative standard deviations (R.S.D.) (n = 6) at a concentration level of 0.5 ng mL{sup -1} were between 6.8 and 10%. The calibration curves of the chlorophenols were linear in the range of 0.5-200 ng mL{sup -1}. The proposed method was successfully applied to the extraction from spiked tap water samples, and relative recoveries were higher than 90% for all the analytes.
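
Detection limits in the 0.02–0.05 ng mL⁻¹ range are commonly derived from a linear calibration curve via the 3σ convention, LOD = 3·s_blank/slope. The abstract does not state which convention was used, so the sketch below is a generic illustration with hypothetical calibration data:

```python
import numpy as np

def lod_from_calibration(conc, signal, blank_sd):
    """3-sigma detection limit from a linear calibration: LOD = 3 * s_blank / slope."""
    slope, intercept = np.polyfit(conc, signal, 1)
    return 3.0 * blank_sd / slope

# Hypothetical calibration data: signal = 40 * concentration + 5 (noise-free)
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])      # ng/mL
signal = 40.0 * conc + 5.0
lod = lod_from_calibration(conc, signal, blank_sd=0.4)   # 3 * 0.4 / 40 = 0.03 ng/mL
```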

  2. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing, as well as in processing environmental data, in mind.

  3. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    Science.gov (United States)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

    NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the first-of-its-kind project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016; the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS), a three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of questionable, possibly under-responsive pixels using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. Said analyses will provide a broader understanding

  4. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods, like molecular dynamics or Monte Carlo, to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples, which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.

  5. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
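
The variance inflation caused by positioning errors can be seen in a tiny 1D simulation (a toy illustration, not one of the paper's error models): systematic sampling with a random start integrates a smooth periodic function essentially exactly, while jittered sample points do not.

```python
import numpy as np

def estimate(n, jitter_sd, rng):
    """Estimate the mean of sin(2*pi*x) over [0, 1) from n systematic points.

    Points are u + k/n (mod 1) with random start u, optionally perturbed by
    Gaussian positioning errors of standard deviation jitter_sd (wrapped)."""
    u = rng.random()
    x = (u + np.arange(n) / n) % 1.0
    x = (x + rng.normal(0.0, jitter_sd, size=n)) % 1.0
    return np.mean(np.sin(2 * np.pi * x))   # true mean is 0

rng = np.random.default_rng(2)
reps = 2000
var_exact = np.var([estimate(16, 0.0, rng) for _ in range(reps)])
var_jitter = np.var([estimate(16, 0.02, rng) for _ in range(reps)])
# var_exact is at machine-precision level; var_jitter is substantially larger
```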

  6. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation, not by statistical power analysis.

  7. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

    The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
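
The two-stage cluster design described above (first sample municipalities, then sample addresses within them) can be sketched generically; the identifiers and sizes below are made up, not the actual register:

```python
import random

def two_stage_sample(municipalities, addresses_by_mun, n_mun, n_addr, seed=0):
    """Two-stage cluster sampling: stage 1 samples municipalities (clusters),
    stage 2 samples addresses within the selected municipalities."""
    rng = random.Random(seed)
    stage1 = rng.sample(municipalities, n_mun)
    pool = [a for m in stage1 for a in addresses_by_mun[m]]
    stage2 = rng.sample(pool, n_addr)
    return stage1, stage2

# Toy register: 1000 municipalities with 50 addresses each
muns = [f"mun{i}" for i in range(1000)]
addrs = {m: [f"{m}-addr{j}" for j in range(50)] for m in muns}
selected_muns, selected_addrs = two_stage_sample(muns, addrs, n_mun=20, n_addr=300)
```

Every sampled address necessarily belongs to one of the stage-1 municipalities, which is what makes the design a cluster sample.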

  8. The Study on Mental Health at Work: Design and sampling

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-01-01

    Aims: The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. Methods: S-MGA is a representative study of German employees aged 31–60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. Results: In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. Conclusions: There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment. PMID:28673202

  9. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd has developed test facilities for aerosol generation, mixing, sampling and measurement under high pressure and high temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO{sub 2}/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on the experimental results using a main carrier gas of steam and air mixture, the uncertainty estimation of the sampled aerosol concentration was performed by applying the Gaussian error propagation law. FNC Technology Co., Ltd. has developed the experimental facilities for aerosol measurement under high pressure and high temperature. The purpose of the tests is to develop a commercial test module for an aerosol generation, mixing and sampling system applicable to the environmental industry and to safety-related systems in nuclear power plants. For the uncertainty calculation, the sampled aerosol concentration is not measured directly, but must be calculated from other quantities. The uncertainty of the sampled aerosol concentration is a function of the flow rates of air and steam, sampled mass, sampling time, condensed steam mass and their absolute errors. These errors propagate through the combination of variables in the function. Using operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized as system performance data.
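
As a simplified illustration of the Gaussian error propagation law applied here, suppose the sampled concentration takes the form C = m/(Q·t), sampled mass over flow rate times sampling time (the actual FNC expression also involves the steam/air split and condensed steam mass, which are omitted in this sketch). For independent errors, the relative uncertainties add in quadrature:

```python
import math

def concentration_with_uncertainty(m, dm, Q, dQ, t, dt):
    """C = m / (Q * t) with Gaussian propagation of independent absolute errors.

    Since ln C = ln m - ln Q - ln t, the relative variances add in quadrature."""
    C = m / (Q * t)
    rel = math.sqrt((dm / m) ** 2 + (dQ / Q) ** 2 + (dt / t) ** 2)
    return C, C * rel

# e.g. 2.0 mg collected at 10 L/min for 60 min, with small instrument errors
C, dC = concentration_with_uncertainty(2.0, 0.01, 10.0, 0.1, 60.0, 0.5)
# relative uncertainty dC / C comes out on the order of 1%
```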

  10. Value of a simple technique for the measurement of total renal function and each kidney functions without requiring blood or urine samples

    International Nuclear Information System (INIS)

    Meyers, A.; Chachati, A.; Godon, J.P.; Rigo, P.

    1985-01-01

    The determination of the renal uptake of 99mTc-DTPA and of 131I-Hippuran (as a percentage of the administered dose) at a time interval of 1-3 minutes after tracer injection is a rapid, accurate method for the measurement of total renal function and of individual kidney function. Its clinical validity has been confirmed.

  11. Fabry-Pérot cavity based on chirped sampled fiber Bragg gratings.

    Science.gov (United States)

    Zheng, Jilin; Wang, Rong; Pu, Tao; Lu, Lin; Fang, Tao; Li, Weichun; Xiong, Jintian; Chen, Yingfang; Zhu, Huatao; Chen, Dalei; Chen, Xiangfei

    2014-02-10

    A novel kind of Fabry-Pérot (FP) structure based on a chirped sampled fiber Bragg grating (CSFBG) is proposed and demonstrated. In this structure, the regular chirped FBG (CFBG) that functions as the reflecting mirror in the FP cavity is replaced by a CSFBG, which is realized by chirping the sampling periods of a sampled FBG having a uniform local grating period. Realizing such CSFBG-FPs with diverse properties requires only a single uniform-pitch phase mask and a sub-micrometer-precision moving stage. Compared with the conventional CFBG-FP, CSFBG-FPs of diverse functions are more flexible to design, and the fabrication process is simpler. As a demonstration, using the same experimental facilities, FPs with uniform FSR (~73 pm) and chirped FSR (varying from 28 pm to 405 pm) were fabricated, showing good agreement with simulation results.

  12. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek

    2017-07-07

    Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.
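The autocovariance function mentioned here has a standard closed form, γ(k) = (σ²/2)(|k+1|^(2H) − 2|k|^(2H) + |k−1|^(2H)). A minimal sketch (of the fGn autocovariance only, not the paper's PC-prior computation) shows how H = 0.5 recovers white noise while H above or below 0.5 gives persistent or antipersistent correlations:

```python
def fgn_autocov(k, H, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise at integer lag k:
    gamma(k) = (sigma2/2) * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))."""
    h2 = 2.0 * H
    return 0.5 * sigma2 * (abs(k + 1) ** h2 - 2 * abs(k) ** h2 + abs(k - 1) ** h2)

# H = 0.5 gives zero autocovariance at all nonzero lags (white noise);
# H > 0.5 gives positive (persistent), H < 0.5 negative (antipersistent) lag-1 covariance.
print([round(fgn_autocov(k, 0.9), 4) for k in range(4)])
```

This "white noise as the special case" structure is exactly what the PC prior penalises divergence from.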

  13. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek; Rue, Haavard

    2017-01-01

    Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.

  14. Modularity and the spread of perturbations in complex dynamical systems.

    Science.gov (United States)

    Kolchinsky, Artemy; Gates, Alexander J; Rocha, Luis M

    2015-12-01

    We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.
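The central idea, that modules constrain the spread of perturbations, can be illustrated with a toy coupled logistic map (the coupling weights, states, and perturbation below are all hypothetical; this is not the paper's perturbation-modularity optimization, which searches over partitions to maximize the coarse-grained autocovariance):

```python
def step(x, W, r=4.0):
    """One update of a coupled map lattice: each variable moves to a
    W-weighted average of f(x_j) with the logistic map f(x) = r*x*(1-x)."""
    f = [r * xi * (1.0 - xi) for xi in x]
    return [sum(W[i][j] * f[j] for j in range(len(x))) for i in range(len(x))]

# Hypothetical 4-variable system with two modules {0,1} and {2,3}:
# strong coupling within a module (0.08), weak across modules (0.01).
W = [[0.90, 0.08, 0.01, 0.01],
     [0.08, 0.90, 0.01, 0.01],
     [0.01, 0.01, 0.90, 0.08],
     [0.01, 0.01, 0.08, 0.90]]

x = [0.30, 0.60, 0.20, 0.70]
y = x[:]
y[0] += 1e-6                      # perturb variable 0 (module A)

x1, y1 = step(x, W), step(y, W)
d = [abs(a - b) for a, b in zip(x1, y1)]
# After one step the perturbation has spread 8x more strongly to the
# within-module neighbour (variable 1) than across modules (variables 2, 3),
# mirroring the fast intra- vs. slow intermodular dynamics described above.
print(d)
```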

  15. Functional design criteria for the retained gas sampler system

    International Nuclear Information System (INIS)

    Wootan, D.W.

    1995-01-01

    A Retained Gas Sampler System (RGSS) is being developed to capture and analyze waste samples from Hanford Flammable Gas Watch List Tanks to determine both the quantity and composition of gases retained in the waste. The RGSS consists of three main components: the Sampler, Extractor, and Extruder. This report describes the functional criteria for the design of the RGSS components. The RGSS Sampler is based on the WHC Universal Sampler design with modifications to eliminate gas leakage. The primary function of the Sampler is to capture a representative waste sample from a tank and transport the sample with minimal loss of gas content from the tank to the laboratory. The function of the Extruder is to transfer the waste sample from the Sampler to the Extractor. The function of the Extractor is to separate the gases from the liquids and solids, measure the relative volume of gas to determine the void fraction, and remove and analyze the gas constituents

  16. Comparing Microbiome Sampling Methods in a Wild Mammal: Fecal and Intestinal Samples Record Different Signals of Host Ecology, Evolution.

    Science.gov (United States)

    Ingala, Melissa R; Simmons, Nancy B; Wultsch, Claudia; Krampis, Konstantinos; Speer, Kelly A; Perkins, Susan L

    2018-01-01

    The gut microbiome is a community of host-associated symbiotic microbes that fulfills multiple key roles in host metabolism, immune function, and tissue development. Given the ability of the microbiome to impact host fitness, there is increasing interest in studying the microbiome of wild animals to better understand these communities in the context of host ecology and evolution. Human microbiome research protocols are well established, but wildlife microbiome research is still a developing field. Currently, there is no standardized set of best practices guiding the collection of microbiome samples from wildlife. Gut microflora are typically sampled either by fecal collection, rectal swabbing, or by destructively sampling the intestinal contents of the host animal. Studies rarely include more than one sampling technique and no comparison of these methods currently exists for a wild mammal. Although some studies have hypothesized that the fecal microbiome is a nested subset of the intestinal microbiome, this hypothesis has not been formally tested. To address these issues, we examined guano (feces) and distal intestinal mucosa from 19 species of free-ranging bats from Lamanai, Belize, using 16S rRNA amplicon sequencing to compare microbial communities across sample types. We found that the diversity and composition of intestine and guano samples differed substantially. In addition, we conclude that signatures of host evolution are retained by studying gut microbiomes based on mucosal tissue samples, but not fecal samples. Conversely, fecal samples retained more signal of host diet than intestinal samples. These results suggest that fecal and intestinal sampling methods are not interchangeable, and that these two microbiotas record different information about the host from which they are isolated.

  17. Comparing Microbiome Sampling Methods in a Wild Mammal: Fecal and Intestinal Samples Record Different Signals of Host Ecology, Evolution

    Directory of Open Access Journals (Sweden)

    Melissa R. Ingala

    2018-05-01

    The gut microbiome is a community of host-associated symbiotic microbes that fulfills multiple key roles in host metabolism, immune function, and tissue development. Given the ability of the microbiome to impact host fitness, there is increasing interest in studying the microbiome of wild animals to better understand these communities in the context of host ecology and evolution. Human microbiome research protocols are well established, but wildlife microbiome research is still a developing field. Currently, there is no standardized set of best practices guiding the collection of microbiome samples from wildlife. Gut microflora are typically sampled either by fecal collection, rectal swabbing, or by destructively sampling the intestinal contents of the host animal. Studies rarely include more than one sampling technique and no comparison of these methods currently exists for a wild mammal. Although some studies have hypothesized that the fecal microbiome is a nested subset of the intestinal microbiome, this hypothesis has not been formally tested. To address these issues, we examined guano (feces) and distal intestinal mucosa from 19 species of free-ranging bats from Lamanai, Belize, using 16S rRNA amplicon sequencing to compare microbial communities across sample types. We found that the diversity and composition of intestine and guano samples differed substantially. In addition, we conclude that signatures of host evolution are retained by studying gut microbiomes based on mucosal tissue samples, but not fecal samples. Conversely, fecal samples retained more signal of host diet than intestinal samples. These results suggest that fecal and intestinal sampling methods are not interchangeable, and that these two microbiotas record different information about the host from which they are isolated.

  18. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    Science.gov (United States)

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI, and four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm where the sampling ratios are constrained to adhere to PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF, and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.

  19. Recent advances in applications of nanomaterials for sample preparation.

    Science.gov (United States)

    Xu, Linnan; Qi, Xiaoyue; Li, Xianjiang; Bai, Yu; Liu, Huwei

    2016-01-01

    Sample preparation is a key step for qualitative and quantitative analysis of trace analytes in complicated matrices. Along with the rapid development of nanotechnology in materials science, numerous nanomaterials have been developed with particularly useful applications in analytical chemistry. Benefiting from their high specific surface areas, increased surface activities, and unprecedented physical/chemical properties, the potential of nanomaterials for rapid and efficient sample preparation has been exploited extensively. In this review, recent progress on novel nanomaterials applied in sample preparation is summarized and discussed. Both nanoparticles and nanoporous materials are evaluated for their unusual performance in sample preparation. Various compositions and functionalizations have extended the applications of nanomaterials in sample preparation, and distinct size and shape selectivity is generated from the diversified pore structures of nanoporous materials. Such variety makes nanomaterials versatile tools in sample preparation for almost all categories of analytes. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay & Silverman (1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between neighboring observations. Conclusions. Functional MAF outperforms functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially correlated data.

  1. Is emotional functioning related to academic achievement among university students? Results from a cross-sectional Iranian sample

    Directory of Open Access Journals (Sweden)

    Dena Sadeghi Bahmani

    2018-03-01

    Objective: Whereas several studies have predicted academic achievement (AA) as a function of favorable cognitive factors and low negative emotional functioning (such as depression and anxiety), little is known about its associations with cognitive-emotional states of positive emotional functioning, such as social satisfaction. The present study sought to evaluate associations of AA with dimensions of negative and positive emotional functioning. Method: This cross-sectional study enrolled 275 students (mean age 21.24 years; 66.1% female), who completed questionnaires covering sociodemographic parameters and AA scores, as well as measures of loneliness and depression (representing negative emotional functioning) and social satisfaction (representing positive emotional functioning). Results: Lower scores for negative and higher scores for positive emotional functioning were associated with higher AA scores. Multiple regression analysis showed that AA was predicted independently by both low negative and high positive emotional functioning. No gender differences were observed. Conclusions: The pattern of results observed in this study suggests that opposing dimensions of emotional functioning are independently related to AA. Students, educators, and health professionals dealing with students should focus both on increasing social satisfaction and on decreasing feelings of loneliness and depression.

  2. Capillary electrophoresis of covalently functionalized single-chirality carbon nanotubes.

    Science.gov (United States)

    He, Pingli; Meany, Brendan; Wang, Chunyan; Piao, Yanmei; Kwon, Hyejin; Deng, Shunliu; Wang, YuHuang

    2017-07-01

    We demonstrate the separation of chirality-enriched single-walled carbon nanotubes (SWCNTs) by degree of surface functionalization using high-performance CE. Controlled amounts of negatively charged and positively charged functional groups were attached to the sidewall of chirality-enriched SWCNTs through covalent functionalization using 4-carboxybenzenediazonium tetrafluoroborate or 4-diazo-N,N-diethylaniline tetrafluoroborate, respectively. Surfactant- and pH-dependent studies confirmed that under conditions that minimized ionic screening effects, separation of these functionalized SWCNTs was strongly dependent on the surface charge density introduced through covalent surface chemistry. For both heterogeneous mixtures and single-chirality-enriched samples, covalently functionalized SWCNTs showed substantially increased peak width in electropherogram spectra compared to nonfunctionalized SWCNTs, which can be attributed to a distribution of surface charges along the functionalized nanotubes. Successful separation of functionalized single-chirality SWCNTs by functional density was confirmed with UV-Vis-NIR absorption and Raman scattering spectroscopies of fraction-collected samples. These results suggest a high degree of structural heterogeneity in covalently functionalized SWCNTs, even for chirality-enriched samples, and show the feasibility of applying CE for high-performance separation of nanomaterials based on differences in surface functional density. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and the particular opacity sampling method used here are discussed in the light of the results obtained. (author)

  4. Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.

    Science.gov (United States)

    Anthony, T Renée; Sleeth, Darrah; Volckens, John

    2016-01-01

    In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted at a sampling flow rate of 10 L min⁻¹ in slow-moving air (0.2 m s⁻¹) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.

  5. Does the heritability of cognitive abilities vary as a function of parental education? Evidence from a German twin sample.

    Science.gov (United States)

    Spengler, Marion; Gottschling, Juliana; Hahn, Elisabeth; Tucker-Drob, Elliot M; Harzer, Claudia; Spinath, Frank M

    2018-01-01

    A well-known hypothesis in the behavioral genetic literature predicts that the heritability of cognitive abilities is higher in higher socioeconomic contexts. However, studies suggest that the effect of socioeconomic status (SES) on the heritability of cognitive ability may not be universal, as it has mostly been demonstrated in the United States, but not in other Western nations. In the present study we tested whether the importance of genetic and environmental effects on cognitive abilities varies as a function of parental education in a German twin sample. Cognitive ability scores (general, verbal, and nonverbal) were obtained on 531 German twin pairs (192 monozygotic, 339 dizygotic, ranging from 7 to 14 years of age; M_age = 10.25, SD = 1.83). Data on parental education were available from mothers and fathers. Results for general cognitive ability and nonverbal ability indicated no significant gene × parental education interaction effect. For verbal ability, a significant nonshared environment (E) × parental education interaction was found, in the direction of greater nonshared environmental influences on verbal abilities among children raised by more educated parents.

  6. Electronic module for control of sample feeding device of spectrometers of X-ray fluorescent analysis of CRV type

    International Nuclear Information System (INIS)

    Petrov, V.A.; Fursov, A.V.

    2002-01-01

    The scheme of the electronic module controlling the sample feeding device of CRV-type X-ray fluorescence spectrometers is considered. The module provides the following functions: sample changing and installation in the starting position; signaling and protection in emergency cases; indication of the number of samples in the spectrometer chamber; and a test function for tuning and checking of modules. All of these principal functions are handled by a microcontroller, which is programmed according to the algorithm of the whole sample feeding device. A PIC16C54 single-chip processor serves as the microcontroller, and a stepping motor of the NV-306-V2202 model is used.

  7. Single injection 51Cr EDTA plasma clearance determination in children using capillary blood samples

    International Nuclear Information System (INIS)

    Broechner-Mortensen, J.; Christoffersen, J.

    1977-01-01

    The reliability of the determination of the total 51Cr EDTA plasma clearance (E) (and with it the glomerular filtration rate) by a simplified single-injection method (injected dose: 4.5 μCi per kg b.w.) using capillary blood samples (0.2 ml) was investigated in twenty children. Clearance values determined from capillary blood samples did not differ significantly from those measured simultaneously from venous blood samples, the mean ratio ± SD being 1.02 ± 0.06 (n = 10). The reproducibility (total day-to-day variation) of E determined from capillary blood samples was 6.7% in children with decreased renal function (n = 3) and 6.9% in children with normal renal function (n = 7). The present data indicate that the use of capillary blood samples is an accurate and very precise approach for the determination of E in children. (Auth.)

  8. Phobos/Deimos sample return via solar sail.

    Science.gov (United States)

    Matloff, Gregory L; Taylor, Travis; Powell, Conley; Moton, Tryshanda

    2005-12-01

    A sample-return mission to the Martian satellites using a contemporary solar sail for all post-Earth-escape propulsion is proposed. The 0.015 kg/m² areal mass-thickness sail unfurls after launch and injection onto a Mars-bound Hohmann-transfer ellipse. Structure and payload increase the spacecraft areal mass thickness to 0.028 kg/m². During the Mars encounter, the sail functions as a parachute in the outer atmosphere of Mars to accomplish aerocapture. On-board thrusters or the sail maneuver the spacecraft into an orbit with periapsis near Mars and apoapsis near Phobos. The orbit is circularized for Phobos rendezvous; surface samples are collected. The sail then raises the orbit for Deimos rendezvous and sample collection. The sail next places the spacecraft on an Earth-bound Hohmann-transfer ellipse. During the Earth encounter, the sail accomplishes Earth aerocapture or partially decelerates the sample container for entry into the Earth's atmosphere. Mission mass budget is about 218 grams and mission duration is less than five years.

  9. Functional data analysis

    CERN Document Server

    Ramsay, J O

    1997-01-01

    Scientists today collect samples of curves and other functional observations. This monograph presents many ideas and techniques for such data. Included are expressions in the functional domain of such classics as linear regression, principal components analysis, linear modelling, and canonical correlation analysis, as well as specifically functional techniques such as curve registration and principal differential analysis. Data arising in real applications are used throughout for both motivation and illustration, showing how functional approaches allow us to see new things, especially by exploiting the smoothness of the processes generating the data. The data sets exemplify the wide scope of functional data analysis; they are drawn from growth analysis, meteorology, biomechanics, equine science, economics, and medicine. The book presents novel statistical technology while keeping the mathematical level widely accessible. It is designed to appeal to students, to applied data analysts, and to experienced researchers.

  10. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
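The filtering idea can be sketched with a toy naive Bayes stand-in for the Bayesian network classifier (the objective, population sizes, and accept rule below are illustrative assumptions, not Sandia's implementation): candidates are generated, scored by the classifier, and the expensive objective is called only on those scored as more likely "good".

```python
import math
import random

def onemax(bits):
    """Toy objective: number of ones (stand-in for an expensive simulation)."""
    return sum(bits)

def loglik(d, p):
    """Log-likelihood of binary design d under per-bit Bernoulli probabilities p."""
    return sum(math.log(p[i]) if d[i] else math.log(1.0 - p[i])
               for i in range(len(d)))

def cgs(n_bits=30, pop=40, gens=15, seed=0):
    """Minimal classifier-guided sampling sketch: a naive Bayes classifier
    trained on good/bad designs filters random candidates, and the objective
    is evaluated only on the survivors."""
    rng = random.Random(seed)
    evaluated = {}
    for _ in range(pop):                                 # initial random designs
        d = tuple(rng.randint(0, 1) for _ in range(n_bits))
        evaluated[d] = onemax(d)
    for _ in range(gens):
        ranked = sorted(evaluated, key=evaluated.get, reverse=True)
        good, bad = ranked[:len(ranked) // 2], ranked[len(ranked) // 2:]
        # Laplace-smoothed P(bit_i = 1 | class) for each class.
        p_good = [(1 + sum(d[i] for d in good)) / (2 + len(good))
                  for i in range(n_bits)]
        p_bad = [(1 + sum(d[i] for d in bad)) / (2 + len(bad))
                 for i in range(n_bits)]
        cands = [tuple(rng.randint(0, 1) for _ in range(n_bits))
                 for _ in range(pop)]
        for d in cands:                                  # classifier filter
            if d not in evaluated and loglik(d, p_good) > loglik(d, p_bad):
                evaluated[d] = onemax(d)                 # evaluate survivors only
    return max(evaluated.values())

best = cgs()
print(best)
```

A full CGS implementation would generate candidates with evolutionary operators and use a Bayesian network rather than independent per-bit probabilities; the skeleton of "classify before you evaluate" is the same.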

  11. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
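A standard route to such large-deviation rate functions for a discrete-time Markov chain (a sketch of the general tilted-matrix technique under assumed parameters, not necessarily the paper's exact method) is the scaled cumulant generating function: the log of the Perron eigenvalue of a "tilted" transition matrix, whose Legendre transform gives the rate function.

```python
import math

def scgf(P, obs, s, iters=2000):
    """Scaled cumulant generating function lambda(s) = ln(largest eigenvalue)
    of the tilted matrix Pt[x][y] = P[x][y] * exp(s * obs[y]), computed by
    power iteration (valid here because Pt has all-positive entries)."""
    n = len(P)
    Pt = [[P[x][y] * math.exp(s * obs[y]) for y in range(n)] for x in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        v = [sum(Pt[x][y] * v[y] for y in range(n)) for x in range(n)]
        lam = max(v)
        v = [vi / lam for vi in v]
    return math.log(lam)

# Hypothetical two-state chain; obs counts visits to state 1 per step.
P = [[0.9, 0.1],
     [0.5, 0.5]]
obs = [0.0, 1.0]
print(scgf(P, obs, 0.0), scgf(P, obs, 1.0))
```

At s = 0 the tilted matrix is the stochastic matrix itself, so lambda(0) = 0, and the derivative at 0 recovers the steady-state average of the observable.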

  12. Aberrant functional network connectivity in psychopathy from a large (N = 985) forensic sample.

    Science.gov (United States)

    Espinoza, Flor A; Vergara, Victor M; Reyes, Daisy; Anderson, Nathaniel E; Harenski, Carla L; Decety, Jean; Rachakonda, Srinivas; Damaraju, Eswar; Rashid, Barnaly; Miller, Robyn L; Koenigs, Michael; Kosson, David S; Harenski, Keith; Kiehl, Kent A; Calhoun, Vince D

    2018-06-01

    Psychopathy is a personality disorder characterized by antisocial behavior, lack of remorse and empathy, and impaired decision making. The disproportionate amount of crime committed by psychopaths has severe emotional and economic impacts on society. Here we examine the neural correlates associated with psychopathy to improve early assessment and perhaps inform treatments for this condition. Previous resting-state functional magnetic resonance imaging (fMRI) studies in psychopathy have primarily focused on regions of interest. This study examines whole-brain functional connectivity and its association to psychopathic traits. Psychopathy was hypothesized to be characterized by aberrant functional network connectivity (FNC) in several limbic/paralimbic networks. Group-independent component and regression analyses were applied to a data set of resting-state fMRI from 985 incarcerated adult males. We identified resting-state networks (RSNs), estimated FNC between RSNs, and tested their association to psychopathy factors and total summary scores (Factor 1, interpersonal/affective; Factor 2, lifestyle/antisocial). Factor 1 scores showed both increased and reduced functional connectivity between RSNs from seven brain domains (sensorimotor, cerebellar, visual, salience, default mode, executive control, and attentional). Consistent with hypotheses, RSNs from the paralimbic system-insula, anterior and posterior cingulate cortex, amygdala, orbital frontal cortex, and superior temporal gyrus-were related to Factor 1 scores. No significant FNC associations were found with Factor 2 and total PCL-R scores. In summary, results suggest that the affective and interpersonal symptoms of psychopathy (Factor 1) are associated with aberrant connectivity in multiple brain networks, including paralimbic regions. © 2018 Wiley Periodicals, Inc.

  13. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991
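The cost-benefit trade-off described above can be sketched numerically. This is a minimal illustration, not the authors' model: the cue dispersion, target radius, reward, and per-cue cost below are all hypothetical, and the hit probability assumes the participant aims at the sample mean of a 2-D circular Gaussian.

```python
import math

def p_hit(n, sigma=1.0, radius=1.5):
    # After n cues, the sample mean of a 2-D circular Gaussian has standard
    # error sigma/sqrt(n) per axis, so the distance of the estimate from the
    # true center is Rayleigh-distributed; this is that CDF at `radius`.
    se = sigma / math.sqrt(n)
    return 1.0 - math.exp(-(radius ** 2) / (2.0 * se ** 2))

def expected_gain(n, reward=100.0, cost_per_cue=2.0):
    # Each extra cue raises the hit probability but shrinks the reward.
    return p_hit(n) * (reward - cost_per_cue * n)

# The ideal observer samples the number of cues that maximizes expected gain.
best_n = max(range(1, 50), key=expected_gain)
```

Over- or under-sampling relative to `best_n` is then directly interpretable as the behavioral deviation the study measures.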

  14. Sex determination by tooth size in a sample of Greek population.

    Science.gov (United States)

    Mitsea, A G; Moraitis, K; Leon, G; Nicopoulou-Karayianni, K; Spiliopoulou, C

    2014-08-01

    Sex assessment from tooth measurements can be of major importance for forensic and bioarchaeological investigations, especially when only teeth or jaws are available. The purpose of this study is to assess the reliability and applicability of establishing sex identity in a sample of Greek population using the discriminant function proposed by Rösing et al. (1995). The study comprised 172 dental casts derived from two private orthodontic clinics in Athens. The individuals were randomly selected and all had a clear medical history. The mesiodistal crown diameters of all the teeth were measured apart from those of the 3rd molars. The values quoted for the sample to which the discriminant function was first applied were similar to those obtained for the Greek sample. The results of the preliminary statistical analysis did not support the use of the specific discriminant function for a reliable determination of sex by means of the mesiodistal diameter of the teeth. However, there was considerable variation between different populations and this might explain the lack of discriminating power of the specific function in the Greek population. In order to investigate whether a better discriminant function could be obtained using the Greek data, separate discriminant function analysis was performed on the same teeth and a different equation emerged without, however, any real improvement in the classification process, with an overall correct classification of 72%. The results showed that a considerably higher percentage of females than males was correctly classified. The results lead to the conclusion that the use of the mesiodistal diameter of teeth is not as reliable a method as one would have expected for determining the sex of human remains in a forensic context. Therefore, this method could be used only in combination with other identification approaches. Copyright © 2014. Published by Elsevier GmbH.
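The discriminant-function approach evaluated in the study can be illustrated with a toy two-class Fisher discriminant on simulated mesiodistal diameters. All numbers below are hypothetical (they only mimic the general tendency for male teeth to be slightly larger); this is not the Rösing et al. equation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical mesiodistal crown diameters (mm) for two teeth per individual;
# the means are illustrative, not measured values.
males = rng.normal(loc=[8.9, 7.1], scale=0.35, size=(60, 2))
females = rng.normal(loc=[8.5, 6.8], scale=0.35, size=(60, 2))

# Fisher's linear discriminant: w = Sw^-1 (mu_m - mu_f), with the decision
# threshold at the midpoint of the projected group means.
mu_m, mu_f = males.mean(axis=0), females.mean(axis=0)
Sw = np.cov(males, rowvar=False) + np.cov(females, rowvar=False)
w = np.linalg.solve(Sw, mu_m - mu_f)
threshold = w @ (mu_m + mu_f) / 2.0

pred_m = males @ w > threshold
pred_f = females @ w <= threshold
accuracy = (pred_m.sum() + pred_f.sum()) / 120.0
```

With heavily overlapping size distributions, overall accuracy stays well below 100%, which is exactly the limitation the abstract reports (72% correct classification).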

  15. Parameters of the covariance function of galaxies

    International Nuclear Information System (INIS)

    Fesenko, B.I.; Onuchina, E.V.

    1988-01-01

    The two-point angular covariance functions for two samples of galaxies are considered using quick methods of analysis. It is concluded that in the previous investigations the amplitude of the covariance function in the Lick counts was overestimated and the rate of decrease of the function underestimated

  16. Correlation Functions in Open Quantum-Classical Systems

    Directory of Open Access Journals (Sweden)

    Chang-Yu Hsieh

    2013-12-01

    Full Text Available Quantum time correlation functions are often the principal objects of interest in experimental investigations of the dynamics of quantum systems. For instance, transport properties, such as diffusion and reaction rate coefficients, can be obtained by integrating these functions. The evaluation of such correlation functions entails sampling from quantum equilibrium density operators and quantum time evolution of operators. For condensed phase and complex systems, where quantum dynamics is difficult to carry out, approximations must often be made to compute these functions. We present a general scheme for the computation of correlation functions, which preserves the full quantum equilibrium structure of the system and approximates the time evolution with quantum-classical Liouville dynamics. Several aspects of the scheme are discussed, including a practical and general approach to sample the quantum equilibrium density, the properties of the quantum-classical Liouville equation in the context of correlation function computations, simulation schemes for the approximate dynamics and their interpretation and connections to other approximate quantum dynamical methods.
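For contrast with the quantum-classical scheme described above, the purely classical time-correlation estimator that such methods approximate can be written in a few lines; the AR(1) trajectory below is a hypothetical stand-in for sampled dynamics.

```python
import numpy as np

def time_correlation(a, b, max_lag):
    # Stationary time-correlation estimator <A(0) B(t)>: average the product
    # over all admissible time origins for each lag t.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    return np.array([np.mean(a[: n - t] * b[t:]) for t in range(max_lag)])

# Example: an AR(1) trajectory, whose autocorrelation decays geometrically.
rng = np.random.default_rng(2)
x = np.empty(20000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()

c = time_correlation(x, x, max_lag=50)
```

In the quantum-classical setting the product inside the mean is replaced by sampling from the quantum equilibrium density and propagating operators with approximate (e.g. quantum-classical Liouville) dynamics.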

  17. Determination of gold nanoparticles in environmental water samples by second-order optical scattering using dithiotreitol-functionalized CdS quantum dots after cloud point extraction

    Energy Technology Data Exchange (ETDEWEB)

    Mandyla, Spyridoula P.; Tsogas, George Z.; Vlessidis, Athanasios G.; Giokas, Dimosthenis L., E-mail: dgiokas@cc.uoi.gr

    2017-02-05

    Highlights: • A new method has been developed to determine gold nanoparticles in water samples. • Extraction was achieved by cloud point extraction. • A nano-hybrid assembly between AuNPs and dithiol-coated quantum dots was formulated. • Detection was accomplished at pico-molar levels by second-order light scattering. • The method was selective against ionic gold and other nanoparticle species. - Abstract: This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly that is formed by hydrogen bond interactions between gold nanoparticles and dithiotreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized and evaluated for the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under different conditions reminiscent of those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L−1 levels), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and is easy to operate, thus providing an additional tool for the monitoring and the assessment of nanoparticle toxicity and hazards in the environment.

  18. Determination of gold nanoparticles in environmental water samples by second-order optical scattering using dithiotreitol-functionalized CdS quantum dots after cloud point extraction

    International Nuclear Information System (INIS)

    Mandyla, Spyridoula P.; Tsogas, George Z.; Vlessidis, Athanasios G.; Giokas, Dimosthenis L.

    2017-01-01

    Highlights: • A new method has been developed to determine gold nanoparticles in water samples. • Extraction was achieved by cloud point extraction. • A nano-hybrid assembly between AuNPs and dithiol-coated quantum dots was formulated. • Detection was accomplished at pico-molar levels by second-order light scattering. • The method was selective against ionic gold and other nanoparticle species. - Abstract: This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly that is formed by hydrogen bond interactions between gold nanoparticles and dithiotreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized and evaluated for the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under different conditions reminiscent of those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L−1 levels), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and is easy to operate, thus providing an additional tool for the monitoring and the assessment of nanoparticle toxicity and hazards in the environment.

  19. Samples and Sampling Protocols for Scientific Investigations | Joel ...

    African Journals Online (AJOL)

    ... from sampling, through sample preparation, calibration to final measurement and reporting. This paper, therefore offers useful information on practical guidance on sampling protocols in line with best practice and international standards. Keywords: Sampling, sampling protocols, chain of custody, analysis, documentation ...

  20. Functional behavior of the anomalous magnetic relaxation observed in melt-textured YBa2Cu3O7−δ samples showing the paramagnetic Meissner effect

    International Nuclear Information System (INIS)

    Dias, F.T.; Vieira, V.N.; Garcia, E.L.; Wolff-Fabris, F.; Kampert, E.; Gouvêa, C.P.; Schaf, J.; Obradors, X.; Puig, T.; Roa, J.J.

    2016-01-01

    Highlights: • Paramagnetic Meissner effect observed up to 5 T in FCC and FCW measurements. • Time effects evidenced by irreversibilities between FCC and FCW measurements. • Strong time effects causing an anomalous paramagnetic relaxation. • Paramagnetic relaxation governed by different flux dynamics in different intervals. • An interpretative analysis to identify the flux dynamics in the relaxation process. - Abstract: We have studied the functional behavior of the field-cooled (FC) magnetic relaxation observed in melt-textured YBa2Cu3O7−δ (Y123) samples with 30 wt% of Y2BaCuO5 (Y211) phase, in order to investigate anomalous paramagnetic moments observed during the experiments. FC magnetic relaxation experiments were performed under controlled conditions, such as cooling rate and temperature. Magnetic fields up to 5 T were applied parallel to the ab plane and along the c-axis. Our results are associated with the paramagnetic Meissner effect (PME), characterized by positive moments during FC experiments, and related to the magnetic flux compression into the samples. After different attempts our experimental data could be adequately fitted by an exponential decay function with different relaxation times. We discuss our results suggesting the existence of different and preferential flux dynamics governing the anomalous FC paramagnetic relaxation in different time intervals. This work is one of the first attempts to interpret this controversial effect through a simple analysis of the pinning mechanisms and flux dynamics acting during the time evolution of the magnetic moment. However, the results may be useful to develop models to explain this interesting and still misunderstood feature of the paramagnetic Meissner effect.
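The multi-timescale exponential fitting mentioned in the abstract can be sketched as a least-squares fit of a double-exponential relaxation law; the magnetization values, time constants, and baseline below are synthetic and purely illustrative, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, m0, a1, tau1, a2, tau2):
    # Two relaxation processes with distinct time constants plus a baseline.
    return m0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic "relaxation" data: fast (tau ~ 120 s) and slow (tau ~ 1500 s)
# components with a little measurement noise.
t = np.linspace(0.0, 3600.0, 200)  # seconds
true_params = (0.5, 1.0, 120.0, 0.6, 1500.0)
rng = np.random.default_rng(1)
m = double_exp(t, *true_params) + rng.normal(0.0, 0.005, t.size)

# Recover the two relaxation times from the noisy trace.
popt, _ = curve_fit(double_exp, t, m, p0=(0.4, 0.8, 200.0, 0.5, 1000.0))
```

Comparing single- against multi-exponential fits (e.g. by residuals) is one simple way to argue, as the abstract does, for distinct flux dynamics in different time intervals.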

  1. A Virtual Petrological Microscope for All Apollo 11 Lunar Samples

    Science.gov (United States)

    Pillinger, C. T.; Tindle, A. G.; Kelley, S. P.; Quick, K.; Scott, P.; Gibson, E. K.; Zeigler, R. A.

    2014-01-01

    A means of viewing, over the Internet, polished thin sections of every rock in the Apollo lunar sample collections via software duplicating many of the functions of a petrological microscope is described.

  2. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
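At the core of all the RE-type methods compared above is the Metropolis criterion for exchanging configurations between two temperatures; a minimal sketch follows, with k_B set to 1 in reduced units (the energies and temperatures are illustrative, not from the paper's simulations).

```python
import math
import random

def swap_accept(E_i, E_j, T_i, T_j, k_B=1.0, rng=random):
    # Metropolis criterion for exchanging replicas between temperatures T_i
    # and T_j: accept with probability min(1, exp(delta)), where
    # delta = (1/(k_B*T_i) - 1/(k_B*T_j)) * (E_i - E_j).
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)
```

ST and STDR apply essentially the same criterion to a single replica changing its own temperature (with weight factors), which is why the temperature random walk, and hence sampling efficiency, can be compared across all five algorithms.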

  3. Functioning of active postmenopausal women with osteoporosis

    Directory of Open Access Journals (Sweden)

    Aline Cristiane Binda

    Full Text Available Abstract Introduction: The multiple aspects of disability in patients with osteoporosis require a comprehensive tool for their assessment. The International Classification of Functioning, Disability and Health (ICF) is designed to describe the experience of such patients with their functioning. Objective: This study aimed to describe functioning in a sample of active postmenopausal women with osteoporosis according to the brief ICF core set for osteoporosis. Methods: This cross-sectional study was conducted among active community-dwelling older adults in a southern Brazilian city. Participants were enrolled by convenience sampling from a group conducting supervised aquatic and land-based exercises. Active postmenopausal women with osteoporosis were included. Thirty-two women (mean age 68.0 ± 5.1 years) participated in the evaluation. The brief ICF core set for osteoporosis was used to establish functional profiles. The categories were considered valid when ≥20% of participants showed some disability (according to ICF qualifiers). Results: No category showed a high level of disability, as no category was rated with qualifiers .3 or .4 by more than 50% of the women. Only the category e580 was considered by all participants as a facilitator. Conclusion: The brief ICF core set for osteoporosis results demonstrated that this classification system is representative for describing the functional profile of the sample. Active postmenopausal women with osteoporosis presented few impairments related to body functions and structures, activities and participation, and environmental factors.

  4. Determination of Organophosphorous Pesticides in Environmental Water Samples Using Surface-Engineered C18 Functionalized Silica-Coated Core-Shell Magnetic Nanoparticles-Based Extraction Coupled with GC-MS/MS Analysis.

    Science.gov (United States)

    Srivastava, Neha; Kumari, Supriya; Nair, Kishore; Alam, Samsul; Raza, Syed K

    2017-05-01

    The present paper depicts a novel method based on magnetic SPE (MSPE) for the determination of organophosphorus pesticides (OPs) such as phorate, malathion, and chlorpyrifos in environmental water samples. In this study, C18 functionalized silica-coated core-shell iron oxide magnetic nanoparticles (MNPs) were used as a surface-engineered magnetic sorbent for the selective extraction of pesticides from aqueous samples, followed by GC-MS and GC-tandem MS analysis for confirmative determination of the analytes. Various important method parameters, including quantity of MNP adsorbent, volume of sample, effective time for extraction, nature of the desorbing solvent, and pH of the aqueous sample, were investigated and optimized to obtain maximum method performance. Under the optimized instrumental analysis conditions, good linearity (r2 value ≥0.994) was achieved at the concentration range of 0.5-500 μg/L. Recoveries were in the range of 79.2-96.3 and 80.4-97.5% in selective-ion monitoring and multiple reaction monitoring (MRM) modes, respectively, at the spiking concentrations of 1, 5, and 10 μg/L. MRM mode showed better sensitivity, selectivity, and low-level detection (0.5 μg/L) of analytes. The novel MSPE method is a simple, cheap, rapid, and eco-friendly method for the determination of OPs in environmental water samples.

  5. Traumatic stress and psychological functioning in a South African adolescent community sample

    OpenAIRE

    Swain, Karl D.; Pillay, Basil J.; Kliewer, Wendy

    2017-01-01

    Background: Traumatic stress may arise from various incidents often leading to posttraumatic stress disorder (PTSD). The lifetime prevalence of PTSD is estimated at 1% – 2% in Western Europe, 6% – 9% in North America and at just over 10% in countries exposed to long-term violence. In South Africa, the lifetime prevalence for PTSD in the general population is estimated at 2.3%. Aim: To examine the prevalence of posttraumatic stress symptomatology and related psychological functioning in a ...

  6. Neutron-Irradiated Samples as Test Materials for MPEX

    International Nuclear Information System (INIS)

    Ellis, Ronald James; Rapp, Juergen

    2015-01-01

    Plasma Material Interaction (PMI) is a major concern in fusion reactor design and analysis. The Material-Plasma Exposure eXperiment (MPEX) will explore PMI under fusion reactor plasma conditions. Samples with accumulated displacements per atom (DPA) damage produced by fast neutron irradiations in the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) will be studied in the MPEX facility. This paper presents assessments of the calculated induced radioactivity and resulting radiation dose rates of a variety of potential fusion reactor plasma-facing materials (such as tungsten). The scientific code packages MCNP and SCALE were used to simulate irradiation of the samples in HFIR including the generation and depletion of nuclides in the material and the subsequent composition, activity levels, gamma radiation fields, and resultant dose rates as a function of cooling time. A challenge of the MPEX project is to minimize the radioactive inventory in the preparation of the samples and the sample dose rates for inclusion in the MPEX facility

  7. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, this simple method is feasible for addressing practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
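The described pipeline (log–log transform, then a local monotonicity-preserving spline) can be sketched as follows. SciPy does not ship a Steffen spline, so this sketch substitutes `PchipInterpolator`, a different interpolant that shares the local monotonicity-preserving, non-overshooting property; the ELF sample points are hypothetical.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical, heavily non-uniform ELF samples (energy in eV, ELF unitless),
# with a cluster of points around a plasmon-like peak near 21 eV.
E = np.array([1.0, 2.0, 5.0, 20.0, 21.0, 23.0, 100.0, 1000.0])
elf = np.array([0.01, 0.05, 0.4, 2.1, 2.3, 1.8, 0.2, 0.001])

# Interpolate in log-log space to even out the non-uniform sampling; the
# monotone interpolant passes through every point without spurious wiggles.
spline = PchipInterpolator(np.log(E), np.log(elf))

def elf_interp(energy):
    return np.exp(spline(np.log(energy)))
```

Because the interpolant is monotone between adjacent knots in log-log space (and `exp` is monotone), local extrema of the fit can occur only at the data points themselves, mirroring the property claimed for the Steffen spline.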

  8. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, this simple method is feasible for addressing practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  9. CHOMIK -Sampling Device of Penetrating Type for Russian Phobos Sample Return Mission

    Science.gov (United States)

    Seweryn, Karol; Grygorczuk, Jerzy; Rickmann, Hans; Morawski, Marek; Aleksashkin, Sergey; Banaszkiewicz, Marek; Drogosz, Michal; Gurgurewicz, Joanna; Kozlov, Oleg E.; Krolikowska-Soltan, Malgorzata; Sutugin, Sergiej E.; Wawrzaszek, Roman; Wisniewski, Lukasz; Zakharov, Alexander

    Measurements of the physical properties of planetary bodies allow many parameters important to scientists working in different fields of research to be determined. For example, the effective heat conductivity of the regolith can help with better understanding of processes occurring in the body interior. Chemical and mineralogical composition gives us a chance to better understand the origin and evolution of the moons. In principle such parameters of planetary bodies can be determined by three different measurement techniques: (i) in situ measurements, (ii) measurements of the samples in laboratory conditions on Earth, and (iii) remote sensing measurements. Scientific missions which allow us to perform all types of measurements give us a chance not only for parameter determination but also for cross-calibration of the instruments. The Russian Phobos Sample Return (PhSR) mission is one of few which allows for all types of such measurements. The spacecraft will be equipped with remote sensing instruments (spectrometers, long-wave radar and a dust counter), instruments for in-situ measurements (gas chromatograph, seismometer, thermodetector and others), and also a robotic arm and sampling device. The PhSR mission will be launched in November 2011 on board a Zenit launch vehicle. About a year later (11 months) the vehicle will reach Martian orbit. It is anticipated that it will land on Phobos in the beginning of 2013. Take-off back will take place a month later, and the re-entry module, containing a capsule that will hold the soil sample enclosed in a container, will be on its way back to Earth. The 11 kg re-entry capsule with the container will land in Kazakhstan in mid-2014. A unique geological penetrator, CHOMIK, dedicated to the Phobos Sample Return space mission, will be designed and manufactured at the Space Mechatronics and Robotics Laboratory, Space Research Centre Polish Academy of Sciences (SRC PAS) in Warsaw. Functionally CHOMIK is based on the well known MUPUS

  10. Zirconium(IV) functionalized magnetic nanocomposites for extraction of organophosphorus pesticides from environmental water samples.

    Science.gov (United States)

    Jiang, Li; Huang, Tengjun; Feng, Shun; Wang, Jide

    2016-07-22

    The widespread use of organophosphate pesticides (OPPs) in agriculture leads to residue accumulation in the environment which is dangerous to human health and disrupts the ecological balance. In this work, one nanocomposite immobilized zirconium (Zr, IV) was prepared and used as an affinity probe to quickly and selectively extract organophosphorus pesticides (OPPs) from water samples. The Fe3O4-ethylenediamine tetraacetic acid (EDTA)@Zr(IV) nanocomposites (NPs) were prepared by simply mixing Zr(IV) ions with Fe3O4-EDTA NPs synthesized by a one-pot chemical co-precipitation method. The immobilized Zr(IV) ions were further utilized to capture OPPs based on their high affinity for the phosphate moiety in OPPs. Coupled with GC-MS, four OPPs were used as models to demonstrate the feasibility of this approach. Under the optimum conditions, the limits of detection for the target OPPs were in the range of 0.10-10.30 ng mL−1 with relative standard deviations (RSDs) of 0.61-4.40% (n=3). The linear ranges were over three orders of magnitude (correlation coefficients R² > 0.9995). The Fe3O4-EDTA@Zr(IV) NPs were successfully applied to extract OPPs from two spiked real water samples, with recoveries of 86.95-112.60% and RSDs of 1.20-10.42% (n=3). By the proposed method, matrix interference could be effectively eliminated. We hope our finding can provide a promising alternative for the fast extraction of OPPs from complex real samples. Copyright © 2016 Elsevier B.V. All rights reserved.
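Linearity and detection-limit figures like those reported above come from a standard calibration-curve workflow, which can be sketched as follows; the peak areas, blank standard deviation, and the common 3.3·σ/slope LOD convention used here are illustrative assumptions, not the authors' data.

```python
import numpy as np

# Hypothetical GC-MS calibration data: spiked concentration (ug/L) vs peak area.
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
area = np.array([52.0, 101.0, 498.0, 1010.0, 5030.0, 9980.0, 50100.0])

# Linear calibration: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2

# Limit of detection from the common 3.3*sigma/slope convention, using a
# hypothetical standard deviation of blank measurements.
sigma_blank = 15.0
lod = 3.3 * sigma_blank / slope
```

Recovery at a spiked level is then simply the measured concentration (back-calculated through the fit) divided by the spiked concentration, expressed as a percentage.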

  11. Acceptability of Functional Behavioral Assessment Procedures to Special Educators and School Psychologists

    Science.gov (United States)

    O'Neill, Robert E.; Bundock, Kaitlin; Kladis, Kristin; Hawken, Leanne S.

    2015-01-01

    This survey study assessed the acceptability of a variety of functional behavioral assessment (FBA) procedures (i.e., functional assessment interviews, rating scales/questionnaires, systematic direct observations, functional analysis manipulations) to a national sample of 123 special educators and a state sample of 140 school psychologists.…

  12. Platelet function in stored heparinised autologous blood is not superior to in patient platelet function during routine cardiopulmonary bypass.

    Directory of Open Access Journals (Sweden)

    Rolf C G Gallandat Huet

    Full Text Available BACKGROUND: In cardiac surgery, cardiopulmonary bypass (CPB) and unfractionated heparin have negative effects on blood platelet function. In acute normovolemic haemodilution, autologous unfractionated heparinised blood is stored ex-vivo and retransfused at the end of the procedure to reduce (allogeneic) transfusion requirements. In this observational study we assessed whether platelet function is better preserved in ex vivo stored autologous blood compared to platelet function in the patient during CPB. METHODOLOGY/PRINCIPAL FINDINGS: We measured platelet aggregation responses pre-CPB, 5 min after the start of CPB, at the end of CPB, and after unfractionated heparin reversal, using multiple electrode aggregometry (Multiplate®) with adenosine diphosphate (ADP), thrombin receptor activating peptide (TRAP) and ristocetin activated test cells. We compared blood samples taken from the patient with samples taken from 100 ml ex-vivo stored blood, which we took to mimic blood storage during normovolemic haemodilution. Platelet function declined both in ex-vivo stored blood as well as in blood taken from the patient. At the end of CPB there were no differences in platelet aggregation responses between samples from the ex vivo stored blood and the patient. CONCLUSION/SIGNIFICANCE: Ex vivo preservation of autologous blood in unfractionated heparin does not seem to be beneficial for preserving platelet function.

  13. Sampling and chemical analysis in environmental samples around Nuclear Power Plants and some environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Yong Woo; Han, Man Jung; Cho, Seong Won; Cho, Hong Jun; Oh, Hyeon Kyun; Lee, Jeong Min; Chang, Jae Sook [KORTIC, Taejon (Korea, Republic of)

    2002-12-15

    Twelve kinds of environmental samples, such as soil, seawater and underground water, were collected around Nuclear Power Plants (NPPs). Tritium chemical analysis was performed on samples of rain water, pine needles, air, seawater, underground water, chinese cabbage, rice grains and milk collected around the NPPs, and on surface seawater and rain water sampled across the country. Strontium was analyzed in soils sampled at 60 districts across Korea. Tritium was analyzed in 21 samples of surface seawater around the Korean peninsula supplied by KFRDI (National Fisheries Research and Development Institute). Sampling and chemical analysis of environmental samples around the Kori, Woolsung, Youngkwang and Wooljin NPPs and the Taeduk science town for tritium and strontium analysis were managed according to plan, and the samples were handed over to KINS after all analyses were completed.

  14. Mass Functions of the Active Black Holes in Distant Quasars from the Large Bright Quasar Survey, the Bright Quasar Survey, and the Color-Selected Sample of the SDSS Fall Equatorial Stripe

    DEFF Research Database (Denmark)

    Vestergaard, Marianne; Osmer, Patrick S.

    2009-01-01

    We present mass functions of distant actively accreting supermassive black holes residing in luminous quasars discovered in the Large Bright Quasar Survey, the Bright Quasar Survey, and the Fall Equatorial Stripe of the Sloan Digital Sky Survey (SDSS). The quasars cover a wide range of redshifts (0 … functions at similar redshifts based on the SDSS Data Release 3 quasar catalog presented by Vestergaard et al. We see clear evidence of cosmic downsizing in the comoving space density distribution of active black holes in the LBQS sample alone. In forthcoming papers, further analysis, comparison, and discussion of these mass functions will be made with other existing black hole mass functions, notably that based on the SDSS DR3 quasar catalog. We present the relationships used to estimate the black hole mass based on the MgII emission line; the relations are calibrated to the Hbeta and CIV relations …

  15. Moon-Mars simulation campaign in volcanic Eifel: Remote science support and sample analysis

    Science.gov (United States)

    Offringa, Marloes; Foing, Bernard H.; Kamps, Oscar

    2016-07-01

    Moon-Mars analogue missions using a mock-up lander that is part of the ESA/ILEWG ExoGeoLab project were conducted during Eifel field campaigns in 2009, 2015 and 2016 (Foing et al., 2010). In the last EuroMoonMars2016 campaign the lander was used to conduct reconnaissance experiments and in situ geological scientific analysis of samples, with a payload that mainly consisted of a telescope and a UV-VIS reflectance spectrometer. The aim of the campaign was to exhibit possibilities for the ExoGeoLab lander to perform remotely controlled experiments and test its applicability in the field by simulating the interaction with astronauts. The Eifel region in Germany where the experiments with the ExoGeoLab lander were conducted is a Moon-Mars analogue due to its geological setting and volcanic rock composition. The research conducted by analysis equipment on the lander could function in support of Moon-Mars sample return missions, by providing preliminary insight into characteristics of the analyzed samples. The set-up of the prototype lander was that of a telescope with camera and solar power equipment deployed on the top, the UV-VIS reflectance spectrometer together with computers and a sample webcam were situated in the middle compartment and to the side a sample analysis test bench was attached, attainable by astronauts from outside the lander. An alternative light source that illuminated the samples in case of insufficient daylight was placed on top of the lander and functioned on solar power. The telescope, teleoperated from a nearby stationed pressurized transport vehicle that functioned as a base control center, attained an overview of the sampling area and assisted the astronauts in their initial scouting pursuits. Locations of suitable sampling sites based on these obtained images were communicated to the astronauts, before being acquired during a simulated EVA. Sampled rocks and soils were remotely analyzed by the base control center, while the astronauts

  16. Interrelations between psychosocial functioning and adaptive- and maladaptive-range personality traits.

    Science.gov (United States)

    Ro, Eunyoe; Clark, Lee Anna

    2013-08-01

    Decrements in one or more domains of psychosocial functioning (e.g., poor job performance, poor interpersonal relations) are commonly observed in psychiatric patients. The purpose of this study is to increase understanding of psychosocial functioning as a broad, multifaceted construct as well as its associations with both adaptive- and maladaptive-range personality traits in both nonclinical and psychiatric outpatient samples. The study was conducted in two phases. In Study 1, a nonclinical sample (N = 429) was administered seven psychosocial functioning and adaptive-range personality trait measures. In Study 2, psychiatric outpatients (N = 181) were administered the same psychosocial functioning measures, and maladaptive- as well as adaptive-range personality trait measures. Exploratory (both studies) and confirmatory (Study 2) factor analyses indicated a common three-factor, hierarchical structure of psychosocial functioning: Well-Being, Social/Interpersonal Functioning, and Basic Functioning. These psychosocial functioning domains were closely, and differentially, linked with personality traits, especially strongly so in patients. Across samples, Well-Being was associated with both Neuroticism/Negative Affectivity and Extraversion/Positive Affectivity, Social/Interpersonal Functioning was associated with both Agreeableness and Conscientiousness/Disinhibition, and Basic Functioning was associated with Conscientiousness/Disinhibition, although only modestly in the nonclinical sample. These relations generally were maintained even after partialing out current general dysphoric symptoms. These findings have implications for considering psychosocial functioning as an important third domain in a tripartite model together with personality and psychopathology. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  18. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p…

  19. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences of a traffic flow performance meter this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates for example time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...
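The two estimators the abstract describes can be illustrated with a toy sketch. Everything below (function names, the synthetic delay trace, the parameter choices) is our own illustration, not the paper's measurement setup:

```python
import random

def systematic_sample(trace, k, start=None):
    """Systematic sampling: every k-th packet delay from a random start offset."""
    if start is None:
        start = random.randrange(k)
    return trace[start::k]

def cluster_sample(trace, train_len, n_trains):
    """Cluster sampling: repeated trains of consecutive packets at random starts."""
    sample = []
    for _ in range(n_trains):
        start = random.randrange(len(trace) - train_len)
        sample.extend(trace[start:start + train_len])
    return sample

random.seed(1)
# Synthetic delay trace (ms): base delay plus short periodic jitter bursts.
trace = [10 + (5 if i % 97 < 3 else 0) + random.random() for i in range(10_000)]
true_mean = sum(trace) / len(trace)

sys_pkts = systematic_sample(trace, 50)
clu_pkts = cluster_sample(trace, train_len=20, n_trains=10)
sys_est = sum(sys_pkts) / len(sys_pkts)
clu_est = sum(clu_pkts) / len(clu_pkts)
print(f"true={true_mean:.2f}  systematic={sys_est:.2f}  cluster={clu_est:.2f}")
```

Systematic sampling spreads the sampling units evenly across the trace, which is what makes time- and frequency-domain analyses possible; cluster sampling trades some precision for the ability to observe short-term behavior within each train.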

  20. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
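As a rough illustration of the comparison step, the two-sample Kolmogorov–Smirnov statistic can be computed directly from the empirical distribution functions. The synthetic error "population" below is a stand-in assumption, not the paper's urban databases:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical distribution functions."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

rng = np.random.default_rng(0)
# Stand-in "population" of positional errors, in metres (illustrative only).
population = rng.normal(loc=0.0, scale=2.0, size=100_000)

d_by_n = {}
for n in (50, 2_000):
    sample = rng.choice(population, size=n, replace=False)
    d_by_n[n] = ks_statistic(sample, population)
    print(f"n={n:5d}  KS statistic D={d_by_n[n]:.3f}")
```

The larger sample yields a smaller D, mirroring the paper's finding that estimation variability shrinks as sample size grows. A library routine such as `scipy.stats.ks_2samp` performs the same comparison and additionally reports a p-value.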

  1. Transport Coefficients from Large Deviation Functions

    OpenAIRE

    Gao, Chloe Ya; Limmer, David T.

    2017-01-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate th...

  2. On sampling and modeling complex systems

    International Nuclear Information System (INIS)

    Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser

    2013-01-01

    The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power law behavior and Zipf’s law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)

  3. Rumination prospectively predicts executive functioning impairments in adolescents.

    Science.gov (United States)

    Connolly, Samantha L; Wagner, Clara A; Shapero, Benjamin G; Pendergast, Laura L; Abramson, Lyn Y; Alloy, Lauren B

    2014-03-01

    The current study tested the resource allocation hypothesis, examining whether baseline rumination or depressive symptom levels prospectively predicted deficits in executive functioning in an adolescent sample. The alternative to this hypothesis was also evaluated by testing whether lower initial levels of executive functioning predicted increases in rumination or depressive symptoms at follow-up. A community sample of 200 adolescents (ages 12-13) completed measures of depressive symptoms, rumination, and executive functioning at baseline and at a follow-up session approximately 15 months later. Adolescents with higher levels of baseline rumination displayed decreases in selective attention and attentional switching at follow-up. Rumination did not predict changes in working memory or sustained and divided attention. Depressive symptoms were not found to predict significant changes in executive functioning scores at follow-up. Baseline executive functioning was not associated with change in rumination or depression over time. Findings partially support the resource allocation hypothesis that engaging in ruminative thoughts consumes cognitive resources that would otherwise be allocated towards difficult tests of executive functioning. Support was not found for the alternative hypothesis that lower levels of initial executive functioning would predict increased rumination or depressive symptoms at follow-up. Our study is the first to find support for the resource allocation hypothesis using a longitudinal design and an adolescent sample. Findings highlight the potentially detrimental effects of rumination on executive functioning during early adolescence. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Work-related measures of physical and behavioral health function: Test-retest reliability.

    Science.gov (United States)

    Marino, Molly Elizabeth; Meterko, Mark; Marfeo, Elizabeth E; McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Rasch, Elizabeth K; Brandt, Diane E; Chan, Leighton

    2015-10-01

    The Work Disability Functional Assessment Battery (WD-FAB), developed for potential use by the US Social Security Administration to assess work-related function, currently consists of five multi-item scales assessing physical function and four multi-item scales assessing behavioral health function; the WD-FAB scales are administered as Computerized Adaptive Tests (CATs). The goal of this study was to evaluate the test-retest reliability of the WD-FAB Physical Function and Behavioral Health CATs. We administered the WD-FAB scales twice, 7-10 days apart, to a sample of 376 working age adults and 316 adults with work-disability. Intraclass correlation coefficients were calculated to measure the consistency of the scores between the two administrations. Standard error of measurement (SEM) and minimal detectable change (MDC90) were also calculated to measure the scales' precision and sensitivity. For the Physical Function CAT scales, the ICCs ranged from 0.76 to 0.89 in the working age adult sample, and 0.77-0.86 in the sample of adults with work-disability. ICCs for the Behavioral Health CAT scales ranged from 0.66 to 0.70 in the working age adult sample, and 0.77-0.80 in the adults with work-disability. The SEM ranged from 3.25 to 4.55 for the Physical Function scales and 5.27-6.97 for the Behavioral Health function scales. For all scales in both samples, the MDC90 ranged from 7.58 to 16.27. Both the Physical Function and Behavioral Health CATs of the WD-FAB demonstrated good test-retest reliability in adults with work-disability and general adult samples, a critical requirement for assessing work-related functioning in disability applicants and in other contexts. Copyright © 2015 Elsevier Inc. All rights reserved.
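The precision indices reported above follow standard psychometric formulas: SEM = SD · sqrt(1 − ICC) and MDC90 = 1.645 · SEM · sqrt(2). A minimal sketch (the scale SD of 10 below is a hypothetical value, not taken from the WD-FAB data):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the scale SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc(sem_value, z=1.645):
    """Minimal detectable change; z = 1.645 gives the 90% confidence level."""
    return z * sem_value * math.sqrt(2.0)

# Illustrative ICC values spanning the range reported in the study.
for icc in (0.66, 0.80, 0.89):
    s = sem(10.0, icc)
    print(f"ICC={icc:.2f}  SEM={s:.2f}  MDC90={mdc(s):.2f}")
```

Higher ICCs shrink both the SEM and the MDC90, which is why the more reliable Physical Function scales can detect smaller true changes.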

  5. Quantitative analysis of light elements in aerosol samples by PIGE

    International Nuclear Information System (INIS)

    Mateus, R.; Reis, M.A.; Jesus, A.P.; Ribeiro, J.P.

    2006-01-01

    Quantitative PIGE analysis of aerosol samples collected on nuclepore polycarbonate filters was performed by a method that avoids the use of comparative standards. Nuclear cross sections and calibration parameters established before in an extensive work on thick and intermediate samples were employed. For these samples, the excitation functions of nuclear reactions, induced by the incident protons on target's light elements, were used as input for a code that evaluates the gamma-ray yield integrating along the depth of the sample. In the present work we apply the same code to validate the use of an effective energy for thin sample analysis. Results pertaining to boron, fluorine and sodium concentrations are presented. In order to establish a correlation with sodium values, PIXE results related to chlorine are also presented, giving support to the reliability of this PIGE method for thin film analysis

  6. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  7. Synthesis of polydopamine-functionalized magnetic graphene and carbon nanotubes hybrid nanocomposites as an adsorbent for the fast determination of 16 priority polycyclic aromatic hydrocarbons in aqueous samples.

    Science.gov (United States)

    Chen, Kun; Jin, Rongrong; Luo, Chen; Song, Guoxin; Hu, Yaoming; Cheng, Hefa

    2018-04-01

    A novel adsorbent made of polydopamine-functionalized magnetic graphene and carbon nanotubes hybrid nanocomposite was synthesized and applied to determine 16 priority polycyclic aromatic hydrocarbons by magnetic solid phase extraction in water samples. FTIR spectroscopy, transmission electron microscopy, scanning electron microscopy, and Raman spectroscopy consistently indicate that the synthesized adsorbents are made of core-shell nanoparticles well dispersed on the surface of graphene and carbon nanotubes. The major factors affecting the extraction efficiency, including the pH value of samples, the amount of adsorbent, adsorption time and desorption time, type and volume of desorption solvent, were systematically optimized. Under the optimum extraction conditions, a linear response was obtained for polycyclic aromatic hydrocarbons between concentrations of 10 and 500 ng/L with the correlation coefficients ranging from 0.9958 to 0.9989, and the limits of detection (S/N = 3) were between 0.1 and 3.0 ng/L. Satisfactory results were also obtained when applying these magnetic graphene/carbon nanotubes/polydopamine hybrid nanocomposites to detect polycyclic aromatic hydrocarbons in several environmental aqueous samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Linking and Psychological Functioning in a Chinese Sample: The Multiple Mediation of Response to Positive Affect

    Science.gov (United States)

    Yang, Hongfei; Li, Juan

    2016-01-01

    The present study examined the associations between linking, response to positive affect, and psychological functioning in Chinese college students. The results of conducting multiple mediation analyses indicated that emotion- and self-focused positive rumination mediated the relationship between linking and psychological functioning, whereas…

  9. Water-borne pollutant sampling using porous suction samplers

    International Nuclear Information System (INIS)

    Baig, M.A.

    1997-01-01

    The common standard method of sampling water-borne pollutants in the vadose zone is core sampling followed by extraction of the pore fluid. This method does not allow repeated sampling at the same location. An alternative approach for sampling fluids (water-borne pollutants) from both saturated and unsaturated regions of the vadose zone is to use porous suction samplers. There are three types of porous suction samplers: vacuum-operated samplers, pressure-vacuum lysimeters, and high pressure-vacuum samplers. The suction samplers operate in the range of 0-70 centibars and usually consist of ceramic or polytetrafluoroethylene (PTFE) cups; the operating range of PTFE cups is higher than that of ceramic cups. These samplers are well suited for in situ and repeated sampling from the same location. This paper discusses the physical properties and operating conditions of such samplers to be utilized in our environmental sampling. (author)

  10. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency
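A minimal sketch of the contrast between CMCS and DS on a toy reliability problem: DS replaces independent uniform draws with the deterministic quantile points (i − 0.5)/n, randomly permuted per input variable. The limit state g = R − S and all distribution parameters below are illustrative assumptions, not examples from the paper:

```python
import random
from statistics import NormalDist

def crude_mc(n, rng):
    """Crude Monte Carlo: independent uniform draws for each input variable."""
    nd = NormalDist()
    fails = 0
    for _ in range(n):
        r = 5 + nd.inv_cdf(rng.random())   # resistance R ~ N(5, 1)
        s = 3 + nd.inv_cdf(rng.random())   # load       S ~ N(3, 1)
        fails += (r - s) < 0               # failure when g = R - S < 0
    return fails / n

def descriptive(n, rng):
    """Descriptive sampling: deterministic quantile points (i - 0.5)/n,
    independently permuted for each input variable."""
    nd = NormalDist()
    u = [(i + 0.5) / n for i in range(n)]
    ur, us = u[:], u[:]
    rng.shuffle(ur)
    rng.shuffle(us)
    fails = 0
    for a, b in zip(ur, us):
        r = 5 + nd.inv_cdf(a)
        s = 3 + nd.inv_cdf(b)
        fails += (r - s) < 0
    return fails / n

rng = random.Random(42)
# True failure probability: P(N(2, sqrt(2)) < 0), roughly 0.079.
print("CMCS:", crude_mc(2000, rng))
print("DS:  ", descriptive(2000, rng))
```

Because DS stratifies each marginal perfectly, only the random pairing of the permuted quantiles varies between runs, which is why its estimates typically scatter less than CMCS at the same sample size.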

  11. Neurocognition and psychosocial functioning in adolescents with bipolar disorder.

    Science.gov (United States)

    Best, Michael W; Bowie, Christopher R; Naiberg, Melanie R; Newton, Dwight F; Goldstein, Benjamin I

    2017-01-01

    Adults with bipolar disorder demonstrate significantly poorer psychosocial functioning and neurocognition compared to controls. In adult bipolar disorder, neurocognition predicts a substantial portion of variance in functioning. Adolescents with bipolar disorder have reduced psychosocial functioning, but less is known about neurocognitive impairments, and no studies have examined the relationship between neurocognition and functioning in an adolescent sample. 38 adolescents with bipolar disorder and 49 healthy controls under 20 years of age completed assessments of psychosocial functioning, neurocognitive ability, and psychiatric symptoms. Adolescents with bipolar disorder had significantly poorer psychosocial functioning in domains of daily activities, social functioning, and satisfaction with functioning (ps …). This adolescent sample with bipolar disorder experiences significantly poorer neurocognitive and psychosocial functioning compared to controls; however, psychosocial functioning appears to be more strongly related to mood symptoms than to neurocognition. Future work is needed to delineate the time course of neurocognitive functioning and its relation to psychosocial functioning across the course of illness. Adolescence may provide an ideal time for cognitive enhancement and intensive psychosocial intervention. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Improvements to robotics-inspired conformational sampling in rosetta.

    Directory of Open Access Journals (Sweden)

    Amelie Stein

    To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion- and omega-angle sampling, and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.

  13. Computer graphics for quality control in the INAA of geological samples

    International Nuclear Information System (INIS)

    Grossman, J.N.; Baedecker, P.A.

    1987-01-01

    A data reduction system for the routine instrumental activation analysis of samples is described, with particular emphasis on interactive graphics capabilities for evaluating analytical quality. Graphics procedures were developed to interactively control the analysis of selected photopeaks during spectral analysis, and to evaluate detector performance during a given counting cycle. Graphics algorithms are also used to compare the data on reference samples with accepted values, to prepare quality control charts to evaluate long term precision and to search for systematic variations in data on reference samples as a function of time. (author)

  14. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

    An acceptance sampling plan problem based on truncated life tests, when the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels, and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided and the results are illustrated by an example of a real data set.
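The general machinery behind such plans can be sketched with the standard binomial acceptance logic for truncated life tests (this is the generic framework, not the Sushila-specific tables): if p = F(t) is the probability that an item fails before the truncation time t under the assumed lifetime distribution, the minimum sample size is the smallest n whose acceptance probability at p stays below 1 − P*. The numeric inputs below are illustrative choices:

```python
from math import comb

def oc(n, c, p):
    """Operating characteristic: probability of accepting the lot when at
    most c failures are allowed among n items, each failing before the
    truncation time with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))

def min_sample_size(c, p, pstar):
    """Smallest n whose acceptance probability at failure probability p is
    at most 1 - P* (the consumer's confidence criterion)."""
    n = c + 1
    while oc(n, c, p) > 1 - pstar:
        n += 1
    return n

# Illustrative plan: c = 2 allowed failures, p = F(t) = 0.25, P* = 0.95.
n = min_sample_size(2, 0.25, 0.95)
print(f"minimum n = {n}, acceptance probability = {oc(n, 2, 0.25):.4f}")
```

Tables like those in the paper are generated by evaluating `min_sample_size` over a grid of acceptance numbers c, confidence levels P*, and time-to-mean-life ratios (which determine p through the assumed distribution).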

  15. Mechanical design and simulation of an automatized sample exchanger

    International Nuclear Information System (INIS)

    Lopez, Yon; Gora, Jimmy; Bedregal, Patricia; Hernandez, Yuri; Baltuano, Oscar; Gago, Javier

    2013-01-01

    A turntable-type sample exchanger for irradiation, with a capacity of up to 20 capsules, was designed. Its function is to automatically send samples contained in polyethylene capsules, using a pneumatic system, for irradiation in the grid position of the reactor core and subsequent analysis by neutron activation. This study shows the structural design analysis and the calculations behind the selection of motors and actuators. This development will improve efficiency in the analysis, reducing manual handling by the workers and also the radiation exposure time. (authors).

  16. Test plan for core sampling drill bit temperature monitor

    International Nuclear Information System (INIS)

    Francis, P.M.

    1994-01-01

    At WHC, one of the functions of the Tank Waste Remediation System division is sampling waste tanks to characterize their contents. The push-mode core sampling truck is currently used to take samples of liquid and sludge. Sampling of tanks containing hard salt cake is to be performed with the rotary-mode core sampling system, consisting of the core sample truck, mobile exhauster unit, and ancillary subsystems. When drilling through the salt cake material, friction can generate heat in the drill bit. Based upon tank safety reviews, it has been determined that the drill bit temperature must not exceed 180 C, due to the potential reactivity of tank contents at this temperature. Consequently, a drill bit temperature limit of 150 C was established for operation of the core sample truck to provide an adequate margin of safety; unpredictable factors, such as localized heating, account for the size of this buffer. The most desirable safeguard against exceeding this threshold is bit temperature monitoring. This document describes the recommended plan for testing the prototype of a drill bit temperature monitor developed for core sampling by Sandia National Labs. The device will be tested at their facilities. This test plan documents the tests that Westinghouse Hanford Company considers necessary for effective testing of the system

  17. Double Shell Tank (DST) Process Waste Sampling Subsystem Definition Report

    International Nuclear Information System (INIS)

    RASMUSSEN, J.H.

    2000-01-01

    This report defines the Double-Shell Tank (DST) Process Waste Sampling Subsystem (PWSS). This subsystem definition report fully describes and identifies the system boundaries of the PWSS. This definition provides a basis for developing functional, performance, and test requirements (i.e., subsystem specification), as necessary, for the PWSS. The resultant PWSS specification will include the sampling requirements to support the transfer of waste from the DSTs to the Privatization Contractor during Phase 1 of Waste Feed Delivery

  18. Soil Sampling Plan for the transuranic storage area soil overburden and final report: Soil overburden sampling at the RWMC transuranic storage area

    International Nuclear Information System (INIS)

    Stanisich, S.N.

    1994-12-01

    This Soil Sampling Plan (SSP) has been developed to provide detailed procedural guidance for field sampling and chemical and radionuclide analysis of selected areas of soil covering waste stored at the Transuranic Storage Area (TSA) at the Idaho National Engineering Laboratory's (INEL) Radioactive Waste Management Complex (RWMC). The format and content of this SSP represent a complementary hybrid of INEL Waste Management--Environmental Restoration Program and Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) Remedial Investigation/Feasibility Study (RI/FS) sampling guidance documentation. This sampling plan also functions as a Quality Assurance Project Plan (QAPP). The QAPP serves as a controlling mechanism during sampling to ensure that all data collected are valid, reliable, and defensible. This document outlines the organization, objectives, and quality assurance/quality control (QA/QC) activities needed to achieve the desired data quality goals. The QA/QC requirements for this project are outlined in the Data Collection Quality Assurance Plan (DCQAP) for the Buried Waste Program. The DCQAP is a program plan and does not outline the site-specific requirements for the scope of work covered by this SSP

  19. GANSEKI: JAMSTEC Deep Seafloor Rock Sample Database Emerging to the New Phase

    Science.gov (United States)

    Tomiyama, T.; Ichiyama, Y.; Horikawa, H.; Sato, Y.; Soma, S.; Hanafusa, Y.

    2013-12-01

    Japan Agency for Marine-Earth Science and Technology (JAMSTEC) collects numerous physical samples as well as various geophysical data using its research vessels and submersibles. These samples and data, which are obtained by spending large amounts of human and physical resources, are precious wealth of the world scientific community. For the better use of these samples and data, it is important that they are utilized not only for the initial purpose of each cruise but also for other general scientific and educational purposes of second-hand users. Based on the JAMSTEC data and sample handling policies [1], JAMSTEC has systematically stored samples and data obtained during research cruises, and provided them to domestic/foreign activities in research, education, and public relations. Being highly valued for second-hand usability, deep seafloor rock samples are one of the most important types of samples obtained by JAMSTEC, as oceanic biological samples and sediment core samples are. Rock samples can be utilized for natural history sciences and other various purposes; some of these purposes are connected to socially important issues such as earthquake mechanisms and mineral resource developments. Researchers and educators can access JAMSTEC rock samples and associated data through 'GANSEKI [2]', the JAMSTEC Deep Seafloor Rock Sample Database. GANSEKI was established on the Internet in 2006 and its contents and functions have been continuously enriched and upgraded since then. GANSEKI currently provides 19 thousand sample metadata records, 9 thousand collection inventory records, and 18 thousand geochemical data records. Most of these samples were recovered from the North-western Pacific Ocean, although samples from other areas are also included. The major update of GANSEKI in May 2013 involved a replacement of the database core system and a redesign of the user interface. In the new GANSEKI, users can select samples easily and precisely using multi-index search, numerical

  20. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    Science.gov (United States)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
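The weight-learning dynamics described in the abstract can be sketched in a few lines: a minimal, illustrative Wang-Landau-style sampler on a 1D double-well target, with the state space partitioned into bins, per-bin log-weights learnt on the fly, and only a fraction `frac` of those weights used to bias the target (the "partial biasing" idea). All numerical choices below (potential, bin count, gain schedule) are illustrative placeholders, not the paper's.

```python
import math
import random

def double_well(x):
    # potential V(x) with minima at x = +/-1 and a barrier at x = 0;
    # the target density is proportional to exp(-V(x))
    return 4.0 * (x * x - 1.0) ** 2

def adaptive_sampler(n_steps=20000, n_bins=20, lo=-2.0, hi=2.0,
                     frac=0.5, seed=0):
    """Wang-Landau-type sampler: learn per-bin log-weights on the fly and
    bias the target by a fraction `frac` of those weights (schematic
    stand-in for the partial-biasing algorithm, not the paper's code)."""
    rng = random.Random(seed)
    width = (hi - lo) / n_bins
    theta = [0.0] * n_bins          # running log-weight (~free energy) per bin
    x = -1.0                        # start trapped in the left well
    visited = []

    def bin_of(y):
        return min(n_bins - 1, max(0, int((y - lo) / width)))

    for step in range(1, n_steps + 1):
        y = x + rng.gauss(0.0, 0.5)              # random-walk proposal
        if lo < y < hi:
            # Metropolis ratio for the biased density exp(-V - frac*theta)
            log_acc = (double_well(x) - double_well(y)
                       + frac * (theta[bin_of(x)] - theta[bin_of(y)]))
            if math.log(rng.random()) < log_acc:
                x = y
        # penalize the currently occupied bin; gain decays like 1/step
        theta[bin_of(x)] += 10.0 / step
        visited.append(x)
    return theta, visited
```

With the bias switched off the chain tends to linger much longer in its starting well; the accumulating penalty on visited bins is what drives it over the barrier early in the run.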

  1. Sampling and examination methods used for TMI-2 samples

    International Nuclear Information System (INIS)

    Marley, A.W.; Akers, D.W.; McIsaac, C.V.

    1988-01-01

    The purpose of this paper is to summarize the sampling and examination techniques that were used in the collection and analysis of TMI-2 samples. Samples ranging from auxiliary building air to core debris were collected and analyzed. Handling of the larger samples, and many of the smaller ones, had to be done remotely, and many standard laboratory analytical techniques were modified to accommodate the extremely high radiation fields associated with these samples. The TMI-2 samples presented unique problems in the sampling and laboratory analysis of previously molten fuel debris. 14 refs., 8 figs.

  2. Trace-element measurement in human blood samples

    International Nuclear Information System (INIS)

    Hamidian, M.R.; Ebrahimi-Fakhar, F.

    1992-01-01

    It is conceivable that some essential elements such as zinc, iron, calcium, copper, phosphorus, selenium, etc., have a major impact on biological and metabolic functions in the human body. The concentration of these elements is normally very small and varies within a naturally set tolerance. The accurate measurement of these elements in biological samples, such as blood, is one of the objectives of medical physics in diagnosis. There are many sophisticated methods for accurately measuring the amount of each element in biological samples. The methods used in this project are a combination of proton-induced X-ray emission (PIXE) and neutron activation analysis (NAA). PIXE and NAA are fast and reliable techniques for multielement analysis at the level of parts per million and below.

  3. [Correlation between demyelinating lesions and executive function decline in a sample of Mexican patients with multiple sclerosis].

    Science.gov (United States)

    Aldrete Cortez, V R; Duriez-Sotelo, E; Carrillo-Mora, P; Pérez-Zuno, J A

    2013-09-01

    Multiple Sclerosis (MS) is characterised by several neurological symptoms including cognitive impairment, which has recently been the subject of considerable study. At present, evidence pointing to a correlation between lesion characteristics and specific cognitive impairment is not conclusive. To investigate the presence of a correlation between the characteristics of demyelinating lesions and performance of basic executive functions in a sample of MS patients. We included 21 adult patients with scores of 0 to 5 on the Kurtzke scale and no exacerbations of the disease in at least 3 months prior to the evaluation date. They completed the Stroop test and the Wisconsin Card Sorting Test (WCST). The location of the lesions was determined using magnetic resonance imaging (MRI) performed by a blinded expert in neuroimaging. Demyelinating lesions were more frequently located in the frontal and occipital lobes. The Stroop test showed that as cognitive demand increased on each of the sections in the test, reaction time and number of errors increased. On the WCST, 33.33% of patients registered as having moderate cognitive impairment. No correlation could be found between demyelinating lesion characteristics (location, size, and number) and patients' scores on the tests. Explanations of the causes of cognitive impairment in MS should examine a variety of biological, psychological, and social factors instead of focusing solely on demyelinating lesions. Copyright © 2012 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.

  4. Dynamics of entanglement between two atomic samples with spontaneous scattering

    International Nuclear Information System (INIS)

    Di Lisi, Antonio; De Siena, Silvio; Illuminati, Fabrizio

    2004-01-01

    We investigate the effects of spontaneous scattering on the evolution of entanglement of two atomic samples, probed by phase-shift measurements on optical beams interacting with both samples. We develop a formalism of conditional quantum evolutions and present a wave function analysis implemented in numerical simulations of the state vector dynamics. This method allows us to track the evolution of entanglement and to compare it with the predictions obtained when spontaneous scattering is neglected. We provide numerical evidence that the interferometric scheme to entangle atomic samples is only marginally affected by the presence of spontaneous scattering and should thus be robust even in more realistic situations

  5. Childhood sexual abuse: long-term effects on psychological and sexual functioning in a nonclinical and nonstudent sample of adult women.

    Science.gov (United States)

    Greenwald, E; Leitenberg, H; Cado, S; Tarran, M J

    1990-01-01

    The purpose of this study was to explore how the experience of childhood sexual abuse is related to long-term psychological and sexual functioning in a nonclinical and nonstudent community sample of women. Questionnaires were distributed to 1,500 nurses and returned anonymously. Fifty-four women who had been sexually abused as children (age 15 or younger) responded. These subjects were then matched with 54 nonabused control subjects. Although there was no difference on a measure of self-esteem, the abused group reported more symptoms of distress on the Global Severity Index and on seven out of nine subscales of the Derogatis Brief Symptom Inventory. They also reported more disturbance on a scale which examined psychological symptoms that have been commonly reported in the literature to be particularly associated with sexual abuse. These differences between the abused and nonabused groups were evident even after controlling for differences in subjects' perceptions of parental emotional support. Unlike the results for psychological adjustment, however, the abused subjects did not differ from the control subjects on self-reported levels of sexual satisfaction or sexual dysfunction.

  6. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
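The UCL95% described above is computed from the number of samples, the average, and the standard deviation of the analytical results. In its conventional one-sided form that is mean + t(0.95, n-1) * s / sqrt(n). A minimal sketch assuming this standard form (the report's exact formula is not reproduced here) with hypothetical concentration values:

```python
import math
import statistics

def ucl95(results, t_crit):
    """One-sided 95% upper confidence limit on the mean:
    UCL95 = mean + t_{0.95, n-1} * s / sqrt(n).
    `t_crit` is the Student-t quantile for n-1 degrees of freedom
    (e.g. 2.015 for n = 6 samples, df = 5)."""
    n = len(results)
    mean = statistics.fmean(results)
    s = statistics.stdev(results)
    return mean + t_crit * s / math.sqrt(n)

# six hypothetical scrape-sample concentrations (arbitrary units)
conc = [1.10, 0.95, 1.02, 1.08, 0.99, 1.04]
limit = ucl95(conc, t_crit=2.015)
```

The one-sided t-quantile, rather than the two-sided one, is the appropriate choice here because only the upper bound on concentration matters for the characterization.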

  7. Piloting the use of experience sampling method to investigate the everyday social experiences of children with Asperger syndrome/high functioning autism.

    Science.gov (United States)

    Cordier, Reinie; Brown, Nicole; Chen, Yu-Wei; Wilkes-Gillan, Sarah; Falkmer, Torbjorn

    2016-01-01

    This pilot study explored the nature and quality of social experiences of children with Asperger Syndrome/High Functioning Autism (AS/HFA) through experience sampling method (ESM) while participating in everyday activities. ESM was used to identify the contexts and content of daily life experiences. Six children with AS/HFA (aged 8-12) wore an iPod Touch on seven consecutive days, while being signalled to complete a short survey. Participants were in the company of others 88.3% of their waking time, spent 69.0% of their time with family and 3.8% with friends, but only conversed with others 26.8% of the time. Participants had more positive experiences and emotions when they were with friends compared with other company. Participating in leisure activities was associated with enjoyment, interest in the occasion, and having positive emotions. ESM was found to be helpful in identifying the nature and quality of social experiences of children with AS/HFA from their perspective.

  8. Soil sample moisture content as a function of time during oven drying for gamma-ray spectroscopic measurements

    International Nuclear Information System (INIS)

    Benke, R.R.; Kearfott, K.J.

    1999-01-01

    In routine gamma-ray spectroscopic analysis of collected soil samples, the procedure often calls for removing soil moisture by oven drying overnight at a temperature of 100 deg. C. Oven drying not only minimizes the gamma-ray self-attenuation of soil samples, due to the absence of water during the gamma-ray spectroscopic analysis, but also allows a straightforward calculation of the specific activity of radionuclides in soil, historically based on the sample dry weight. Because radon exhalation is strongly dependent on moisture, knowledge of the oven-drying time dependence of the soil moisture content, combined with radon exhalation measurements during oven drying and at room temperature for varying soil moisture contents, would allow conclusions to be drawn on how the oven-drying radon exhalation rate depends on soil moisture content. Determinations of the oven-drying radon exhalation from soil samples allow corrections to be made for the immediate laboratory gamma-ray spectroscopy of radionuclides in the natural uranium decay chain. This paper presents the results of soil moisture content measurements during oven drying and suggests useful empirical fits to the moisture data.
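The abstract does not reproduce the paper's empirical fits; one common functional form for an oven-drying curve is exponential decay, m(t) = m0 * exp(-k t), which becomes a straight line after a log transform and can then be fitted by ordinary least squares. A sketch under that assumption, with synthetic data:

```python
import math

def fit_exponential_drying(times, moisture):
    """Fit m(t) = m0 * exp(-k t) by ordinary least squares on
    ln(m) = ln(m0) - k t. (Exponential decay is one plausible choice;
    the actual empirical form used in the paper is not given here.)"""
    n = len(times)
    ys = [math.log(m) for m in moisture]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    sxx = sum((t - t_mean) ** 2 for t in times)
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
    slope = sxy / sxx
    return math.exp(y_mean - slope * t_mean), -slope  # (m0, k)

# synthetic oven-drying data: 8% initial moisture, rate constant 0.5 per hour
t = [0.0, 1.0, 2.0, 4.0, 8.0]
m = [8.0 * math.exp(-0.5 * ti) for ti in t]
m0, k = fit_exponential_drying(t, m)
```

Real drying curves often show an initial constant-rate period before the exponential tail, so a piecewise fit may be needed in practice.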

  9. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    The deep penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique that uses the emission point as a sampling station is presented. Its main advantage is to choose, at each emission-point station, the sampling number that minimizes the total cost of the random walk. Further, a related importance sampling method is also derived. Its main principle is to define an importance function of the response due to the particle state and to make the sampling number of emitted particles proportional to this importance function. The numerical results show that the adaptive method with the emission point as a station can, to some degree, overcome the tendency to underestimate the result, and that the related importance sampling method gives satisfactory results as well. (author)

  10. On the assessment of extremely low breakdown probabilities by an inverse sampling procedure [gaseous insulation

    DEFF Research Database (Denmark)

    Thyregod, Poul; Vibholm, Svend

    1991-01-01

    First breakdown voltages obtained under the inverse sampling procedure assuming a double exponential flashover probability function are discussed. An inverse sampling procedure commences the voltage application at a very low level, followed by applications at stepwise increased levels until... the flashover probability function and the corresponding distribution of first breakdown voltages under the inverse sampling procedure, and show how this relation may be utilized to assess the single-shot flashover probability corresponding to the observed average first breakdown voltage. Since the procedure is based on voltage applications in the neighbourhood of the quantile under investigation, the procedure is found to be insensitive to the underlying distributional assumptions...

  11. Influence of the sample anticoagulant on the measurements of impedance aggregometry in cardiac surgery

    Directory of Open Access Journals (Sweden)

    Cristina Solomon

    2008-10-01

    Full Text Available Cristina Solomon(1), Michael Winterhalter(1), Isabel Gilde(1), Ludwig Hoy(2), Andreas Calatzis(3), Niels Rahe-Meyer(1). (1)Department of Anesthesiology, Hannover Medical School, Hannover, Germany; (2)Institute for Biometry, Hannover Medical School, Hannover, Germany; (3)Department of Hemostasis and Transfusion Medicine, University Hospital Munich, Munich, Germany. Background: The standard method of assessment of platelet function is light transmission aggregometry (LTA), performed in citrated platelet-rich plasma (PRP). With LTA, a decrease and subsequent post-cardiopulmonary bypass (CPB) recovery of platelet function have been reported during cardiac surgery. Multiple electrode aggregometry (MEA) may be used as a point-of-care method to monitor perioperative changes in platelet function. Since MEA assesses macroaggregation, which is influenced by the plasmatic level of unbound calcium, citrate may be inadequate as an anticoagulant for MEA. We used citrate and heparin for MEA samples to see with which anticoagulant the intraoperative decrease and postoperative recovery in platelet function previously described with other aggregometric methods in cardiac surgery may be observed with MEA. Methods: Blood was obtained from 60 patients undergoing routine cardiac surgery and the samples were collected in standard tubes containing unfractionated heparin (50 U/mL) or trisodium citrate (3.2%). The samples were obtained before CPB, at 30 minutes on CPB, at the end of CPB, and on the first postoperative day. MEA was performed using the Multiplate® analyzer. Collagen (COLtest, 100 μg/mL) and TRAP-6 (thrombin receptor activating peptide, TRAPtest, 1 mM) were used as aggregation agonists. Results: Platelet aggregometric response decreased significantly during CPB. Platelet aggregation assessed using TRAP-6 as agonist on heparinized blood correlated significantly with the duration of CPB (r = −0.41, p = 0.001, 2-tailed Pearson test). The aggregometric analysis performed on the first

  12. Optimum time of blood sampling for determination of glomerular filtration rate by single-injection [51Cr]EDTA plasma clearance

    International Nuclear Information System (INIS)

    Broechner-Mortensen, J.; Roedbro, P.

    1976-01-01

    We have investigated the influence on the reproducibility of total [51Cr]EDTA plasma clearance (E) of various times and numbers of blood samples in patients with normal (13 patients) and low (14 patients) renal function. The study aims at fixing a clinically useful procedure suitable for all levels of renal function. Six different types of E were evaluated, with time periods for blood sampling between 3 and 5 h after tracer injection, and the variation from counting radioactivity, s1, was determined as part of the total variation, s2. The optimum mean time, t(E), for blood sampling was calculated as a function of E, as the mean time giving the least change in E for a given change in the 'final slope' of the plasma curve. For patients with normal E, s1 did not contribute significantly to s2, and t(E) was about 2 h. For patients with low renal function, s1 contributed significantly to s2, and t(E) increased steeply with decreasing E. The relative error in s1 from fixed E types was calculated for all levels of renal function. The results indicate that blood sampling individualized according to predicted E values is not necessary. A sufficient precision of E can be achieved at all function levels from three blood samples drawn at 180, 240, and 300 min after injection. (Auth.)

  13. STAR FORMATION LAWS: THE EFFECTS OF GAS CLOUD SAMPLING

    International Nuclear Information System (INIS)

    Calzetti, D.; Liu, G.; Koda, J.

    2012-01-01

    Recent observational results indicate that the functional shape of the spatially resolved star formation-molecular gas density relation depends on the spatial scale considered. These results may indicate a fundamental role of sampling effects on scales that are typically only a few times larger than those of the largest molecular clouds. To investigate the impact of this effect, we construct simple models for the distribution of molecular clouds in a typical star-forming spiral galaxy and, assuming a power-law relation between star formation rate (SFR) and cloud mass, explore a range of input parameters. We confirm that the slope and the scatter of the simulated SFR-molecular gas surface density relation depend on the size of the sub-galactic region considered, due to stochastic sampling of the molecular cloud mass function, and the effect is larger for steeper relations between SFR and molecular gas. There is a general trend for all slope values to tend to ∼unity for region sizes larger than 1-2 kpc, irrespective of the input SFR-cloud relation. The region size of 1-2 kpc corresponds to the area where the cloud mass function becomes fully sampled. We quantify the effects of selection biases in data tracing the SFR, either as thresholds (i.e., clouds smaller than a given mass value do not form stars) or as backgrounds (e.g., diffuse emission unrelated to current star formation is counted toward the SFR). Apparently discordant observational results are brought into agreement via this simple model, and the comparison of our simulations with data for a few galaxies supports a steep (>1) power-law index between SFR and molecular gas.
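The stochastic-sampling effect discussed above hinges on drawing finitely many clouds from a power-law mass function and summing a power-law SFR contribution per cloud. A minimal sketch using inverse-transform sampling (the mass-function index, mass limits, and SFR normalization below are illustrative placeholders, not the paper's values):

```python
import random

def sample_cloud_masses(n, m_min=1e4, m_max=1e6, alpha=1.8, rng=None):
    """Draw n cloud masses from a truncated power-law mass function
    dN/dM ~ M**(-alpha) by inverse-transform sampling."""
    rng = rng or random.Random(42)
    a = 1.0 - alpha
    lo, hi = m_min ** a, m_max ** a
    # uniform u in [0, 1) mapped through the inverse CDF of the power law
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

def region_sfr(masses, power=1.5, eff=1e-8):
    """Total SFR of a region, assuming SFR_cloud = eff * M**power,
    i.e. a steeper-than-linear SFR-cloud relation as explored in the text."""
    return sum(eff * m ** power for m in masses)

# a small region samples only a handful of clouds, so its summed SFR
# scatters strongly from draw to draw; large regions average this out
masses = sample_cloud_masses(5)
```

Repeating the draw for many small "regions" and comparing the scatter of `region_sfr` against that of large regions reproduces the qualitative scale dependence described in the abstract.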

  14. Imaging and cognitive genetics: the Norwegian Cognitive NeuroGenetics sample.

    Science.gov (United States)

    Espeseth, Thomas; Christoforou, Andrea; Lundervold, Astri J; Steen, Vidar M; Le Hellard, Stephanie; Reinvang, Ivar

    2012-06-01

    Data collection for the Norwegian Cognitive NeuroGenetics sample (NCNG) was initiated in 2003 with a research grant (to Ivar Reinvang) to study cognitive aging, brain function, and genetic risk factors. The original focus was on the effects of aging (from middle age and up) and candidate genes (e.g., APOE, CHRNA4) in cross-sectional and longitudinal designs, with the cognitive and MRI-based data primarily being used for this purpose. However, as the main topic of the project broadened from cognitive aging to imaging and cognitive genetics more generally, the sample size, age range of the participants, and scope of available phenotypes and genotypes, have developed beyond the initial project. In 2009, a genome-wide association (GWA) study was undertaken, and the NCNG proper was established to study the genetics of cognitive and brain function more comprehensively. The NCNG is now controlled by the NCNG Study Group, which consists of the present authors. Prominent features of the NCNG are the adult life-span coverage of healthy participants with high-dimensional imaging, and cognitive data from a genetically homogenous sample. Another unique property is the large-scale (sample size 300-700) use of experimental cognitive tasks focusing on attention and working memory. The NCNG data is now used in numerous ongoing GWA-based studies and has contributed to several international consortia on imaging and cognitive genetics. The objective of the following presentation is to give other researchers the information necessary to evaluate possible contributions from the NCNG to various multi-sample data analyses.

  15. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  16. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was calculated by importance sampling, which was performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
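The final step, estimating the failure probability by importance sampling, can be illustrated in isolation. The sketch below replaces the Kriging metamodel and kernel density with a known limit-state function and a Gaussian sampling density shifted toward the limit state; the estimator (1/n) * sum I(g(x) < 0) * f(x)/h(x) is the same in form.

```python
import random
from statistics import NormalDist

def failure_prob_is(g, shift, n=100_000, seed=1):
    """Importance-sampling estimate of P[g(X) < 0] for X ~ N(0, 1),
    drawing from the shifted density h = N(shift, 1) centred near the
    limit state. Here the shift plays the schematic role of the paper's
    kernel density; this is only an illustrative stand-in."""
    rng = random.Random(seed)
    f, h = NormalDist(0.0, 1.0), NormalDist(shift, 1.0)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if g(x) < 0.0:                     # indicator of the failure domain
            total += f.pdf(x) / h.pdf(x)   # likelihood-ratio weight
    return total / n

# linear limit state g(x) = 3 - x; the exact answer is 1 - Phi(3)
p_fail = failure_prob_is(lambda x: 3.0 - x, shift=3.0)
```

Because the sampling density is centred on the limit state, nearly half the draws land in the failure domain, giving orders-of-magnitude variance reduction compared with crude Monte Carlo at this probability level.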

  17. Cellular characterization of compression induced-damage in live biological samples

    Science.gov (United States)

    Bo, Chiara; Balzer, Jens; Hahnel, Mark; Rankin, Sara M.; Brown, Katherine A.; Proud, William G.

    2011-06-01

    Understanding the dysfunctions that high-intensity compression waves induce in human tissues is critical for improving acute-phase treatments and requires the development of experimental models of traumatic damage in biological samples. In this study we have developed an experimental system to directly assess the impact of dynamic loading conditions on cellular function at the molecular level. Here we present a confinement chamber designed to subject live cell cultures in a liquid environment to compression waves in the range of tens of MPa using a split Hopkinson pressure bar system. Recording the loading history and collecting the samples post-impact without external contamination allow the definition of parameters, such as the pressure and duration of the stimulus, that can be related to the cellular damage. The compression experiments are conducted on mesenchymal stem cells from BALB/c mice and the damage analyses are compared to two control groups. Changes in stem cell viability, phenotype, and function are assessed by flow cytometry and with in vitro bioassays at two different time points. Identifying the cellular and molecular mechanisms underlying the damage caused by dynamic loading in live biological samples could enable the development of new treatments for traumatic injuries.

  18. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); problems accompanied with weld crown varieties, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is described too. 7 pictures

  19. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    Full Text Available In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  20. Efficiency corrections in determining the 137Cs inventory of environmental soil samples by using relative measurement method and GEANT4 simulations

    International Nuclear Information System (INIS)

    Li, Gang; Liang, Yongfei; Xu, Jiayun; Bai, Lixin

    2015-01-01

    The determination of the 137Cs inventory is widely used to estimate the soil erosion or deposition rate. The generally used method to determine the activity of volumetric samples is the relative measurement method, which employs a calibration standard sample with accurately known activity. This method has great advantages in accuracy and operation only when there is a small difference in elemental composition, sample density, and geometry between the measured samples and the calibration standard. Otherwise it needs additional efficiency corrections in the calculation. Monte Carlo simulations can handle these correction problems easily, with lower financial cost and higher accuracy. This work presents a detailed description of the simulation and calibration procedure for a conventionally used commercial P-type coaxial HPGe detector with cylindrical sample geometry. The effects of sample elemental composition, density, and geometry were discussed in detail and calculated in terms of efficiency correction factors. The effect of sample placement was also analyzed; the results indicate that the radioactive nuclides and sample density are not absolutely uniformly distributed along the axial direction. Finally, a unified binary quadratic functional relationship for the efficiency correction factors as a function of sample density and height was obtained by the least-squares fitting method. This function covers the sample density and height ranges of 0.8–1.8 g/cm3 and 3.0–7.25 cm, respectively. The efficiency correction factors calculated by the fitted function are in good agreement with those obtained by the GEANT4 simulations, with the determination coefficient greater than 0.9999. The results obtained in this paper make the above-mentioned relative measurements more accurate and efficient in the routine radioactive analysis of environmental cylindrical soil samples. - Highlights: • Determination of 137Cs inventory in environmental soil samples by using relative
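The binary (bivariate) quadratic fit of the efficiency correction factors can be reproduced schematically: build the design matrix for k(rho, h) = c0 + c1*rho + c2*h + c3*rho**2 + c4*h**2 + c5*rho*h and solve the normal equations. The routine below is a generic least-squares sketch; the paper's published coefficients are not reproduced here.

```python
def fit_binary_quadratic(rho, h, k):
    """Least-squares fit of correction factors k(rho, h) to a binary
    quadratic, k = c0 + c1*rho + c2*h + c3*rho**2 + c4*h**2 + c5*rho*h,
    via the normal equations. Returns the six coefficients."""
    rows = [[1.0, r, z, r * r, z * z, r * z] for r, z in zip(rho, h)]
    m = len(rows[0])
    # normal equations: (A^T A) c = A^T k
    ata = [[sum(row[i] * row[j] for row in rows) for j in range(m)]
           for i in range(m)]
    atk = [sum(row[i] * y for row, y in zip(rows, k)) for i in range(m)]
    return solve(ata, atk)

def solve(a, b):
    """Solve a small dense linear system by Gaussian elimination
    with partial pivoting (pure Python, adequate for a 6x6 system)."""
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c]
                              for c in range(r + 1, n))) / a[r][r]
    return x
```

On a grid covering the quoted ranges (density 0.8–1.8 g/cm3, height 3.0–7.25 cm) the six coefficients are recovered exactly from noise-free data, which is a useful sanity check before fitting simulated correction factors.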

  1. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Testing of a method of importance sampling for use with SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.; Prust, J.O.; Edwards, H.H.

    1985-10-01

    The Importance Sampling Scheme is designed to concentrate sampling in the high-dose region of the parameter space. A sensitivity analysis of an initial case study is used to roughly define the high-dose and high-risk region of the parameter space. By applying modified distributions to the individual parameter ranges it was possible to concentrate sampling in regions of the parameter range that lead to high doses and risks. Comparison of risk estimates and cumulative distribution functions of dose for an increasing number of runs of the SYVAC model indicated that the risk estimate had converged at 1200 importance sampling runs. Examination of a plot of risk in various dose bands supported this conclusion. It was clear that the random sampling had not achieved convergence at 400 runs. (author)

  3. Fitting by a pearson II function of the spatial deposited energy distribution in superconducting YBaCuO samples calculated by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Cruz Inclan, Carlos M.; Leyva Fabelo, Antonio; Alfonso Vazquez, Onexis

    2001-01-01

The spatial distribution of deposited energy inside YBa2Cu3O7 superconducting ceramics irradiated with gamma rays was simulated using the EGS4 code system, based on the Monte Carlo method. The obtained distributions evidence a notable inhomogeneity, which may be one of the possible sources of inconsistent results in irradiation studies. The profiles of these distributions show asymmetrical behaviour, which may be fitted satisfactorily by a Pearson II Gamma-type function. These fittings are presented in the paper, and the behaviour of the fitting parameters with the energy of the incident photons, their number, and the experimental geometry was studied. The physical significance of each fitting parameter is discussed in the text. The exponent is related to a certain mass absorption coefficient when the thickness of the sample is sufficiently large.

  4. Caregiver Report of Executive Functioning in a Population-Based Sample of Young Children with Down Syndrome

    Science.gov (United States)

    Lee, Nancy Raitano; Fidler, Deborah J.; Blakeley-Smith, Audrey; Daunhauer, Lisa; Robinson, Cordelia; Hepburn, Susan L.

    2011-01-01

    The current study describes everyday executive function (EF) profiles in young children with Down syndrome. Caregivers of children with Down syndrome (n = 26; chronological ages = 4-10 years; mental ages = 2-4 years) completed the Behavior Rating Inventory of Executive Function-Preschool (BRIEF-P; G. A. Gioia, K. A. Espy, & P. K. Isquith, 2003), a…

  5. Perilymph sampling from the cochlear apex: a reliable method to obtain higher purity perilymph samples from scala tympani.

    Science.gov (United States)

    Salt, Alec N; Hale, Shane A; Plonkte, Stefan K R

    2006-05-15

Measurements of drug levels in the fluids of the inner ear are required to establish kinetic parameters and to determine the influence of specific local delivery protocols. For most substances, this requires cochlear fluid samples to be obtained for analysis. When auditory function is of primary interest, the drug level in the perilymph of scala tympani (ST) is most relevant, since drug in this scala has ready access to the auditory sensory cells. In many prior studies, ST perilymph samples have been obtained from the basal turn, either by aspiration through the round window membrane (RWM) or through an opening in the bony wall. A number of studies have demonstrated that such samples are likely to be contaminated with cerebrospinal fluid (CSF). CSF enters the basal turn of ST through the cochlear aqueduct when the bony capsule is perforated or when fluid is aspirated. The degree of sample contamination has, however, not been widely appreciated. Recent studies have shown that perilymph samples taken through the RWM are highly contaminated with CSF, with samples greater than 2 μL in volume containing more CSF than perilymph. In spite of this knowledge, many groups continue to sample from the base of the cochlea, as it is a well-established method. We have developed an alternative, technically simple method to increase the proportion of ST perilymph in a fluid sample. The sample is taken from the apex of the cochlea, a site that is distant from the cochlear aqueduct. A previous problem with sampling through a perforation in the bone was that the native perilymph, driven by CSF pressure, rapidly leaked out and was lost to the middle ear space. We therefore developed a procedure to collect all the fluid that emerged from the apex after perforation. We evaluated the method using the marker ion trimethylphenylammonium (TMPA). TMPA was applied to the perilymph of guinea pigs either by RW irrigation or by microinjection into the apical turn.

  6. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
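The Horvitz–Thompson estimator underlying this framework can be sketched in its simplest setting. The toy values, inclusion probabilities, and independent-inclusion (Poisson) design below are assumptions for illustration; the paper's T-stage snowball designs are far more general.

```python
import random

# Toy population: values y and known inclusion probabilities pi.
y  = [4.0, 7.0, 1.0, 9.0]
pi = [0.5, 0.2, 0.8, 0.4]

def ht_total(sample):
    """Horvitz-Thompson estimator of the population total: sum of y_i / pi_i."""
    return sum(y[i] / pi[i] for i in sample)

def poisson_sample():
    # Each unit enters the sample independently with probability pi[i].
    return [i for i in range(len(y)) if random.random() < pi[i]]

random.seed(0)
reps = 200_000
avg = sum(ht_total(poisson_sample()) for _ in range(reps)) / reps
```

Averaged over repeated draws the estimator recovers the true total sum(y) = 21, the design-unbiasedness that the graph-sampling reformulations inherit.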

  7. An Automated Sample Processing System for Planetary Exploration

    Science.gov (United States)

    Soto, Juancarlos; Lasnik, James; Roark, Shane; Beegle, Luther

    2012-01-01

An Automated Sample Processing System (ASPS) for wet chemistry processing of organic materials on the surface of Mars has been jointly developed by Ball Aerospace and the Jet Propulsion Laboratory. The mechanism has been built and tested to demonstrate TRL (technology readiness level) 4. This paper describes the function of the system, the mechanism design, lessons learned, and several challenges that were overcome.

  8. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
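A minimal sketch of the textbook linear and mean observed-score equating functions referenced above; the score data are invented, and this is not the article's new general method.

```python
import statistics as st

# Illustrative score distributions on two test forms, X and Y.
x_scores = [10, 12, 15, 18, 20, 22, 25]
y_scores = [12, 14, 16, 19, 22, 25, 27]

mx, sx = st.mean(x_scores), st.pstdev(x_scores)
my, sy = st.mean(y_scores), st.pstdev(y_scores)

def linear_equate(x):
    """l(x) = mu_Y + (sigma_Y / sigma_X) * (x - mu_X): matches mean and SD."""
    return my + (sy / sx) * (x - mx)

def mean_equate(x):
    """Stronger assumption (equal SDs), one fewer statistical estimate --
    the kind of trade-off that can reduce equating error with small samples."""
    return x + (my - mx)
```

Both functions map the form-X mean onto the form-Y mean; the mean-equating form estimates one fewer quantity, illustrating the "stronger assumptions, fewer estimates" idea in the abstract.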

  9. Amotivation and functional outcomes in early schizophrenia.

    Science.gov (United States)

    Fervaha, Gagan; Foussias, George; Agid, Ofer; Remington, Gary

    2013-12-15

    Negative symptoms, particularly amotivation/apathy, are intimately tied to functional outcomes. In the present study, apathy strongly predicted psychosocial functioning in a sample of early course schizophrenia patients. This relationship remained robust even after controlling for other clinical variables. These data suggest amotivation is core to functioning across the disease course. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
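A minimal MUSIC sketch for point-like scatterers is given below. The line-array geometry, the free-space Green's-function form, the wavenumber, and the reflectivities are illustrative assumptions; the paper's half-disc Green function is not reproduced.

```python
import numpy as np

k = 2.0 * np.pi                                        # wavenumber (assumed)
rx = np.stack([np.linspace(-5, 5, 21), np.full(21, 10.0)], axis=1)  # receivers
sc = np.array([[0.0, 0.0], [1.5, 0.5], [-2.0, 1.0]])                # scatterers

def green(p, q):
    """Free-space point-source response (illustrative form)."""
    d = np.linalg.norm(p - q)
    return np.exp(1j * k * d) / d

G = np.array([[green(r, s) for s in sc] for r in rx])  # 21 x 3 response
K = G @ np.diag([1.0, 0.8, 1.2]) @ G.T                 # multistatic data (Born)

U, _, _ = np.linalg.svd(K)
Un = U[:, sc.shape[0]:]                                # noise subspace

def pseudospectrum(z):
    g = np.array([green(r, z) for r in rx])
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(Un.conj().T @ g)
```

At a true scatterer location the steering vector lies in the signal subspace, so its projection onto the noise subspace vanishes and the pseudospectrum peaks sharply.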

  11. Computer graphics for quality control in the INAA of geological samples

    Science.gov (United States)

    Grossman, J.N.; Baedecker, P.A.

    1987-01-01

A data reduction system for the routine instrumental activation analysis of samples is described, with particular emphasis on interactive graphics capabilities for evaluating analytical quality. Graphics procedures have been developed to interactively control the analysis of selected photopeaks during spectral analysis, and to evaluate detector performance during a given counting cycle. Graphics algorithms are also used to compare the data on reference samples with accepted values, to prepare quality control charts to evaluate long-term precision and to search for systematic variations in data on reference samples as a function of time. © 1987 Akadémiai Kiadó.

  12. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

    For example, MacGibbon and Tomberlin. (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...

  13. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  14. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  15. Effect of simulated sampling disturbance on creep behaviour of rock salt

    Science.gov (United States)

    Guessous, Z.; Gill, D. E.; Ladanyi, B.

    1987-10-01

This article presents the results of an experimental study of the creep behaviour of a rock salt under uniaxial compression as a function of prestrain, simulating sampling disturbance. The prestrain was produced by radial compressive loading of the specimens prior to creep testing. The tests were conducted on an artificial salt to avoid excessive scattering of the results. The results obtained from several series of single-stage creep tests show that, in the short term, the creep response of salt is strongly affected by the preloading history of the samples. The nature of this effect depends upon the intensity of the radial compressive preloading, and its magnitude is a function of the creep stress level. The effect, however, decreases with increasing plastic deformation, indicating that large creep strains may eventually lead to a complete loss of preloading memory.

  16. Ambient krypton-85 air sampling at Hanford

    International Nuclear Information System (INIS)

    Trevathan, M.S.; Price, K.R.

    1985-01-01

In the fall of 1982, the Environmental Evaluations Section of Pacific Northwest Laboratory (PNL) initiated a network of continuous 85Kr air samplers located on and around the Hanford Site. This effort was in response to the resumption of operations at a nuclear fuel reprocessing plant located onsite, where 85Kr was to be released during fuel dissolution. Preoperational data were collected using noble gas samplers designed by the Environmental Protection Agency-Las Vegas (EPA-LV). The samplers functioned erratically, resulting in excessive maintenance costs, and prompted a search for a new sampling system. State-of-the-art 85Kr sampling methods were reviewed and found to be too costly, too complex, and inappropriate for field application, so a simple bag collection system was designed and field tested. The system is composed of a reinforced, heavy plastic bag connected to a variable-flow pump and housed in a weatherproof enclosure. At the end of the four-week sampling period, the air in the bag is transferred by a compressor into a pressure tank for easy transport to the laboratory for analysis. After several months of operation, the air sampling system has proven its reliability and sensitivity to ambient levels of 85Kr.

  17. Ambient krypton-85 air sampling at Hanford

    International Nuclear Information System (INIS)

    Trevathan, M.S.; Price, K.R.

    1984-10-01

In the fall of 1982, the Environmental Evaluations Section of Pacific Northwest Laboratory (PNL) initiated a network of continuous krypton-85 air samplers located on and around the Hanford Site. This effort was in response to the resumption of operations at a nuclear fuel reprocessing plant located onsite, where krypton-85 was to be released during fuel dissolution. Preoperational data were collected using noble gas samplers designed by the Environmental Protection Agency-Las Vegas (EPA-LV). The samplers functioned erratically, resulting in excessive maintenance costs, and prompted a search for a new sampling system. State-of-the-art krypton-85 sampling methods were reviewed and found to be too costly, too complex, and inappropriate for field application, so a simple bag collection system was designed and field tested. The system is composed of a reinforced, heavy plastic bag connected to a variable-flow pump and housed in a weatherproof enclosure. At the end of the four-week sampling period, the air in the bag is transferred by a compressor into a pressure tank for easy transport to the laboratory for analysis. After several months of operation, the air sampling system has proven its reliability and sensitivity to ambient levels of krypton-85. 3 references, 3 figures, 1 table

  18. Phylotyping and functional analysis of two ancient human microbiomes.

    Directory of Open Access Journals (Sweden)

    Raúl Y Tito

Full Text Available BACKGROUND: The Human Microbiome Project (HMP) is one of the U.S. National Institutes of Health Roadmap for Medical Research initiatives. Primary interests of the HMP include the distinctiveness of different gut microbiomes, the factors influencing microbiome diversity, and the functional redundancies of the members of human microbiotas. In the present work, we contribute to these interests by characterizing two extinct human microbiotas. METHODOLOGY/PRINCIPAL FINDINGS: We examine two paleofecal samples originating from cave deposits in Durango, Mexico, and dating to approximately 1300 years ago. Contamination control is a serious issue in ancient DNA research; we use a novel approach to control contamination. After determining that each sample originated from a different human, we generated 45,000 shotgun DNA sequencing reads. The phylotyping and functional analysis of these reads reveals a signature consistent with modern gut ecology. Interestingly, inter-individual variability was observed for phylotypes but not for functional pathways. The two ancient samples have functional profiles more similar to each other than to a recently published profile for modern humans. This similarity could not be explained by a chance sampling of the databases. CONCLUSIONS/SIGNIFICANCE: We conduct a phylotyping and functional analysis of ancient human microbiomes, while providing novel methods to control for DNA contamination and novel hypotheses about past microbiome biogeography. We postulate that natural selection has more of an influence on microbiome functional profiles than it does on the species represented in the microbial ecology. We propose that human microbiomes were more geographically structured during pre-Columbian times than today.

  19. 40 CFR 1065.245 - Sample flow meter for batch sampling.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Sample flow meter for batch sampling... Sample flow meter for batch sampling. (a) Application. Use a sample flow meter to determine sample flow... difference between a diluted exhaust sample flow meter and a dilution air meter to calculate raw exhaust flow...

  20. Lower Limb Function in Elderly Korean Adults Is Related to Cognitive Function.

    Science.gov (United States)

    Kim, A-Sol; Ko, Hae-Jin

    2018-05-01

Patients with cognitive impairment have decreased lower limb function. We therefore aimed to investigate the relationship between lower limb function and cognitive disorders, to determine whether lower limb function can be screened to identify cognitive decline. Using data from the Korean National Health Insurance Service-National Sample Cohort database, we assessed the cognitive and lower limb functioning of 66-year-olds who underwent national health screening between 2010 and 2014. Cognitive function was assessed via a questionnaire. Timed Up-and-Go (TUG) and one-leg-standing (OLS) tests were performed to evaluate lower limb function. Associations between cognitive and lower limb functions were analyzed, and optimal cut-off points of these tests for screening cognitive decline were determined. Cognitive function was significantly correlated with the TUG interval (r = 0.414); the optimal cut-off points for cognitive disorders were >11 s and ≤12 s for the TUG interval and OLS duration, respectively. Among 66-year-olds who underwent national health screening, a significant correlation between lower limb and cognitive function was demonstrated. The TUG and OLS tests are useful screening tools for cognitive disorders in elderly patients. A large-scale prospective cohort study should be conducted to investigate the causal relationship between cognitive and lower limb function.

  1. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  2. Characteristic Polynomials of Sample Covariance Matrices: The Non-Square Case

    OpenAIRE

    Kösters, Holger

    2009-01-01

    We consider the sample covariance matrices of large data matrices which have i.i.d. complex matrix entries and which are non-square in the sense that the difference between the number of rows and the number of columns tends to infinity. We show that the second-order correlation function of the characteristic polynomial of the sample covariance matrix is asymptotically given by the sine kernel in the bulk of the spectrum and by the Airy kernel at the edge of the spectrum. Similar results are g...

  3. An ultrasonic corer for planetary rock sample retrieval

    International Nuclear Information System (INIS)

    Harkness, P; Cardoni, A; Lucas, M

    2009-01-01

    Several recent and planned space projects have been focussed on surface rovers for planetary missions, such as the U.S. Mars Exploration Rovers and the European ExoMars. The main functions of similar extraterrestrial vehicles in the future will be moving across planetary surfaces and retrieving rock samples. This paper presents a novel ultrasonic rock sampling tool tuned in a longitudinal-torsional mode along with the conceptual design of a full coring apparatus for preload delivery and core removal. Drilling and coring bits have been designed so that a portion of the longitudinal motion supplied by the ultrasonic transducer is converted into torsional motion. Results of drilling/coring trials are also presented.

  4. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    Science.gov (United States)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs, based on the acceptance quality level (AQL). The first-rank sampling plan inspects a lot consisting of map sheets, and the second inspects a lot consisting of the features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, covering two lot-size cases. The first case is a small lot size, with nonconformities modeled by a hypergeometric distribution function; the second is a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items, and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
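The two lot-size models reduce to simple acceptance-probability (OC-curve) calculations for a plan (n, c): accept the lot if at most c nonconforming items appear in a sample of n. The plan parameters below are illustrative, not values from the paper.

```python
from math import comb, exp, factorial

def p_accept_hypergeom(N, D, n, c):
    """Small lot: D nonconforming among N items, sampling without replacement."""
    return sum(comb(D, d) * comb(N - D, n - d) for d in range(c + 1)) / comb(N, n)

def p_accept_poisson(n, p, c):
    """Large lot: the nonconformity count is approximated as Poisson(n * p)."""
    lam = n * p
    return exp(-lam) * sum(lam ** d / factorial(d) for d in range(c + 1))

# Illustrative plan: sample 5 sheets from a lot of 50, accept on 0 defectives.
oc_clean = p_accept_hypergeom(50, 0, 5, 0)   # a perfect lot is always accepted
oc_two   = p_accept_hypergeom(50, 2, 5, 0)   # 2 nonconforming sheets in the lot
```

Sweeping the lot quality through such a function traces the operating-characteristic curve that the optimization over sample size and acceptance number works against.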

  5. Effect of density increase on self-absorption property of bulk samples

    International Nuclear Information System (INIS)

    Dao Anh Minh; Tran Duc Thiep

    1990-01-01

    Asymptotic behaviour due to self-absorption of photon attenuation function in terms of material density for bulk samples has been considered. Some practical applications have also been presented. (author). 9 refs., 4 figs., 2 tabs
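A common textbook form of this saturation is the slab self-absorption factor; the formula and the numbers below are an assumed illustration, not necessarily the authors' exact attenuation function.

```python
from math import exp

# Slab self-absorption factor: average escape probability of photons emitted
# uniformly in a slab of thickness t (cm), for mass attenuation coefficient
# mu (cm^2/g) and density rho (g/cm^3).  Values are purely illustrative.
def self_absorption_factor(mu, rho, t):
    x = mu * rho * t                  # dimensionless optical thickness
    return (1.0 - exp(-x)) / x

f_light = self_absorption_factor(0.08, 1.0, 2.0)   # low-density sample
f_dense = self_absorption_factor(0.08, 8.0, 2.0)   # same sample at 8x density
```

As density grows the factor falls well below 1, so the count rate stops scaling linearly with sample mass, which is the asymptotic behaviour the abstract refers to.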

  6. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...

  7. Waste Sampling and Characterization Facility (WSCF)

    International Nuclear Information System (INIS)

    Bozich, J.L.

    1993-07-01

This Maintenance Implementation Plan has been developed for maintenance functions associated with the Waste Sampling and Characterization Facility (WSCF). The plan is developed from the guidelines presented by Department of Energy (DOE) Order 4330.4A, Maintenance Management Program (DOE 1990), Chapter II. The objective of this plan is to provide baseline information for establishing and identifying WHC conformance programs and policies applicable to implementation of the DOE Order 4330.4A guidelines. In addition, this maintenance plan identifies the actions necessary to develop a cost-effective and efficient maintenance program at WSCF.

  8. A Fault Sample Simulation Approach for Virtual Testability Demonstration Test

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng

    2012-01-01

Virtual testability demonstration tests have many advantages, such as low cost, high efficiency, low risk, and few restrictions, but they bring new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration tests, based on stochastic process theory, is proposed. First, the similarities and differences in fault sample generation between physical and virtual testability demonstration tests are discussed. Second, it is pointed out that the fault occurrence process subject to perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given, and steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the size and structure of the simulated fault samples are closer to actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration tests.
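The renewal-process step (i.i.d. interarrival times accumulated until the test window closes) can be sketched directly; the Weibull life distribution and its parameters are illustrative choices, standing in for the "variety of life distributions" the approach admits.

```python
import random

# Under perfect repair the fault occurrence process is a renewal process:
# fault times are cumulative sums of i.i.d. interarrival times.
def simulate_fault_sample(T, shape=1.5, scale=200.0, seed=42):
    rng = random.Random(seed)
    t, fault_times = 0.0, []
    while True:
        t += rng.weibullvariate(scale, shape)   # next interarrival time
        if t > T:
            return fault_times                  # faults observed in [0, T]
        fault_times.append(t)

faults = simulate_fault_sample(T=1000.0)
```

Repeating the simulation over many seeds yields the statistical fault sample (counts and occurrence times) that drives fault injection in the virtual test.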

  9. THE LOCAL [C ii] 158 μ m EMISSION LINE LUMINOSITY FUNCTION

    Energy Technology Data Exchange (ETDEWEB)

    Hemmati, Shoubaneh; Yan, Lin; Capak, Peter; Faisst, Andreas; Masters, Daniel [Infrared Processing and Analysis Center, Department of Astronomy, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States); Diaz-Santos, Tanio [Nucleo de Astronomia de la Facultad de Ingenieria, Universidad Diego Portales, Av. Ejercito Libertador 441, Santiago (Chile); Armus, Lee, E-mail: shemmati@ipac.caltech.edu [Spitzer Science Center, Department of Astronomy, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States)

    2017-01-01

We present, for the first time, the local [C ii] 158 μm emission line luminosity function, measured using a sample of more than 500 galaxies from the Revised Bright Galaxy Sample. [C ii] luminosities are measured from Herschel PACS observations of the Luminous Infrared Galaxies (LIRGs) in the Great Observatories All-sky LIRG Survey and estimated for the rest of the sample based on the far-infrared (far-IR) luminosity and color. The sample covers 91.3% of the sky and is complete at S_60μm > 5.24 Jy. We calculate the completeness as a function of [C ii] line luminosity and distance, based on the far-IR color and flux densities. The [C ii] luminosity function is constrained in the range ~10^7-10^9 L_⊙ by both the 1/V_max and maximum likelihood methods. The shape of our derived [C ii] emission line luminosity function agrees well with the IR luminosity function. For the CO(1-0) and [C ii] luminosity functions to agree, we propose a varying [C ii]/CO(1-0) ratio as a function of CO luminosity, with larger ratios for fainter CO luminosities. Limited high-redshift [C ii] observations, as well as estimates based on the IR and UV luminosity functions, are suggestive of an evolution in the [C ii] luminosity function similar to the evolution trend of the cosmic star formation rate density. Deep surveys using the Atacama Large Millimeter Array at full capability will be able to confirm this prediction.
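The 1/V_max step can be sketched with toy numbers: each detected source contributes 1/V_max to its luminosity bin, where V_max is the volume within which it would still exceed the flux limit. Euclidean geometry and arbitrary units are assumed below; this is not the paper's survey data.

```python
import numpy as np

# Toy 1/V_max luminosity function: Euclidean geometry, arbitrary units.
flux_limit = 5.24e-15                          # survey flux limit (toy units)
L = np.array([1e7, 3e7, 1e8, 5e8, 1e9])       # line luminosities (toy L_sun)

# Maximum distance at which each source would still reach the flux limit,
# and the corresponding maximum accessible (Euclidean) volume.
d_max = np.sqrt(L / (4.0 * np.pi * flux_limit))
v_max = (4.0 / 3.0) * np.pi * d_max ** 3

# Each source contributes 1/V_max to its luminosity bin (normalized per dex).
bins = np.logspace(7.0, 9.2, 4)
phi, _ = np.histogram(L, bins=bins, weights=1.0 / v_max)
phi /= np.diff(np.log10(bins))
```

Brighter sources are visible over a much larger V_max, so each contributes less to the inferred number density, which is why the estimator corrects the flux-limited sample.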

  10. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  11. Self stigmatization, cognitive functions and social functioning in mood disorders

    Directory of Open Access Journals (Sweden)

    Gulsum Ozge Doganavsargil Baysal

    2013-06-01

Full Text Available Purpose: Internalized stigmatization (IS) generally has a negative effect on the diagnosis, treatment, rehabilitation, and prognosis of diseases. The purpose of this study is to compare patients with bipolar disorder and unipolar depression, both in remission, in terms of IS, social functioning (SF), and cognitive function, and secondly to consider the relationships among IS, cognitive functions, and SF. Methods: This cross-sectional study was carried out with bipolar disorder (BD) and unipolar depression (UD) patients in remission, admitted to the psychiatry outpatient clinics of Akdeniz University Hospital. The sample size was estimated as 35 patients. The basic independent variable is the type of disease, and the dependent variables are IS, cognitive functions, and SF. The scales administered were the Internalized Stigma of Mental Illness Scale, the Social Functioning Scale, and, for the assessment of cognitive functions, the Wisconsin Card Sorting Test, the Stroop test, and a test of verbal memory processes. Results: There was a negative correlation between IS and SF scores in all groups. A significant relationship between verbal memory and IS was found only in UD patients; there was no significant relationship between IS and cognitive function in BD patients. Conclusion: This study indicates that, in terms of cognitive functions, patients with unipolar depression are affected as much as patients with bipolar disorder, also manifesting the inverse relation between IS and SF; however, cognitive functions were relevant to IS only in UD patients. [Cukurova Med J 2013; 38(3): 390-402]

  12. Signal processing to evaluate parameters affecting SPE for multi-residue analysis of personal care products.

    Science.gov (United States)

    Pietrogrande, Maria Chiara; Basaglia, Giulia; Dondi, Francesco

    2009-05-01

This paper discusses the development of a comprehensive method for the simultaneous analysis of personal care products (PCPs) based on SPE and GC-MS. The method was developed on 29 target compounds representing PCPs belonging to different chemical classes: surfactants in detergents (alkyl benzenes), fragrances in cosmetics (nitro and polycyclic musks), antioxidants and preservatives (phenols), and plasticizers (phthalates), displaying a wide range of volatility, polarity, and water solubility. In addition to the conventional C18 stationary phase, a surface-modified styrene-divinylbenzene polymeric phase (Strata-X SPE cartridge) was investigated as suitable for the simultaneous extraction of several PCPs with polar and non-polar characteristics. For both sorbents, different solvent compositions and eluting conditions were tested and compared in order to achieve high extraction efficiency for as many sample components as possible. Comparison of the behavior of the two cartridges reveals that, overall, Strata-X provides better efficiency, with extraction recoveries higher than 70% for most of the PCPs investigated. The best results were obtained under the following operative conditions: an evaporation temperature of 40 °C, elution on the Strata-X cartridge using 15 mL of ethyl acetate (EA) as solvent, and operating at a slow flow rate (−10 kPa). In addition to the conventional method based on peak integration, a chemometric approach based on the computation of the experimental autocovariance function (EACVF(tot)) was applied to the complex GC-MS signal: the percentage recovery and information on peak abundance distribution can be evaluated for each procedure step. The EACVF-based signal processing proved very helpful in assisting the development of the analytical procedure, since it saves labor and time and increases result reliability in handling complex GC signals.
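The experimental autocovariance function itself is straightforward to compute; the synthetic multi-peak "chromatogram" below is an illustration (peak shape, count, and sampling step are assumptions, not the paper's data).

```python
import numpy as np

# Synthetic multi-peak "chromatogram": 25 Gaussian peaks at random positions.
rng = np.random.default_rng(3)
t = np.arange(0.0, 100.0, 0.1)
signal = sum(np.exp(-0.5 * ((t - c) / 0.8) ** 2) for c in rng.uniform(5, 95, 25))

def eacvf(y, max_lag):
    """Experimental autocovariance function of signal y at lags 0..max_lag-1."""
    y = y - y.mean()
    n = len(y)
    return np.array([np.dot(y[: n - k], y[k:]) / n for k in range(max_lag)])

acvf = eacvf(signal, max_lag=200)
```

The lag-zero value is the signal variance, and the shape of the decay encodes the average peak width and the abundance distribution, which is the information the EACVF-based procedure exploits.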

  13. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
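
    The role of the sampling fraction in the with/without-replacement distinction can be illustrated with a toy simulation. This sketch uses simple random sampling of a synthetic population, not an RDS referral chain, so it only shows the finite-population effect, not the network-driven bias analyzed in the paper; all names and values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(50.0, 10.0, size=5_000)

def estimator_sd(fraction, replace, reps=1000):
    """Monte Carlo standard deviation of the sample-mean estimator
    at a given sampling fraction."""
    n = int(fraction * population.size)
    means = [population[rng.choice(population.size, n, replace=replace)].mean()
             for _ in range(reps)]
    return float(np.std(means))

results = {f: (estimator_sd(f, True), estimator_sd(f, False))
           for f in (0.05, 0.20, 0.40)}
for f, (with_r, without_r) in results.items():
    print(f"fraction {f:.2f}: sd with replacement {with_r:.3f}, without {without_r:.3f}")
```

    Without replacement the estimator variance shrinks by the finite-population correction (1 − f), a difference that is negligible at the small sampling fractions typical of field RDS studies.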

  14. Free-breathing Sparse Sampling Cine MR Imaging with Iterative Reconstruction for the Assessment of Left Ventricular Function and Mass at 3.0 T.

    Science.gov (United States)

    Sudarski, Sonja; Henzler, Thomas; Haubenreisser, Holger; Dösch, Christina; Zenge, Michael O; Schmidt, Michaela; Nadar, Mariappan S; Borggrefe, Martin; Schoenberg, Stefan O; Papavassiliu, Theano

    2017-01-01

    Purpose To prospectively evaluate the accuracy of left ventricle (LV) analysis with a two-dimensional real-time cine true fast imaging with steady-state precession (trueFISP) magnetic resonance (MR) imaging sequence featuring sparse data sampling with iterative reconstruction (SSIR) performed with and without breath-hold (BH) commands at 3.0 T. Materials and Methods Ten control subjects (mean age, 35 years; range, 25-56 years) and 60 patients scheduled to undergo a routine cardiac examination that included LV analysis (mean age, 58 years; range, 20-86 years) underwent a fully sampled segmented multiple-BH cine sequence (standard of reference) and a prototype undersampled SSIR sequence performed during a single BH and during free breathing (non-BH imaging). Quantitative analysis of LV function and mass was performed. Linear regression, Bland-Altman analysis, and paired t testing were performed. Results Similar to the results in control subjects, analysis of the 60 patients showed excellent correlation with the standard of reference for single-BH SSIR (r = 0.93-0.99) and non-BH SSIR (r = 0.92-0.98) for LV ejection fraction (EF), volume, and mass. Conclusion SSIR cine imaging at 3.0 T is noninferior to the standard of reference irrespective of BH commands. LV mass, however, is overestimated with SSIR. © RSNA, 2016. Online supplemental material is available for this article.

  15. Characteristics of Executive Functioning in a Small Sample of Children With Tourette Syndrome.

    Science.gov (United States)

    Schwam, Dina M; King, Tricia Z; Greenberg, Daphne

    2015-01-01

    Tourette syndrome (TS) is a disorder that involves at least one vocal tic and two or more motor tics; however, associated symptoms of obsessive-compulsive disorder (OCD) and attention-deficit disorder or attention-deficit hyperactivity disorder (ADHD) are common. Many children with TS exhibit educational difficulties and one possible explanation may be deficits in executive functioning. The focus of this study was to look at the severity of symptoms often associated with TS (tics and OCD and ADHD symptoms) and its potential relationship with the Behavior Rating Inventory of Executive Function (BRIEF) Parent Form in 11 children diagnosed with TS aged 8 to 14 years old. The parent of the child completed the BRIEF along with symptom measures evaluating tics, OCD behaviors, and ADHD symptoms. Despite relatively low mean scores on the symptom measures and just a few children exhibiting clinically significant scores on the BRIEF indexes, at least half the children exhibited abnormal scores on the Working Memory, Inhibit, and Shift subscales of the BRIEF. Varying patterns of relationships were found on the BRIEF subscales for each symptom severity scale. Results suggest that the BRIEF may be useful in determining the specific areas of difficulty in a population with variable symptomatology.

  16. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
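
    For a single group, the 90/10 criterion reduces to the familiar sample-size formula n ≥ (z·CV/p)² with z = 1.645 for 90% confidence and p = 0.10 precision. The sketch below computes only this per-group minimum; the paper's model goes further and allocates samples across groups to minimise total metering cost, which is not reproduced here.

```python
import math

def sample_size_90_10(cv, z=1.645, precision=0.10):
    """Minimum sample size meeting the 90/10 criterion for one group
    with coefficient of variation cv (simple random sampling)."""
    return math.ceil((z * cv / precision) ** 2)

for cv in (0.3, 0.5, 1.0):
    print(f"CV = {cv:.1f}: n >= {sample_size_90_10(cv)}")
```

    The quadratic growth of n with CV is why grouping lights by consumption uncertainty, as done in the paper, pays off: low-CV groups need far fewer meters.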

  17. Authentication Assurance Level Application to the Inventory Sampling Measurement System

    International Nuclear Information System (INIS)

    Devaney, Mike M.; Kouzes, Richard T.; Hansen, Randy R.; Geelhood, Bruce D.

    2001-01-01

    This document concentrates on the identification of a standardized assessment approach for the verification of security functionality in specific equipment, the Inspection Sampling Measurement System (ISMS) being developed for MAYAK. Specifically, an Authentication Assurance Level 3 is proposed to be reached in authenticating the ISMS

  18. Second harmonic sound field after insertion of a biological tissue sample

    Science.gov (United States)

    Zhang, Dong; Gong, Xiu-Fen; Zhang, Bo

    2002-01-01

    Second harmonic sound field after inserting a biological tissue sample is investigated by theory and experiment. The sample is inserted perpendicular to the sound axis, whose acoustical properties are different from those of surrounding medium (distilled water). By using the superposition of Gaussian beams and the KZK equation in quasilinear and parabolic approximations, the second harmonic field after insertion of the sample can be derived analytically and expressed as a linear combination of self- and cross-interaction of the Gaussian beams. Egg white, egg yolk, porcine liver, and porcine fat are used as the samples and inserted in the sound field radiated from a 2 MHz uniformly excited focusing source. Axial normalized sound pressure curves of the second harmonic wave before and after inserting the sample are measured and compared with the theoretical results calculated with 10 items of Gaussian beam functions.

  19. 296-B-5 Stack monitoring and sampling system annual system assessment report

    International Nuclear Information System (INIS)

    Ridge, T.M.

    1995-02-01

    The B Plant Administration Manual requires an annual system assessment to evaluate and report the present condition of the sampling and monitoring system associated with Stack 296-B-5 at B Plant. The sampling and monitoring system associated with stack 296-B-5 is functional and performing satisfactorily. This document is an annual assessment report of the systems associated with the 296-B-5 stack

  20. Planetary mass function and planetary systems

    Science.gov (United States)

    Dominik, M.

    2011-02-01

    With planets orbiting stars, a planetary mass function should not be seen as a low-mass extension of the stellar mass function, but a proper formalism needs to take care of the fact that the statistical properties of planet populations are linked to the properties of their respective host stars. This can be accounted for by describing planet populations by means of a differential planetary mass-radius-orbit function, which together with the fraction of stars with given properties that are orbited by planets and the stellar mass function allows the derivation of all statistics for any considered sample. These fundamental functions provide a framework for comparing statistics that result from different observing techniques and campaigns which all have their very specific selection procedures and detection efficiencies. Moreover, recent results both from gravitational microlensing campaigns and radial-velocity surveys of stars indicate that planets tend to cluster in systems rather than being the lonely child of their respective parent star. While planetary multiplicity in an observed system becomes obvious with the detection of several planets, its quantitative assessment however comes with the challenge to exclude the presence of further planets. Current exoplanet samples begin to give us first hints at the population statistics, whereas pictures of planet parameter space in its full complexity call for samples that are 2-4 orders of magnitude larger. In order to derive meaningful statistics, however, planet detection campaigns need to be designed in such a way that well-defined fully deterministic target selection, monitoring and detection criteria are applied. The probabilistic nature of gravitational microlensing makes this technique an illustrative example of all the encountered challenges and uncertainties.

  1. Monitoring of trace amounts of heavy metals in different food and water samples by flame atomic absorption spectrophotometer after preconcentration by amine-functionalized graphene nanosheet.

    Science.gov (United States)

    Behbahani, Mohammad; Tapeh, Nasim Akbari Ghareh; Mahyari, Mojtaba; Pourali, Ali Reza; Amin, Bahareh Golrokh; Shaabani, Ahmad

    2014-11-01

    We introduce graphene oxide modified with amine groups as a new solid phase for the extraction of heavy metal ions including cadmium(II), copper(II), nickel(II), zinc(II), and lead(II). The effects of pH value, flow rates, type, concentration, and volume of the eluent, breakthrough volume, and potentially interfering ions were studied. Under optimized conditions, the extraction efficiency is >97%, the limits of detection are 0.03, 0.05, 0.2, 0.1, and 1 μg L⁻¹ for the ions of cadmium, copper, nickel, zinc, and lead, respectively, and the adsorption capacities for these ions are 178, 142, 110, 125, and 210 mg g⁻¹. The amino-functionalized graphene oxide was characterized by thermogravimetric analysis, transmission electron microscopy, scanning electron microscopy, and Fourier transform infrared spectrometry. The proposed method was successfully applied to the analysis of environmental water and food samples. Good spiked recoveries over the range of 95.8-100.0% were obtained. This work not only proposes a useful method for sample preconcentration but also reveals the great potential of modified graphene as an excellent sorbent material in analytical processes.

  2. [Impact of Socioeconomic Risk Exposure on Maternal Sensitivity, Stress and Family Functionality].

    Science.gov (United States)

    Sidor, Anna; Köhler, Hubert; Cierpka, Manfred

    2018-03-01

    Parental stress exposure can influence the parent-child relationship, child development, and child wellbeing in negative ways. The aim of this study was to investigate the impact of socio-economic risk exposure on the quality of the mother-child interaction and family functionality. A sample of 294 mother-infant dyads at psychosocial risk was compared with a lower-risk, middle-class sample of 125 mother-infant dyads in regard to maternal sensitivity/child's cooperation (CARE-Index), maternal stress (PSI-SF), and family functionality (FB-K). Lower levels of maternal sensitivity/child's cooperation, and by trend also of family functionality, were found among the mothers from the at-risk sample in comparison to the low-risk sample. The level of maternal stress was similar in both samples. The results underpin the negative effects of socio-economic risk exposure on the mother-child relationship. Early, sensitivity-focused family support could counteract the negative effects of early socioeconomic stress.

  3. Stuttering on function words in bilingual children who stutter: A preliminary study.

    Science.gov (United States)

    Gkalitsiou, Zoi; Byrd, Courtney T; Bedore, Lisa M; Taliancich-Klinger, Casey L

    2017-01-01

    Evidence suggests young monolingual children who stutter (CWS) are more disfluent on function than content words, particularly when produced in the initial utterance position. The purpose of the present preliminary study was to investigate whether young bilingual CWS present with this same pattern. The narrative and conversational samples of four bilingual Spanish- and English-speaking CWS were analysed. All four bilingual participants produced significantly more stuttering on function words compared to content words, irrespective of their position in the utterance, in their Spanish narrative and conversational speech samples. Three of the four participants also demonstrated more stuttering on function compared to content words in their narrative speech samples in English, but only one participant produced more stuttering on function than content words in her English conversational sample. These preliminary findings are discussed relative to linguistic planning and language proficiency and their potential contribution to stuttered speech.

  4. Mediators of the Link between Autistic Traits and Marital Functioning in a Non-Clinical Sample

    NARCIS (Netherlands)

    Pollmann, M.M.H.; Finkenauer, C.; Begeer, S.

    2010-01-01

    People with ASD have deficits in their social skills and may therefore experience lower relationship satisfaction. This study investigated possible mechanisms to explain whether and how autistic traits, measured with the AQ, influence relationship satisfaction in a non-clinical sample of 195 married

  5. Prediction of reduced thermal conductivity in nano-engineered rough semiconductor nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Pierre N; Aksamija, Zlatan; Ravaioli, Umberto [Department of Electrical and Computer Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Beckman Institute for Advanced Technology and Science, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Pop, Eric, E-mail: pmartin7@illinois.ed, E-mail: epop@illinois.ed [Department of Electrical and Computer Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Beckman Institute for Advanced Technology and Science, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Micro- and Nano-Technology Laboratory, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States)

    2009-11-15

    We explore phonon decay processes necessary to the design of efficient rough semiconductor nanowire (NW) thermoelectric devices. A novel approach to surface roughness-limited thermal conductivity of Si, Ge, and GaAs NWs with diameter D < 500 nm is presented. In particular, a frequency-dependent phonon scattering rate is computed from perturbation theory and related to a description of the surface through the root-mean-square roughness height Δ and autocovariance length L. Using a full phonon dispersion relation, the thermal conductivity varies quadratically with diameter and roughness as (D/Δ)². Computed results are in agreement with experimental data, and predict remarkably low thermal conductivity below 1 W/m/K in rough-etched 56 nm Ge and GaAs NWs at room temperature.
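
    The quadratic (D/Δ)² dependence can be turned into a back-of-the-envelope scaling rule. The reference point below (56 nm diameter, arbitrary roughness and conductivity) is a placeholder for illustration, not a fit to the paper's data.

```python
def k_scaled(D_nm, delta_nm, k_ref=1.0, D_ref=56.0, delta_ref=3.0):
    """Scale a reference thermal conductivity by the (D/Δ)² roughness law.
    Reference values are illustrative placeholders."""
    return k_ref * (D_nm / delta_nm) ** 2 / (D_ref / delta_ref) ** 2

# Doubling the diameter at fixed roughness quadruples the conductivity:
print(k_scaled(112.0, 3.0) / k_scaled(56.0, 3.0))
```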

  6. Dieticians' intentions to recommend functional foods: The mediating role of consumption frequency of functional foods

    OpenAIRE

    Cha, Myeong Hwa; Lee, Jiyeon; Song, Mi Jung

    2010-01-01

    This study explored the conceptual framework of dieticians' intentions to recommend functional food and the mediating role of consumption frequency. A web-based survey was designed using a self-administered questionnaire. A sample of Korean dieticians (N=233) responded to the questionnaire that included response efficacy, risk perception, consumption frequency, and recommendation intention for functional foods. A structural equation model was constructed to analyze the data. We found that res...

  7. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
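
    A UCL95% of the kind described, a one-sided upper confidence limit computed from the sample count, mean, and standard deviation via Student's t, can be sketched as follows. The six analyte values are invented placeholders, not Tank 18F results.

```python
import math
from statistics import mean, stdev

def ucl95(results):
    """One-sided upper 95% confidence limit on the mean concentration,
    using Student's t with n-1 degrees of freedom (tabulated values)."""
    n = len(results)
    t_values = {5: 2.015, 6: 1.943, 7: 1.895}  # one-sided t_{0.95, df}
    t = t_values[n - 1]
    return mean(results) + t * stdev(results) / math.sqrt(n)

samples = [1.2, 1.5, 1.1, 1.4, 1.3, 1.6]  # hypothetical analyte results
print(round(ucl95(samples), 4))
```

    With few samples the t multiplier is large, so pooling all six scrape samples, as the report does, directly tightens the UCL95%.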

  8. NEON terrestrial field observations: designing continental scale, standardized sampling

    Science.gov (United States)

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  9. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, and preparation is seldom as simple as dissolving the sample in a given solvent. Dissolution alone can remove insoluble materials, which is especially helpful with samples in complex matrices, provided other matrix interactions do not affect extraction. In such cases a large number of components will not dissolve and are therefore eliminated by a simple filtration step. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times enrichment is necessary: the component of interest is present at low levels in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC system. 88 refs

  10. The real-time fitting of radioactive decay curves. Pt. 3. Counting during sampling

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1994-01-01

    An analysis of a least-squares method for the real-time fitting of the theoretical total count function to the actual total count from radioactive decays has been given previously for the case where counting takes place after a sample is taken. The counting may be done in a number of different counting systems which distinguish between different types or energies of radiation emitted from the sample. The method would allow real-time determination of the numbers of atoms and hence activities of the individual isotopes present and has been designated the Time Evolved Least-Squares method (TELS). If the radioactivity which is to be measured exists as an aerosol or in a form where a sample is taken at a constant rate it may be possible to count during sampling and by so doing reduce the total time required to determine the activity of the individual isotopes present. The TELS method is extended here to the case where counting and the evaluation of the activity takes place concurrently with the sampling. The functions which need to be evaluated are derived and the calculations required to implement the method are discussed. As with the TELS method of counting after sampling the technique of counting during sampling and the simultaneous evaluation of activity could be achieved in real-time. Results of testing the method by computer simulation for two counting schemes for the descendants of radon are presented. ((orig.))
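
    The idea of fitting the theoretical total-count function to observed counts can be sketched as a linear least-squares problem when the decay constants are known: cumulative counts follow C(t) = Σᵢ Nᵢ(1 − exp(−λᵢt)) up to detection efficiency. This is a simplified illustration of the approach, not the paper's TELS algorithm with counting during sampling; all numerical values are invented.

```python
import numpy as np

# Two isotopes with known decay constants; recover initial atom numbers N_i
# from noisy cumulative counts by linear least squares.
lam = np.array([0.05, 0.5])          # decay constants (1/s), assumed known
true_N = np.array([1000.0, 400.0])   # initial atoms of each isotope

t = np.linspace(1, 100, 50)
basis = 1 - np.exp(-np.outer(t, lam))        # design matrix, shape (50, 2)
counts = basis @ true_N
rng = np.random.default_rng(2)
counts_noisy = counts + rng.normal(0, 5, counts.shape)

est, *_ = np.linalg.lstsq(basis, counts_noisy, rcond=None)
print(est)  # estimates close to true_N
```

    Because the fit is linear in the Nᵢ, it can be updated cheaply as each new count arrives, which is what makes a real-time evaluation feasible.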

  11. Analysis of bioethanol samples through Inductively Coupled Plasma Mass Spectrometry with a total sample consumption system

    Science.gov (United States)

    Sánchez, Carlos; Lienemann, Charles-Philippe; Todolí, Jose-Luis

    2016-10-01

    Bioethanol real samples have been directly analyzed through ICP-MS by means of the so-called High Temperature Torch Integrated Sample Introduction System (hTISIS). Because bioethanol samples may contain water, experiments have been carried out to determine the effect of ethanol concentration on the ICP-MS response. The ethanol content studied went from 0 to 50%, because higher alcohol concentrations led to carbon deposits on the ICP-MS interface. The spectrometer's default spray chamber (double pass) equipped with a glass concentric pneumatic micronebulizer was taken as the reference system. Two flow regimes have been evaluated: continuous sample aspiration at 25 μL min⁻¹ and 5 μL air-segmented sample injection. The hTISIS temperature has been shown to be critical; in fact, ICP-MS sensitivity increased with this variable up to 100-200 °C depending on the solution tested. Higher chamber temperatures led to either a drop in signal or a plateau. Compared with the reference system, the hTISIS improved the sensitivities by a factor within the 4 to 8 range, while average detection limits were 6 times lower for the latter device. Regarding the influence of the ethanol concentration on sensitivity, it was observed that an increase in temperature was not enough to eliminate the interferences. It was also necessary to modify the torch position with respect to the ICP-MS interface to overcome them. This fact was likely due to the different extent of ion plasma radial diffusion encountered as a function of the matrix when working at high chamber temperatures. When the torch was moved 1 mm down the plasma axis, ethanolic and aqueous solutions provided statistically equal sensitivities. A preconcentration procedure was applied in order to validate the methodology. It was found that, under optimum conditions from the point of view of matrix effects, recoveries for spiked samples were close to 100%. Furthermore, analytical concentrations for real

  12. Chance constrained problems: penalty reformulation and performance of sample approximation technique

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin

    2012-01-01

    Roč. 48, č. 1 (2012), s. 105-122 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z10750506 Keywords : chance constrained problems * penalty functions * asymptotic equivalence * sample approximation technique * investment problem Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.619, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-chance constrained problems penalty reformulation and performance of sample approximation technique.pdf

  13. Sample summary report for ARG 1 pressure tube sample

    International Nuclear Information System (INIS)

    Belinco, C.

    2006-01-01

    The ARG 1 sample is made from an un-irradiated Zr-2.5% Nb pressure tube. The sample has 103.4 mm ID, 112 mm OD, and approximately 500 mm length. A punch mark was made very close to one end of the sample. The punch mark indicates the 12 o'clock position and also identifies the face of the tube for making all the measurements. The ARG 1 sample contains flaws on the ID and OD surfaces. There was no intentional flaw within the wall of the pressure tube sample. Once the flaws were machined, the pressure tube sample was covered from the outside to hide the OD flaws. Approximately 50 mm of pressure tube length was left open at both ends to facilitate holding the sample in the fixtures for inspection. No flaw was machined in this 50 mm zone on either end of the pressure tube sample. A total of 20 flaws were machined in the ARG 1 sample. Of these, 16 flaws were on the OD surface and the remaining 4 on the ID surface of the pressure tube. The flaws were characterized into various groups such as axial flaws, circumferential flaws, etc.

  14. Estimation of creatinine in Urine sample by Jaffe's method

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Arunkumar, Suja; Sawant, Pramilla D.; Rao, B.B.

    2012-01-01

    In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample would include all the voids collected during a 24-h period; however, as this is technically difficult, overnight urine samples collected by the workers are analyzed. These overnight urine samples are collected over 10-16 h; however, in the absence of any specific information, a 12-h duration is assumed and the observed results are then corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz. measurement of the specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is the final metabolic product of creatine phosphate in the body and is excreted at a steady rate in people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimation of the duration of sample collection. The present study reports the chemical procedure standardized for the estimation of creatinine and its application to urine samples collected from occupational workers. The chemical procedure for estimation of creatinine in bioassay samples was standardized and applied successfully to samples collected from the workers. The creatinine excretion rate observed for these workers is lower than reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and also to study the variability in the creatinine coefficient for the same individual based on the analysis of samples collected for different durations.
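
    Creatinine normalization of an overnight sample reduces to a simple scaling of the measured excretion by the ratio of an assumed daily creatinine output to the creatinine found in the sample. The function and numbers below are illustrative (the nominal daily creatinine value is a generic reference figure, not from this study).

```python
def normalize_to_24h(activity_in_sample, creatinine_in_sample_g,
                     daily_creatinine_g=1.7):
    """Scale a spot/overnight urine activity to a 24-h excretion estimate
    using creatinine as the normalization factor. The default daily
    creatinine excretion is a nominal placeholder value."""
    return activity_in_sample * daily_creatinine_g / creatinine_in_sample_g

# Hypothetical: 10 units of activity in a sample containing 0.85 g creatinine
print(normalize_to_24h(10.0, 0.85))
```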

  15. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  16. New complete sample of identified radio sources. Part 2. Statistical study

    International Nuclear Information System (INIS)

    Soltan, A.

    1978-01-01

    The complete sample of radio sources with known redshifts selected in Paper I is studied. Source counts in the sample and the luminosity-volume test show that both quasars and galaxies are subject to evolution. Luminosity functions for different ranges of redshifts are obtained. Due to many uncertainties, only simplified models of the evolution are tested. An exponential decline of the luminosity with time for all the bright sources is in good agreement with both the luminosity-volume test and the N(S) relation in the entire range of observed flux densities. It is shown that sources in the sample are randomly distributed on scales greater than about 17 Mpc. (author)
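
    The luminosity-volume test mentioned here is the classic ⟨V/Vmax⟩ statistic: for a uniformly distributed, non-evolving population the expectation is 1/2, and significant departures signal evolution. A minimal Euclidean-space sketch with synthetic data (not the paper's sample):

```python
import numpy as np

rng = np.random.default_rng(3)
r_max = 1.0
# Uniform spatial density in a sphere: r^3 is uniform on [0, r_max^3]
r = rng.uniform(0, r_max ** 3, 5000) ** (1 / 3)
v_over_vmax = (r / r_max) ** 3  # enclosed volume / maximum accessible volume

print(v_over_vmax.mean())  # close to 0.5 for a non-evolving population
```

    In a real survey Vmax is set per source by the flux limit; a mean significantly above 0.5, as found for evolving quasar samples, indicates that sources were more numerous or more luminous at larger distances.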

  17. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)
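
    The PSF induced by irregularly spaced Fourier samples with given weights can be sketched in one dimension as psf(x) = Σₖ wₖ exp(i2π fₖ x). The uniform weights below are only a baseline for illustration, not either of the paper's optimized weightings; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
freqs = np.sort(rng.uniform(-10, 10, 64))    # irregular Fourier sample locations
weights = np.ones_like(freqs) / freqs.size   # uniform weights as a baseline

x = np.linspace(-1, 1, 401)
# psf(x) = sum_k w_k exp(i 2 pi f_k x)
psf = (weights[None, :] * np.exp(2j * np.pi * x[:, None] * freqs[None, :])).sum(axis=1)
print(abs(psf[200]))  # peak at x = 0 equals the weight sum
```

    Resolution metrics such as those optimized in the paper are functionals of this PSF: sidelobe energy and mainlobe width both depend on how the weights compensate for the irregular sample spacing.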

  18. Testing results of Monte Carlo sampling processes in MCSAD

    International Nuclear Information System (INIS)

    Pinnera, I.; Cruz, C.; Abreu, Y.; Leyva, A.; Correa, C.; Demydenko, C.

    2009-01-01

    The Monte Carlo Simulation of Atom Displacements (MCSAD) is a code implemented by the authors to simulate the complete process of atom displacement (AD) formation. This code uses the Monte Carlo (MC) method to sample all the processes involved in the transport of gamma and electron radiation through matter. The kernel of the calculations relies on a model based on an algorithm developed by the authors, which first separates multiple electron elastic scattering events from single events at higher scattering angles, and then, among the latter, samples those leading to AD at high transferred atomic recoil energies. Tests have been developed to check the sampling algorithms against the corresponding theoretical distribution functions. Satisfactory results have been obtained, indicating the soundness of the methods and subroutines used in the code. (Author)
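The kind of sampling test described above can be sketched generically: draw from a target distribution with the sampler under test and compare the empirical distribution with the theoretical one. This is a hypothetical example, not MCSAD code; it uses inverse-CDF sampling of an exponential law (a common stand-in for free-path or scattering distributions) and a Kolmogorov-Smirnov-style distance as the pass/fail statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sampler under test: inverse-CDF sampling of Exp(LAM),
# x = -ln(1 - u) / LAM with u uniform on [0, 1).
LAM = 2.0
u = rng.uniform(size=50000)
samples = -np.log(1.0 - u) / LAM

# Compare empirical CDF against the theoretical CDF 1 - exp(-LAM * x).
xs = np.sort(samples)
ecdf = np.arange(1, xs.size + 1) / xs.size
tcdf = 1.0 - np.exp(-LAM * xs)

d_stat = np.max(np.abs(ecdf - tcdf))  # Kolmogorov-Smirnov distance
print(d_stat < 0.02)                  # → True: sampler matches theory
```

For 50 000 draws the KS distance of a correct sampler is typically below 0.01, so a threshold of 0.02 flags genuinely broken sampling routines rather than statistical noise.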

  19. Positive Wigner functions render classical simulation of quantum computation efficient.

    Science.gov (United States)

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
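The core idea above, that a nonnegative Wigner function can be treated as an ordinary probability distribution and sampled classically, can be illustrated with the simplest continuous-variable case. This is a hypothetical sketch, not the paper's algorithm: the vacuum state's Gaussian Wigner function W(x, p) = exp(-(x² + p²))/π is positive everywhere, so sampling phase-space points from it classically reproduces the quadrature (homodyne) measurement statistics.

```python
import numpy as np

rng = np.random.default_rng(3)

# The vacuum Wigner function W(x, p) = exp(-(x^2 + p^2)) / pi is a product
# of two Gaussians with variance 1/2, so it can be sampled directly.
n = 200000
x = rng.normal(0.0, np.sqrt(0.5), n)  # x-quadrature samples
p = rng.normal(0.0, np.sqrt(0.5), n)  # p-quadrature samples

# The x-marginal reproduces the homodyne statistics of the vacuum state:
# quadrature variance 1/2 in this convention.
print(round(x.var(), 1))  # → 0.5
```

Gaussian operations map Gaussian Wigner functions to Gaussian Wigner functions, which is why the circuits covered by the theorem remain classically trackable; a negative Wigner function has no such probability-distribution reading, which is where this simulation strategy breaks down.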

  20. Correlation Functions in Open Quantum-Classical Systems

    OpenAIRE

    Hsieh, Chang-Yu; Kapral, Raymond

    2013-01-01

    Quantum time correlation functions are often the principal objects of interest in experimental investigations of the dynamics of quantum systems. For instance, transport properties, such as diffusion and reaction rate coefficients, can be obtained by integrating these functions. The evaluation of such correlation functions entails sampling from quantum equilibrium density operators and quantum time evolution of operators. For condensed phase and complex systems, where quantum dynamics is diff...
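The route from time correlation functions to transport coefficients mentioned above has a simple classical analogue that can be sketched numerically. This is a hypothetical Green-Kubo illustration, not the paper's quantum-classical method: for an Ornstein-Uhlenbeck velocity process with kT/m = 1 and friction γ = 1, the velocity autocorrelation function is exp(-γt) and its time integral, the diffusion coefficient, is exactly kT/(mγ) = 1.

```python
import numpy as np

rng = np.random.default_rng(4)

# Exact discrete-time Ornstein-Uhlenbeck update for the velocity,
# with kT/m = 1 and friction GAMMA, so that <v(0)v(t)> = exp(-GAMMA*t).
GAMMA, DT, N = 1.0, 0.01, 500_000
a = np.exp(-GAMMA * DT)
noise = rng.normal(0.0, np.sqrt(1.0 - a * a), N)
v = np.empty(N)
v[0] = 0.0
for i in range(1, N):
    v[i] = a * v[i - 1] + noise[i]

# Sample velocity autocorrelation function out to ~8 correlation times.
max_lag = 800
acf = np.array([np.mean(v[: N - k] * v[k:]) for k in range(max_lag)])

# Green-Kubo: D = integral of the autocorrelation (trapezoidal rule).
D = DT * (0.5 * acf[0] + acf[1:].sum())
print(abs(D - 1.0) < 0.2)  # → True: close to the exact D = kT/(m*gamma) = 1
```

The quantum versions of such calculations replace the classical time average with sampling from a quantum equilibrium density operator and quantum time evolution, which is precisely the difficulty the abstract points to for condensed-phase systems.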