WorldWideScience

Sample records for autocorrelation factor analyses

  1. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially......Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA in......[ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between...
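
    A minimal sketch of the plain (non-functional) maximum autocorrelation factor transform that the abstract builds on: factors come from the generalized eigenproblem between the covariance of the data and the covariance of its lag-one differences. The smoothing-spline functional extension of the paper is not reproduced; names and the toy data are illustrative only.

        import numpy as np
        from scipy.linalg import eigh

        def maf(X):
            """Maximum autocorrelation factors of X (n_samples x n_vars) sampled
            on a regular temporal/spatial grid.  Returns projection weights
            (columns of W) and the lag-one autocorrelation of each factor."""
            Xc = X - X.mean(axis=0)
            S = np.cov(Xc, rowvar=False)                  # covariance of the data
            Sd = np.cov(Xc[1:] - Xc[:-1], rowvar=False)   # covariance of differences
            # generalized eigenproblem Sd w = lam S w; small lam <=> high autocorrelation
            lam, W = eigh(Sd, S)
            return W, 1.0 - lam / 2.0

        # toy usage: a smooth 5-variate series (stand-in for sampled spectra/shapes)
        X = np.cumsum(np.random.randn(200, 5), axis=0)
        W, rho = maf(X)
        factors = (X - X.mean(axis=0)) @ W                # first columns: most autocorrelated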

  2. Sum rule and hydrodynamic analyses of the velocity autocorrelation function in strongly coupled plasmas

    International Nuclear Information System (INIS)

    Nagano, Seido; Ichimaru, Setsuo

    1980-01-01

    The memory function for the velocity autocorrelation function in a strongly coupled, one-component plasma is analyzed in the short time and long time domains, respectively, with the aid of the frequency-moment sum rules and the hydrodynamic consideration evoking the idea of the generalized Stokes friction. A series of interpolation schemes with successively improved accuracies are then introduced. Numerical investigations of those interpolation schemes clarify the physical origin of the three different types of the velocity autocorrelation function observed in the molecular dynamics simulation at different regimes of the coupling constant. (author)

  3. Disentangling the effects of forage, social rank, and risk on movement autocorrelation of elephants using Fourier and wavelet analyses.

    Science.gov (United States)

    Wittemyer, George; Polansky, Leo; Douglas-Hamilton, Iain; Getz, Wayne M

    2008-12-09

    The internal state of an individual, as it relates to thirst, hunger, fear, or reproductive drive, can be inferred by referencing points on its movement path to external environmental and sociological variables. Using time-series approaches to characterize autocorrelative properties of step-length movements collated every 3 h for seven free-ranging African elephants, we examined the influence of social rank, predation risk, and seasonal variation in resource abundance on periodic properties of movement. The frequency domain methods of Fourier and wavelet analyses provide compact summaries of temporal autocorrelation and show both strong diurnal and seasonally based periodicities in the step-length time series. This autocorrelation is weaker during the wet season, indicating that random movements are more common when ecological conditions are good. Periodograms of socially dominant individuals are consistent across seasons, whereas subordinate individuals show distinct differences, diverging from those of dominants during the dry season. We link temporally localized statistical properties of movement to landscape features and find that diurnal movement correlation is more common within protected wildlife areas, and multiday movement correlations found among lower ranked individuals are typically outside of protected areas where predation risks are greatest. A frequency-related spatial analysis of movement-step lengths reveals that rest cycles related to the spatial distribution of critical resources (i.e., forage and water) are responsible for creating the observed patterns. Our approach generates unique information regarding the spatial-temporal interplay between environmental and individual characteristics, providing an original approach for understanding the movement ecology of individual animals and the spatial organization of animal populations.

  4. Auto-correlation of journal impact factor for consensus research reporting statements: a cohort study.

    Science.gov (United States)

    Shanahan, Daniel R

    2016-01-01

    journal in which a reporting statement was published was shown to influence the number of citations that statement will gather over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and the impact factor is, in fact, auto-correlated.

  5. Auto-correlation of journal impact factor for consensus research reporting statements: a cohort study

    Directory of Open Access Journals (Sweden)

    Daniel R. Shanahan

    2016-03-01

    The impact factor of the journal in which a reporting statement was published was shown to influence the number of citations that statement will gather over time. Similarly, the number of article accesses also influenced the number of citations, although to a lesser extent than the impact factor. This demonstrates that citation counts are not purely a reflection of scientific merit and the impact factor is, in fact, auto-correlated.

  6. Correlation factor, velocity autocorrelation function and frequency-dependent tracer diffusion coefficient

    NARCIS (Netherlands)

    Beijeren, H. van; Kehr, K.W.

    1986-01-01

    The correlation factor, defined as the ratio between the tracer diffusion coefficient in lattice gases and the diffusion coefficient for a corresponding uncorrelated random walk, is known to assume a very simple form under certain conditions. A simple derivation of this is given with the aid of

  7. Consequences of spatial autocorrelation for niche-based models

    DEFF Research Database (Denmark)

    Segurado, P.; Araújo, Miguel B.; Kunin, W. E.

    2006-01-01

    1.  Spatial autocorrelation is an important source of bias in most spatial analyses. We explored the bias introduced by spatial autocorrelation on the explanatory and predictive power of species' distribution models, and make recommendations for dealing with the problem. 2.  Analyses were based o...

  8. Summary of the analyses for recovery factors

    Science.gov (United States)

    Verma, Mahendra K.

    2017-07-17

    Introduction. In order to determine the hydrocarbon potential of oil reservoirs within the U.S. sedimentary basins for which the carbon dioxide enhanced oil recovery (CO2-EOR) process has been considered suitable, the CO2 Prophet model was chosen by the U.S. Geological Survey (USGS) to be the primary source for estimating recovery-factor values for individual reservoirs. The choice was made because of the model’s reliability and the ease with which it can be used to assess a large number of reservoirs. The other two approaches—the empirical decline curve analysis (DCA) method and a review of published literature on CO2-EOR projects—were deployed to verify the results of the CO2 Prophet model. This chapter discusses the results from CO2 Prophet (chapter B, by Emil D. Attanasi, this report) and compares them with results from decline curve analysis (chapter C, by Hossein Jahediesfanjani) and those reported in the literature for selected reservoirs with adequate data for analyses (chapter D, by Ricardo A. Olea). To estimate the technically recoverable hydrocarbon potential for oil reservoirs where CO2-EOR has been applied, two of the three approaches—CO2 Prophet modeling and DCA—do not include analysis of economic factors, while the third approach—review of published literature—implicitly includes economics. For selected reservoirs, DCA has provided estimates of the technically recoverable hydrocarbon volumes, which, in combination with calculated amounts of original oil in place (OOIP), helped establish incremental CO2-EOR recovery factors for individual reservoirs. The review of published technical papers and reports has provided substantial information on recovery factors for 70 CO2-EOR projects that are either commercially profitable or classified as pilot tests. When comparing the results, it is important to bear in mind the differences and limitations of these three approaches.

  9. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    Science.gov (United States)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
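
    A hedged sketch of the Green-Kubo step mentioned above: the self-diffusion coefficient as one third of the time integral of the velocity autocorrelation function. The DPD integrators themselves are not implemented; the random velocity array is only a placeholder for trajectory output.

        import numpy as np

        def vacf(vel):
            """<v(0).v(t)> averaged over particles and time origins;
            vel has shape (n_steps, n_particles, 3)."""
            n = vel.shape[0]
            return np.array([np.mean(np.sum(vel[:n - lag] * vel[lag:], axis=-1))
                             for lag in range(n)])

        def green_kubo_diffusion(vel, dt):
            """D = (1/3) * integral of <v(0).v(t)> dt (trapezoidal rule)."""
            return np.trapz(vacf(vel), dx=dt) / 3.0

        # placeholder velocities; replace with a DPD trajectory
        vel = np.random.default_rng(0).normal(size=(2000, 100, 3))
        D = green_kubo_diffusion(vel, dt=0.01)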

  10. Parallel auto-correlative statistics with VTK.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.

  11. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  12. Biometric feature extraction using local fractal auto-correlation

    International Nuclear Information System (INIS)

    Chen Xi; Zhang Jia-Shu

    2014-01-01

    Image texture feature extraction is a classical means for biometric recognition. To extract effective texture feature for matching, we utilize local fractal auto-correlation to construct an effective image texture descriptor. Three main steps are involved in the proposed scheme: (i) using two-dimensional Gabor filter to extract the texture features of biometric images; (ii) calculating the local fractal dimension of Gabor feature under different orientations and scales using fractal auto-correlation algorithm; and (iii) linking the local fractal dimension of Gabor feature under different orientations and scales into a big vector for matching. Experiments and analyses show our proposed scheme is an efficient biometric feature extraction approach. (condensed matter: structural, mechanical, and thermal properties)

  13. General simulation algorithm for autocorrelated binary processes.

    Science.gov (United States)

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
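
    The authors' spectrum-based (iterative amplitude-adjusted Fourier transform) algorithm is not reproduced here; the sketch below only illustrates the simpler idea of obtaining an autocorrelated binary signal by clipping a latent Gaussian AR(1) process, which can serve as a baseline when experimenting with prescribed-correlation binary sequences.

        import numpy as np
        from scipy.stats import norm

        def clipped_ar1_binary(n, phi=0.8, p=0.3, seed=0):
            """Binary 0/1 sequence with serial dependence: a latent Gaussian AR(1)
            process with coefficient phi is thresholded at the quantile giving
            marginal probability p of a one."""
            rng = np.random.default_rng(seed)
            z = np.zeros(n)
            eps = rng.normal(scale=np.sqrt(1.0 - phi ** 2), size=n)
            for t in range(1, n):
                z[t] = phi * z[t - 1] + eps[t]
            return (z > norm.ppf(1.0 - p)).astype(int)

        x = clipped_ar1_binary(10_000)
        lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # empirical lag-one autocorrelation
        print(x.mean(), lag1)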

  14. General simulation algorithm for autocorrelated binary processes

    Science.gov (United States)

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.

  15. Linear Prediction Using Refined Autocorrelation Function

    Directory of Open Access Journals (Sweden)

    M. Shahidur Rahman

    2007-07-01

    Full Text Available This paper proposes a new technique for improving the performance of linear prediction analysis by utilizing a refined version of the autocorrelation function. Problems in analyzing voiced speech using linear prediction occur often due to the harmonic structure of the excitation source, which causes the autocorrelation function to be an aliased version of that of the vocal tract impulse response. To estimate the vocal tract characteristics accurately, however, the effect of aliasing must be eliminated. In this paper, we employ a homomorphic deconvolution technique in the autocorrelation domain to eliminate the aliasing effect that occurs due to periodicity. The resulting autocorrelation function of the vocal tract impulse response is found to produce significant improvement in estimating formant frequencies. The accuracy of formant estimation is verified on synthetic vowels for a wide range of pitch frequencies typical for male and female speakers. The validity of the proposed method is also illustrated by inspecting the spectral envelopes of natural speech spoken by a high-pitched female speaker. The synthesis filter obtained by the current method is guaranteed to be stable, which makes the method superior to many of its alternatives.
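
    The paper's contribution is the refined (deconvolved) autocorrelation itself; the sketch below only shows the standard step that follows it, the Levinson-Durbin recursion that turns an autocorrelation sequence into linear prediction coefficients. The white-noise frame is a stand-in for a real speech frame.

        import numpy as np

        def levinson_durbin(r, order):
            """LPC coefficients from autocorrelation values r[0..order].
            Returns a (with a[0] == 1) and the final prediction error."""
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coefficient
                a_prev = a.copy()
                a[i] = k
                a[1:i] += k * a_prev[i - 1:0:-1]
                err *= 1.0 - k * k
            return a, err

        x = np.random.randn(400)                         # stand-in for a speech frame
        r = np.array([x[:len(x) - m] @ x[m:] for m in range(13)])
        a, err = levinson_durbin(r, order=12)            # coefficients of the synthesis filter 1/A(z)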

  16. Autocorrelation in queuing network-type production systems - revisited

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2007-01-01

    , either production managers are missing important aspects in production planning, or the 'realistic' autocorrelation patterns inherent in actual production setups are not like those considered in the literature. In this paper, relevant and 'realistic' types of autocorrelation schemes are characterised...

  17. Design factors analyses of second-loop PRHRS

    Directory of Open Access Journals (Sweden)

    ZHANG Hongyan

    2017-05-01

    Full Text Available In order to study the operating characteristics of a second-loop Passive Residual Heat Removal System (PRHRS), the transient thermal analysis code RELAP5 is used to build simulation models of the main coolant system and the second-loop PRHRS. Transient calculations and comparative analyses under station blackout accident and one-side feed water line break accident conditions are conducted for three critical design factors of the second-loop PRHRS: design capacity, emergency makeup tank and isolation valve opening speed. The impacts of the discussed design factors on the operating characteristics of the second-loop PRHRS are summarized based on calculations and analyses. The analysis results indicate that the system safety and cooling rate should be taken into consideration in designing the PRHRS's capacity, and water injection from the emergency makeup tank to the steam generator can aid system cooling in the event of an accident, and system startup performance can be improved by reducing the opening speed of the isolation valve. The results can provide references for the design of the second-loop PRHRS in nuclear power plants.

  18. A Comparison of Various Forecasting Methods for Autocorrelated Time Series

    Directory of Open Access Journals (Sweden)

    Karin Kandananond

    2012-07-01

    Full Text Available The accuracy of forecasts significantly affects the overall performance of a whole supply chain system. Sometimes, the nature of consumer products might cause difficulties in forecasting future demand because of their complicated structure. In this study, two machine learning methods, artificial neural network (ANN) and support vector machine (SVM), and a traditional approach, the autoregressive integrated moving average (ARIMA) model, were utilized to predict the demand for consumer products. The training data used were the actual demand of six different products from a consumer product company in Thailand. Initially, each set of data was analysed using Ljung‐Box‐Q statistics to test for autocorrelation. Afterwards, each method was applied to different sets of data. The results indicated that the SVM method had a better forecast quality (in terms of MAPE) than ANN and ARIMA in every category of products.

  19. Assessment of smoothed spectra using autocorrelation function

    International Nuclear Information System (INIS)

    Urbanski, P.; Kowalska, E.

    2006-01-01

    Recently, data and signal smoothing became almost standard procedures in spectrometric and chromatographic methods. In radiometry, the main purpose of applying smoothing is to minimise statistical fluctuations and avoid distortion. The aim of the work was to find a qualitative parameter which could be used as a figure of merit for detecting distortion of the smoothed spectra, based on the linear model. It is assumed that as long as the part of the raw spectrum removed by the smoothing procedure (v_s) is of random nature, the smoothed spectrum can be considered as undistorted. Thanks to this feature of the autocorrelation function, drifts of the mean value in the removed noise v_s as well as its periodicity can be more easily detected from the autocorrelogram than from the original data.
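
    A hedged illustration of the idea described above: compute the autocorrelogram of the part removed by smoothing and check that it looks like uncorrelated noise. The moving-average smoother and the synthetic peak-plus-background spectrum are stand-ins, not the procedure of the paper.

        import numpy as np

        def sample_acf(x, max_lag):
            """Biased sample autocorrelation for lags 0..max_lag."""
            x = np.asarray(x, float) - np.mean(x)
            den = x @ x
            return np.array([x[:len(x) - k] @ x[k:] for k in range(max_lag + 1)]) / den

        channels = np.arange(512)
        true_spec = 1000.0 * np.exp(-0.5 * ((channels - 256) / 20.0) ** 2) + 50.0
        raw = np.random.poisson(true_spec).astype(float)     # counting noise

        smoothed = np.convolve(raw, np.ones(9) / 9.0, mode="same")
        removed = raw - smoothed                              # v_s: the part taken out

        acf = sample_acf(removed, max_lag=30)
        # drifts or periodicity in v_s show up as non-negligible values at lags > 0
        print(acf[:5])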

  20. Multivariate Process Control with Autocorrelated Data

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2011-01-01

    As sensor and computer technology continues to improve, it becomes a normal occurrence that we are confronted with high dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control and monitoring. This new high dimensional data...... often exhibit not only cross-correlation among the quality characteristics of interest but also serial dependence as a consequence of high sampling frequency and system dynamics. In practice, the most common method of monitoring multivariate data is through what is called the Hotelling’s T2 statistic...... In this paper, we discuss the effect of autocorrelation (when it is ignored) on multivariate control charts based on these methods and provide some practical suggestions and remedies to overcome this problem....
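
    As a point of reference for the discussion above, a minimal sketch of the Hotelling's T2 statistic computed the usual way, i.e. ignoring serial dependence; the bivariate AR(1) data illustrate the kind of autocorrelated quality characteristics the abstract warns about. Names and parameters are illustrative.

        import numpy as np

        def hotelling_t2(X):
            """T^2 value for each observation in X (n x p), using the sample mean
            and covariance as in-control estimates (serial dependence ignored)."""
            d = X - X.mean(axis=0)
            S_inv = np.linalg.inv(np.cov(X, rowvar=False))
            return np.einsum("ij,jk,ik->i", d, S_inv, d)

        rng = np.random.default_rng(1)
        n = 500
        e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
        X = np.zeros((n, 2))
        for t in range(1, n):
            X[t] = 0.7 * X[t - 1] + e[t]       # cross- and serially correlated data
        t2 = hotelling_t2(X)                    # chart this against a chi-square type limit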

  1. Response predictions using the observed autocorrelation function

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; H. Brodtkorb, Astrid; Jensen, Jørgen Juncher

    2018-01-01

    This article studies a procedure that facilitates short-time, deterministic predictions of the wave-induced motion of a marine vessel, where it is understood that the future motion of the vessel is calculated ahead of time. Such predictions are valuable to assist in the execution of many marine......-induced response in study. Thus, predicted (future) values ahead of time for a given time history recording are computed through a mathematical combination of the sample autocorrelation function and previous measurements recorded just prior to the moment of action. Importantly, the procedure does not need input...... show that predictions can be successfully made in a time horizon corresponding to about 8-9 wave periods ahead of current time (the moment of action)....

  2. Logistic regression for southern pine beetle outbreaks with spatial and temporal autocorrelation

    Science.gov (United States)

    M. L. Gumpertz; C.-T. Wu; John M. Pye

    2000-01-01

    Regional outbreaks of southern pine beetle (Dendroctonus frontalis Zimm.) show marked spatial and temporal patterns. While these patterns are of interest in themselves, we focus on statistical methods for estimating the effects of underlying environmental factors in the presence of spatial and temporal autocorrelation. The most comprehensive available information on...

  3. Genetic, molecular and functional analyses of complement factor I deficiency

    DEFF Research Database (Denmark)

    Nilsson, S.C.; Trouw, L.A.; Renault, N.

    2009-01-01

    Complete deficiency of complement inhibitor factor I (FI) results in secondary complement deficiency due to uncontrolled spontaneous alternative pathway activation leading to susceptibility to infections. Current genetic examination of two patients with near complete FI deficiency and three patie...

  4. Inference for local autocorrelations in locally stationary models.

    Science.gov (United States)

    Zhao, Zhibiao

    2015-04-01

    For non-stationary processes, the time-varying correlation structure provides useful insights into the underlying model dynamics. We study estimation and inference for the local autocorrelation process in locally stationary time series. Our constructed simultaneous confidence band can be used to address important hypothesis testing problems, such as whether the local autocorrelation process is indeed time-varying and whether the local autocorrelation is zero. In particular, our result provides an important generalization of the R function acf() to locally stationary Gaussian processes. Simulation studies and two empirical applications are developed. For the global temperature series, we find that the local autocorrelations are time-varying and have a "V" shape during 1910-1960. For the S&P 500 index, we conclude that the returns satisfy the efficient-market hypothesis whereas the magnitudes of returns show significant local autocorrelations.
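
    A rough numerical counterpart of the local autocorrelation idea (not the paper's estimator or its simultaneous confidence band): estimate the lag-one autocorrelation in a sliding window so that time variation becomes visible. The time-varying AR(1) series is purely illustrative.

        import numpy as np

        def local_lag1_autocorr(x, window=101):
            """Sliding-window estimate of the lag-one autocorrelation."""
            x = np.asarray(x, float)
            half = window // 2
            out = np.full(len(x), np.nan)
            for t in range(half, len(x) - half):
                w = x[t - half:t + half + 1]
                w = w - w.mean()
                out[t] = (w[:-1] @ w[1:]) / (w @ w)
            return out

        rng = np.random.default_rng(2)
        n = 2000
        phi = np.linspace(0.0, 0.8, n)            # dependence strengthens over time
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi[t] * x[t - 1] + rng.normal()
        rho_local = local_lag1_autocorr(x)         # should drift upward along with phi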

  5. Spatial Autocorrelation Patterns of Understory Plant Species in a Subtropical Rainforest at Lanjenchi, Southern Taiwan

    Directory of Open Access Journals (Sweden)

    Su-Wei Fan

    2010-06-01

    Full Text Available Many studies described relationships between plant species and intrinsic or exogenous factors, but few quantified spatial scales of species patterns. In this study, quantitative methods were used to explore the spatial scale of understory species (including resident and transient species), in order to identify the influential factors of species distribution. Resident species (including herbaceous species, climbers and tree ferns < 1 m high) were investigated on seven transects, each 5 meters wide and 300 meters long, at Lanjenchi plot in Nanjenshan Reserve, southern Taiwan. Transient species (seedlings of canopy, subcanopy and shrub species < 1 cm diameter at breast height) were censused in three of the seven transects. The herb coverage and seedling abundance were calculated for each 5 × 5 m quadrat along the transects, and Moran’s I and Galiano’s new local variance (NLV) indices were then used to identify the spatial scale of autocorrelation for each species. Patterns of species abundance of the understory layer varied among species at a fine scale within 50 meters. Resident species showed a higher proportion of significant autocorrelation than the transient species. Species with large size or prolonged fronds or stems tended to show larger scales in autocorrelation. However, dispersal syndromes and fruit types did not relate to any species’ spatial patterns. Several species showed a significant autocorrelation at a 180-meter class which happened to correspond to the local replicates of topographical features in hilltops. The spatial patterns of understory species at Lanjenchi plot are mainly influenced by species’ intrinsic traits and topographical characteristics.

  6. SARDA HITL Preliminary Human Factors Measures and Analyses

    Science.gov (United States)

    Hyashi, Miwa; Dulchinos, Victoria

    2012-01-01

    Human factors data collected during the SARDA HITL Simulation Experiment include a variety of subjective measures, including the NASA TLX, questionnaire items regarding situational awareness, advisory usefulness, UI usability, and controller trust. Preliminary analysis of the TLX data indicates that workload may not be adversely affected by use of the advisories; additionally, the controllers' subjective ratings of the advisories may suggest acceptance of the tool.

  7. Parametric and factor analyses of dynamic scintigraphic studies

    International Nuclear Information System (INIS)

    Surova, H.; Samal, M.; Karny, M.

    1986-01-01

    Processing of dynamic examinations in nuclear medicine is done, as a rule, with regard to regions of interest and dynamic curves or by means of parametric images. The disadvantage of both methods is that they process the summation of all processes in overlapping anatomical structures. This disadvantage is eliminated by processing using factor analysis. A different approach from those used formerly makes it possible to use information relating to both time and space, as well as direct quantification of the results in imp./pix./sec. (author)

  8. Application of principal component and factor analyses in electron spectroscopy

    International Nuclear Information System (INIS)

    Siuda, R.; Balcerowska, G.

    1998-01-01

    Fundamentals of two methods, taken from multivariate analysis and known as principal component analysis (PCA) and factor analysis (FA), are presented. Both methods are well known in chemometrics. Since 1979, when application of the methods to electron spectroscopy was reported for the first time, they have become more and more popular in different branches of electron spectroscopy. The paper presents examples of standard applications of the methods in Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS), and electron energy loss spectroscopy (EELS). The advantages of applying the methods, their potential, as well as their limitations are pointed out. (author)

  9. Balance Maintenance in the Upright Body Position: Analysis of Autocorrelation

    Directory of Open Access Journals (Sweden)

    Stodolka Jacek

    2016-04-01

    Full Text Available The present research aimed to analyze values of the autocorrelation function measured for different time values of ground reaction forces during stable upright standing. It was hypothesized that if the recording of force in time depended on the quality and way of regulating force by the central nervous system (as a regulator), then the application of autocorrelation for time series in the analysis of force changes in time function would allow one to determine regulator properties and its functioning. The study was performed on 82 subjects (students, athletes, senior and junior soccer players and subjects who suffered from lower limb injuries). The research was conducted with the use of two Kistler force plates and was based on measurements of ground reaction forces taken during a 15 s period of standing upright while relaxed. The results of the autocorrelation function were statistically analyzed. The research revealed a significant correlation between a derivative extreme and the velocity of reaching the extreme by the autocorrelation function, described as gradient strength. Low correlation values (all statistically significant) were observed between the time of the autocorrelation curve passing through the 0 axis and the time of reaching the first peak by the said function. Parameters computed on the basis of the autocorrelation function are a reliable means to evaluate the process of flow of stimuli in the nervous system. Significant correlations observed between the parameters of the autocorrelation function indicate that individual parameters provide similar properties of the central nervous system.

  10. Autocorrelation in queuing network type production systems - revisited

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    -production systems (Takahashi and Nakamura, 1998) establishes that autocorrelation definitely plays a non-negligible role in relation to the dimensioning as well as functioning of Kanban-controlled production flow lines. This must logically either imply that production managers are missing an important aspect...... in their production planning reasoning, or that the 'realistic' autocorrelation patterns, inherent in actual production setups, are not like those so far considered in the literature. In this paper, an attempt to characterise relevant and 'realistic' types of autocorrelation schemes as well as their levels...

  11. Estimating the variation, autocorrelation, and environmental sensitivity of phenotypic selection

    NARCIS (Netherlands)

    Chevin, Luis-Miguel; Visser, Marcel E.; Tufto, Jarle

    2015-01-01

    Despite considerable interest in temporal and spatial variation of phenotypic selection, very few methods allow quantifying this variation while correctly accounting for the error variance of each individual estimate. Furthermore, the available methods do not estimate the autocorrelation of

  12. Thirty-two phase sequences design with good autocorrelation ...

    Indian Academy of Sciences (India)

    mum peak aperiodic autocorrelation sidelobe level one are called Barker Sequences. ... the generation and processing of polyphase signals have now become easy ..... Cook C E, Bernfield M 1967 An introduction to theory and application.

  13. Estimating the variation, autocorrelation, and environmental sensitivity of phenotypic selection

    NARCIS (Netherlands)

    Chevin, Luis-Miguel; Visser, Marcel E.; Tufto, Jarle

    Despite considerable interest in temporal and spatial variation of phenotypic selection, very few methods allow quantifying this variation while correctly accounting for the error variance of each individual estimate. Furthermore, the available methods do not estimate the autocorrelation of

  14. Spatial Autocorrelation and Uncertainty Associated with Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Daniel A. Griffith

    2016-06-01

    Full Text Available Virtually all remotely sensed data contain spatial autocorrelation, which impacts upon their statistical features of uncertainty through variance inflation and the compounding of duplicate information. Estimating the nature and degree of this spatial autocorrelation, which is usually positive and very strong, has been hindered by the computational intensity associated with the massive number of pixels in realistically-sized remotely-sensed images, a situation that more recently has changed. Recent advances in spatial statistical estimation theory support the extraction of information and the distilling of knowledge from remotely-sensed images in a way that accounts for latent spatial autocorrelation. This paper summarizes an effective methodological approach to achieve this end, illustrating results with a 2002 remotely sensed image of the Florida Everglades, and simulation experiments. Specifically, the uncertainty of the spatial autocorrelation parameter in a spatial autoregressive model is modeled with a beta-beta mixture approach and is further investigated with three different sampling strategies: coterminous sampling, random sub-region sampling, and increasing domain sub-regions. The results suggest that uncertainty associated with remotely-sensed data should be cast in consideration of spatial autocorrelation. It emphasizes that one remaining challenge is to better quantify the spatial variability of spatial autocorrelation estimates across geographic landscapes.

  15. Complementary Exploratory and Confirmatory Factor Analyses of the French WISC-V: Analyses Based on the Standardization Sample.

    Science.gov (United States)

    Lecerf, Thierry; Canivez, Gary L

    2017-12-28

    Interpretation of the French Wechsler Intelligence Scale for Children-Fifth Edition (French WISC-V; Wechsler, 2016a) is based on a 5-factor model including Verbal Comprehension (VC), Visual Spatial (VS), Fluid Reasoning (FR), Working Memory (WM), and Processing Speed (PS). Evidence for the French WISC-V factorial structure was established exclusively through confirmatory factor analyses (CFAs). However, as recommended by Carroll (1995); Reise (2012), and Brown (2015), factorial structure should derive from both exploratory factor analysis (EFA) and CFA. The first goal of this study was to examine the factorial structure of the French WISC-V using EFA. The 15 French WISC-V primary and secondary subtest scaled scores intercorrelation matrix was used and factor extraction criteria suggested from 1 to 4 factors. To disentangle the contribution of first- and second-order factors, the Schmid and Leiman (1957) orthogonalization transformation (SLT) was applied. Overall, no EFA evidence for 5 factors was found. Results indicated that the g factor accounted for about 67% of the common variance and that the contributions of the first-order factors were weak (3.6 to 11.9%). CFA was used to test numerous alternative models. Results indicated that bifactor models produced better fit to these data than higher-order models. Consistent with previous studies, findings suggested dominance of the general intelligence factor and that users should thus emphasize the Full Scale IQ (FSIQ) when interpreting the French WISC-V. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Coded aperture imagery filtered autocorrelation decoding; Imagerie par ouverture de codage decodage par autocorrelation filtree

    Energy Technology Data Exchange (ETDEWEB)

    Rouyer, A. [CEA Bruyeres-le-Chatel, 91 (France)

    2005-10-15

    Coded aperture imagery is particularly suited for imaging objects emitting penetrating radiation (hard X rays, gamma, neutrons), or for particles with rectilinear trajectories (electrons, protons, alpha particles, etc.). It is used when methods based on classical optical principles (reflection, refraction, diffraction) are invalid, or when the source emission is too weak for the well known pinhole method to give a usable image. The optical system consists of an aperture in an absorbing screen, called the coding aperture, whose transmission is calculated in such a way that the spatial resolution is similar to that of a simple pinhole device, but with a far superior radiation collecting efficiency. We present a new decoding method, called filtered autocorrelation, and illustrate its performances on images obtained with various coding apertures. (author)

  17. Autocorrelation analysis of plasma plume light emissions in deep penetration laser welding of steel

    Czech Academy of Sciences Publication Activity Database

    Mrňa, Libor; Šarbort, Martin; Řeřucha, Šimon; Jedlička, Petr

    2017-01-01

    Vol. 29, No. 1 (2017), pp. 1-10, Article No. 012009. ISSN 1042-346X. R&D Projects: GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01. Institutional support: RVO:68081731. Keywords: laser welding * plasma plume * light emissions * autocorrelation analysis * weld depth. Subject RIV: BH - Optics, Masers, Lasers. OECD field: Optics (including laser optics and quantum optics). Impact factor: 1.492, year: 2016

  18. Spatial autocorrelation analysis of tourist arrivals using municipal data: A Serbian example

    Directory of Open Access Journals (Sweden)

    Stankov Uglješa

    2017-01-01

    Full Text Available Spatial autocorrelation methodologies can be used to reveal patterns and temporal changes of different spatial variables, including tourism arrivals. The research adopts a GIS-based approach to spatially analyse tourist arrivals in Serbia, using Global Moran's I and Anselin's Local Moran's I statistics applied at the level of municipalities. To assess the feasibility of this approach, the article discusses spatial changes of tourist arrivals in order to identify potentially significant trends of interest for tourism development policy in Serbia. There is a significant spatial inequality in the distribution of tourism arrivals in Serbia that is not adequately addressed in tourism development plans. The results of global autocorrelation suggest the existence of low and decreasing spatial clustering for domestic tourist arrivals and high, relatively stable spatial clustering for international tourists. Local autocorrelation statistics revealed different patterns for domestic and international tourist arrivals. These results are discussed with regard to their significance for tourism development policy in Serbia.

  19. Binary codes with impulse autocorrelation functions for dynamic experiments

    International Nuclear Information System (INIS)

    Corran, E.R.; Cummins, J.D.

    1962-09-01

    A series of binary codes exists whose autocorrelation functions approximate an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
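
    A small sketch of the 'cyclic' autocorrelation computation the report describes, done here with an FFT; the length-13 Barker code is used as a familiar stand-in for the 1019-bit code listed in the report.

        import numpy as np

        def cyclic_acf(code):
            """Periodic (cyclic) autocorrelation of a +/-1 code via the FFT."""
            spec = np.fft.fft(np.asarray(code, float))
            return np.real(np.fft.ifft(spec * np.conj(spec)))

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)
        print(cyclic_acf(barker13))   # peak of 13 at zero lag, small sidelobes elsewhere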

  20. What autocorrelation tells us about motor variability: insights from dart throwing.

    Directory of Open Access Journals (Sweden)

    Robert J van Beers

    Full Text Available In sports such as golf and darts it is important that one can produce ballistic movements of an object towards a goal location with as little variability as possible. A factor that influences this variability is the extent to which motor planning is updated from movement to movement based on observed errors. Previous work has shown that for reaching movements, our motor system uses the learning rate (the proportion of an error that is corrected for in the planning of the next movement that is optimal for minimizing the endpoint variability. Here we examined whether the learning rate is hard-wired and therefore automatically optimal, or whether it is optimized through experience. We compared the performance of experienced dart players and beginners in a dart task. A hallmark of the optimal learning rate is that the lag-1 autocorrelation of movement endpoints is zero. We found that the lag-1 autocorrelation of experienced dart players was near zero, implying a near-optimal learning rate, whereas it was negative for beginners, suggesting a larger than optimal learning rate. We conclude that learning rates for trial-by-trial motor learning are optimized through experience. This study also highlights the usefulness of the lag-1 autocorrelation as an index of performance in studying motor-skill learning.

  1. Estimating the Autocorrelated Error Model with Trended Data: Further Results,

    Science.gov (United States)

    1979-11-01

    Perhaps the most serious deficiency of OLS in the presence of autocorrelation is not inefficiency but bias in its estimated standard errors--a bias...k for all t has variance var(b) = σ²/(Tk²). This refutes Maeshiro’s (1976) conjecture that "an estimator utilizing relevant extraneous information

  2. Waveguide superconducting single-photon autocorrelators for quantum photonic applications

    NARCIS (Netherlands)

    Sahin, D.; Gaggero, A.; Frucci, G.; Jahanmirinejad, S.; Sprengers, J.P.; Mattioli, F.; Leoni, R.; Beetz, J.; Lermer, M.; Kamp, M.; Höfling, S.; Fiore, A.; Hasan, Z.U.; Hemmer, P.R.; Lee, H.; Santori, C.M.

    2013-01-01

    We report a novel component for integrated quantum photonic applications, a waveguide single-photon autocorrelator. It is based on two superconducting nanowire detectors patterned onto the same GaAs ridge waveguide. Combining the electrical output of the two detectors in a correlation card enables

  3. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...

  4. New approaches for calculating Moran's index of spatial autocorrelation.

    Science.gov (United States)

    Chen, Yanguang

    2013-01-01

    Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
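
    For reference, the textbook form of the global index discussed above, written directly in matrix terms; the tiny rook-adjacency example is illustrative and does not reproduce the paper's reformulation or its Chinese-cities case study.

        import numpy as np

        def morans_i(x, W):
            """Global Moran's I: x are attribute values (length n), W is an
            n x n spatial weight matrix (not necessarily row-standardized)."""
            z = np.asarray(x, float) - np.mean(x)
            W = np.asarray(W, float)
            return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

        # four sites on a line with rook (shared-border) adjacency weights
        W = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], float)
        print(morans_i([1.0, 2.0, 3.0, 4.0], W))   # positive: neighbours are similar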

  5. New approaches for calculating Moran's index of spatial autocorrelation.

    Directory of Open Access Journals (Sweden)

    Yanguang Chen

    Full Text Available Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.

  6. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...

  7. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  8. Insight into the Stigma of Suicide Loss Survivors: Factor Analyses of Family Stereotypes, Prejudices, and Discriminations.

    Science.gov (United States)

    Corrigan, Patrick W; Sheehan, Lindsay; Al-Khouja, Maya A; Lewy, Stanley; Major, Deborah R; Mead, Jessica; Redmon, Megghun; Rubey, Charles T; Weber, Stephanie

    2018-01-01

    Families of individuals who die by suicide report public stigma that threatens their well-being. This study used a community-based participatory (CBPR) approach to describe a factor structure for the family stigma of suicide. Candidate items (n = 82) from a previous qualitative study were presented in an online survey format. Members of the public (n = 232) indicated how much they thought items represented public views and behaviors towards family members who lost a loved one to suicide. Factor analyses revealed two factors for stereotypes (dysfunctional, blameworthy), one factor for prejudice (fear and distrust), and three factors for discrimination (exclusion, secrecy, and avoidance).

  9. Exploring the effects of spatial autocorrelation when identifying key drivers of wildlife crop-raiding.

    Science.gov (United States)

    Songhurst, Anna; Coulson, Tim

    2014-03-01

    Few universal trends in spatial patterns of wildlife crop-raiding have been found. Variations in wildlife ecology and movements, and human spatial use have been identified as causes of this apparent unpredictability. However, varying spatial patterns of spatial autocorrelation (SA) in human-wildlife conflict (HWC) data could also contribute. We explicitly explore the effects of SA on wildlife crop-raiding data in order to facilitate the design of future HWC studies. We conducted a comparative survey of raided and nonraided fields to determine key drivers of crop-raiding. Data were subsampled at different spatial scales to select independent raiding data points. The model derived from all data was fitted to subsample data sets. Model parameters from these models were compared to determine the effect of SA. Most methods used to account for SA in data attempt to correct for the change in P-values; yet, by subsampling data at broader spatial scales, we identified changes in regression estimates. We consequently advocate reporting both model parameters across a range of spatial scales to help biological interpretation. Patterns of SA vary spatially in our crop-raiding data. Spatial distribution of fields should therefore be considered when choosing the spatial scale for analyses of HWC studies. Robust key drivers of elephant crop-raiding included raiding history of a field and distance of field to a main elephant pathway. Understanding spatial patterns and determining reliable socio-ecological drivers of wildlife crop-raiding is paramount for designing mitigation and land-use planning strategies to reduce HWC. Spatial patterns of HWC are complex, determined by multiple factors acting at more than one scale; therefore, studies need to be designed with an understanding of the effects of SA. Our methods are accessible to a variety of practitioners to assess the effects of SA, thereby improving the reliability of conservation management actions.

  10. A quantitative method to analyse an open answer questionnaire: A case study about the Boltzmann Factor

    International Nuclear Information System (INIS)

    Battaglia, Onofrio Rosario; Di Paola, Benedetto

    2015-01-01

    This paper describes a quantitative method to analyse an open-ended questionnaire. Student responses to a specially designed written questionnaire are quantitatively analysed by a non-hierarchical clustering technique, the k-means method. Through this we can characterise students' behaviour with respect to their expertise in formulating explanations for phenomena or processes and/or using a given model in different contexts. The physics topic is the Boltzmann Factor, which allows the students to have a unifying view of different phenomena in different contexts.

  11. Momentum autocorrelation function of a classic diatomic chain

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Ming B., E-mail: mingbyu@gmail.com

    2016-10-23

    A classical harmonic diatomic chain is studied using the recurrence relations method. The momentum autocorrelation function results from contributions of acoustic and optical branches. By use of the convolution theorem, analytical expressions for the acoustic and optical contributions are derived as even-order Bessel function expansions with coefficients given in terms of integrals of elliptic functions on the real axis and on a contour parallel to the imaginary axis, respectively. - Highlights: • Momentum autocorrelation function of a classic diatomic chain is studied. • It is derived as an even-order Bessel function expansion using the convolution theorem. • The expansion coefficients are integrals of elliptic functions. • The addition theorem is used to reduce a complex elliptic function to a complex sum of real ones.

  12. Improving control room design and operations based on human factors analyses, or how much human factors upgrade is enough?

    Energy Technology Data Exchange (ETDEWEB)

    HIGGINS, J.C.; OHARA, J.M.; ALMEIDA, P.

    2002-09-19

    The Jose Cabrera nuclear power plant is a one-loop Westinghouse pressurized water reactor. In the control room, the displays and controls used by operators for the emergency operating procedures are distributed on front and back panels. This configuration contributed to risk in the probabilistic safety assessment where important operator actions are required. This study was undertaken to evaluate the impact of the design on crew performance and plant safety and to develop design improvements. Five potential effects were identified. Then NUREG-0711 [1] programmatic human factors analyses were conducted to systematically evaluate the control room layout to determine if there was evidence of the potential effects. These analyses included operating experience review, PSA review, task analyses, and walkthrough simulations. Based on the results of these analyses, a variety of control room modifications were identified. From the alternatives, a selection was made that provided a reasonable balance between performance, risk and economics, and modifications were made to the plant.

  13. The Chinese Family Assessment Instrument (C-FAI): Hierarchical Confirmatory Factor Analyses and Factorial Invariance

    Science.gov (United States)

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2010-01-01

    Objective: This paper examines the dimensionality and factorial invariance of the Chinese Family Assessment Instrument (C-FAI) using multigroup confirmatory factor analyses (MCFAs). Method: A total of 3,649 students responded to the C-FAI in a community survey. Results: Results showed that there are five dimensions of the C-FAI (communication,…

  14. Stress intensity factor analyses of surface cracks in three-dimensional structures

    International Nuclear Information System (INIS)

    Miyazaki, Noriyuki; Shibata, Katsuyuki; Watanabe, Takayuki; Tagata, Kazunori.

    1983-11-01

    The stress intensity factor analyses of surface cracks in various three-dimensional structures were performed using the finite element computer program EPAS-J1. The results obtained by EPAS-J1 were compared with other finite element solutions or results obtained by the simplified estimation methods. Among the simplified estimation methods, the equations proposed by Newman and Raju give the distributions of the stress intensity factor along a crack front, which were compared with the result obtained by EPAS-J1. It was confirmed by comparing the results that EPAS-J1 gives reasonable stress intensity factors of surface cracks in three-dimensional structures. (author)

  15. A Novel Acoustic Liquid Level Determination Method for Coal Seam Gas Wells Based on Autocorrelation Analysis

    Directory of Open Access Journals (Sweden)

    Ximing Zhang

    2017-11-01

    Full Text Available In coal seam gas (CSG) wells, water is periodically removed from the wellbore in order to keep the bottom-hole flowing pressure at low levels, facilitating the desorption of methane gas from the coal bed. In order to calculate the gas flow rate and further optimize well performance, it is necessary to accurately monitor the liquid level in real-time. This paper presents a novel method based on autocorrelation function (ACF) analysis for determining the liquid level in CSG wells under intense noise conditions. The method involves the calculation of the acoustic travel time in the annulus and processing the autocorrelation signal in order to extract the weak echo under high background noise. In contrast to previous works, the non-linear dependence of the acoustic velocity on temperature and pressure is taken into account. To locate the liquid level of a coal seam gas well the travel time is computed iteratively with the non-linear velocity model. Afterwards, the proposed method is validated using experimental laboratory investigations that have been developed for liquid level detection under two scenarios, representing the combination of low pressure, weak signal, and intense noise generated by gas flowing and leakage. By adopting an evaluation indicator called the Crest Factor, the results have shown the superiority of the ACF-based method compared to Fourier filtering (FFT). In the two scenarios, the maximal measurement error from the proposed method was 0.34% and 0.50%, respectively. The latent periodic characteristic of the reflected signal can be extracted by the ACF-based method even when the noise is larger than 1.42 Pa, which is impossible for FFT-based de-noising. A case study focused on a specific CSG well is presented to illustrate the feasibility of the proposed approach, and also to demonstrate that signal processing with autocorrelation analysis can improve the sensitivity of the detection system.
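
    A simplified sketch of the ACF step described above: the dominant non-zero-lag peak of the autocorrelation of a noisy pulse-plus-echo record is read as the round-trip travel time and converted to a depth. The constant sound speed, sampling rate, and synthetic pulse/echo are assumptions for illustration; the paper itself uses a pressure- and temperature-dependent velocity model.

        import numpy as np

        def acf_echo_depth(signal, fs, sound_speed, min_lag_s=0.1):
            """Depth estimate from the periodicity of a pulse-plus-echo record."""
            x = np.asarray(signal, float) - np.mean(signal)
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
            start = int(min_lag_s * fs)                          # skip the zero-lag lobe
            lag = start + np.argmax(acf[start:])
            return sound_speed * (lag / fs) / 2.0                # half the round trip

        fs = 2000.0
        t = np.arange(int(2.0 * fs)) / fs
        pulse = np.exp(-((t - 0.05) / 0.02) ** 2)
        echo = 0.5 * np.exp(-((t - 0.85) / 0.02) ** 2)           # 0.8 s round trip
        sig = pulse + echo + np.random.normal(scale=0.3, size=t.size)
        print(acf_echo_depth(sig, fs, sound_speed=350.0))        # ~0.8 * 350 / 2 = 140 m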

  16. On the Decay Ratio Determination in BWR Stability Analysis by Auto-Correlation Function Techniques

    International Nuclear Information System (INIS)

    Behringer, K.; Hennig, D.

    2002-11-01

    A novel auto-correlation function (ACF) method has been investigated for determining the oscillation frequency and the decay ratio in BWR stability analyses. The neutron signals are band-pass filtered to separate the oscillation peak in the power spectral density (PSD) from background. Two linear second-order oscillation models are considered. These models, corrected for signal filtering and including a background term under the peak in the PSD, are then least-squares fitted to the ACF of the previously filtered neutron signal, in order to determine the oscillation frequency and the decay ratio. Our method uses fast Fourier transform techniques with signal segmentation for filtering and ACF estimation. Gliding 'short-term' ACF estimates on a record allow the evaluation of uncertainties. Numerical results are given which have been obtained from neutron data of the recent Forsmark I and Forsmark II NEA benchmark project. Our results are compared with those obtained by other participants in the benchmark project. The present PSI report is an extended version of the publication K. Behringer, D. Hennig 'A novel auto-correlation function method for the determination of the decay ratio in BWR stability studies' (Behringer, Hennig, 2002)
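
A minimal sketch of the general decay-ratio idea described in this record: fit a damped-cosine second-order model to a measured ACF and form the ratio of consecutive maxima. The band-pass filtering, background term and segmentation of the reported method are omitted, and the synthetic ACF parameters are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic ACF with a known decay ratio; all parameters are illustrative.
rng = np.random.default_rng(0)
fs, f0_true, dr_true = 25.0, 0.5, 0.7      # Hz sampling, oscillation freq, DR
lam_true = -f0_true * np.log(dr_true)      # damping rate so that DR = exp(-lam/f0)
tau = np.arange(0, 20, 1 / fs)
acf_meas = np.exp(-lam_true * tau) * np.cos(2 * np.pi * f0_true * tau) \
           + rng.normal(0, 0.02, tau.size)

def acf_model(tau, lam, f0):
    # second-order (damped cosine) oscillation model for the ACF
    return np.exp(-lam * tau) * np.cos(2 * np.pi * f0 * tau)

(lam_hat, f0_hat), _ = curve_fit(acf_model, tau, acf_meas, p0=(0.1, 0.4))
decay_ratio = np.exp(-lam_hat / f0_hat)    # ratio of consecutive ACF maxima
print(f"frequency ~ {f0_hat:.3f} Hz, decay ratio ~ {decay_ratio:.3f}")
```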

  17. Round Robin Analyses on Stress Intensity Factors of Inner Surface Cracks in Welded Stainless Steel Pipes

    Directory of Open Access Journals (Sweden)

    Chang-Gi Han

    2016-12-01

    Full Text Available Austenitic stainless steels (ASSs) are widely used for nuclear pipes as they exhibit a good combination of mechanical properties and corrosion resistance. However, high tensile residual stresses may occur in ASS welds because postweld heat treatment is not generally conducted in order to avoid sensitization, which causes stress corrosion cracking. In this study, round robin analyses on stress intensity factors (SIFs) were carried out to examine the appropriateness of structural integrity assessment methods for ASS pipe welds with two types of circumferential cracks. Typical stress profiles were generated from finite element analyses by considering residual stresses and normal operating conditions. Then, SIFs of cracked ASS pipes were determined by analytical equations represented in fitness-for-service assessment codes as well as by reference finite element analyses. Discrepancies in the estimated SIFs among round robin participants were confirmed, attributable to different assessment procedures and relevant considerations, as well as to mistakes by participants. The effects of uncertainty factors on SIFs were deduced from sensitivity analyses and, based on the similarity and conservatism compared with detailed finite element analysis results, the R6 code, taking into account the applied internal pressure and combination of stress components, was recommended as the optimum procedure for SIF estimation.

  18. Psychosocial Factors Related to Lateral and Medial Epicondylitis: Results From Pooled Study Analyses.

    Science.gov (United States)

    Thiese, Matthew S; Hegmann, Kurt T; Kapellusch, Jay; Merryweather, Andrew; Bao, Stephen; Silverstein, Barbara; Tang, Ruoliang; Garg, Arun

    2016-06-01

    The goal is to assess the relationships between psychosocial factors and both medial and lateral epicondylitis after adjustment for personal and job physical exposures. One thousand eight hundred twenty-four participants were included in pooled analyses. Ten psychosocial factors were assessed. One hundred twenty-one (6.6%) and 34 (1.9%) participants had lateral and medial epicondylitis, respectively. Nine of the psychosocial factors assessed had significant trends or associations with lateral epicondylitis, the largest of which was between physical exhaustion after work and lateral epicondylitis with an odds ratio of 7.04 (95% confidence interval = 2.02 to 24.51). Eight psychosocial factors had significant trends or relationships with medial epicondylitis, with the largest being between mental exhaustion after work with an odds ratio of 6.51 (95% confidence interval = 1.57 to 27.04). The breadth and strength of these associations after adjustment for confounding factors demonstrate meaningful relationships that need to be further investigated in prospective analyses.

  19. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity.

    Directory of Open Access Journals (Sweden)

    James D Englehardt

    Full Text Available Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study.

  20. Size determinations of plutonium colloids using autocorrelation photon spectroscopy

    International Nuclear Information System (INIS)

    Triay, I.R.; Rundberg, R.S.; Mitchell, A.J.; Ott, M.A.; Hobart, D.E.; Palmer, P.D.; Newton, T.W.; Thompson, J.L.

    1989-01-01

    Autocorrelation Photon Spectroscopy (APS) is a light-scattering technique utilized to determine the size distribution of colloidal suspensions. The capabilities of the APS methodology have been assessed by analyzing colloids of known sizes. Plutonium(IV) colloid samples were prepared by a variety of methods including: dilution; peptization; and alpha-induced auto-oxidation of Pu(III). The size of these Pu colloids was analyzed using APS. The sizes determined for the Pu colloids studied varied from 1 to 370 nanometers. 7 refs., 5 figs., 3 tabs

  1. Spectral velocity estimation using autocorrelation functions for sparse data sets

    DEFF Research Database (Denmark)

    2006-01-01

    The distribution of velocities of blood or tissue is displayed using ultrasound scanners by finding the power spectrum of the received signal. This is currently done by making a Fourier transform of the received signal and then showing spectra in an M-mode display. It is desired to show a B-mode image for orientation, and data for this has to be acquired interleaved with the flow data. The power spectrum can be calculated from the Fourier transform of the autocorrelation function Ry (k), where its span of lags k is given by the number of emissions N in the data segment for velocity estimation...

  2. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

    Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We will show in this work that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signal's first and last coefficients. We give an analytical expression for this constant by using spectral bounds of Vandermonde matrices.

  3. Autocorrelation based reconstruction of two-dimensional binary objects

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.; Castaneda, R.

    2005-10-01

    A method for reconstructing two-dimensional binary objects from their autocorrelation function is discussed. The objects consist of a finite set of identical elements. The reconstruction algorithm is based on the concept of a class of element pairs, defined as the set of element pairs with the same separation vector. This concept makes it possible to resolve the redundancy introduced by the element pairs of each class. It is also shown that different objects, consisting of an equal number of elements and the same classes of pairs, provide Fraunhofer diffraction patterns with identical intensity distributions. However, the method predicts all the possible objects that produce the same Fraunhofer pattern. (author)

  4. Higher- and Lower-Order Factor Analyses of the Temperament in Middle Childhood Questionnaire

    Science.gov (United States)

    Kotelnikova, Yuliya; Olino, Thomas M.; Klein, Daniel N.; Mackrell, Sarah V.M.; Hayden, Elizabeth P.

    2017-01-01

    The Temperament in Middle Childhood Questionnaire (TMCQ; Simonds & Rothbart, 2004) is a widely used parent-report measure of temperament. However, neither its lower- nor higher-order structures have been tested via a bottom-up, empirically based approach. We conducted higher- and lower-order exploratory factor analyses (EFAs) of the TMCQ in a large (N = 654) sample of 9-year-olds. Item-level EFAs identified 92 items as suitable (i.e., with loadings ≥.40) for constructing lower-order factors, only half of which resembled a TMCQ scale posited by the measure’s authors. Higher-order EFAs of the lower-order factors showed that a three-factor structure (Impulsivity/Negative Affectivity, Negative Affectivity, and Openness/Assertiveness) was the only admissible solution. Overall, many TMCQ items did not load well onto a lower-order factor. In addition, only three factors, which did not show a clear resemblance to Rothbart’s four-factor model of temperament in middle childhood, were needed to account for the higher-order structure of the TMCQ. PMID:27002124

  5. Break point on the auto-correlation function of Elsässer variable z- in the super-Alfvénic solar wind fluctuations

    Science.gov (United States)

    Wang, X.; Tu, C. Y.; He, J.; Wang, L.

    2017-12-01

    The nature of the Elsässer variable z- observed in the Alfvénic solar wind has long been debated. It is widely believed that z- represents inward propagating Alfvén waves and undergoes non-linear interaction with z+ to produce the energy cascade. However, z- variations sometimes show the character of convective structures. Here we present a new data analysis of z- autocorrelation functions to get some definite information on its nature. We find that there is usually a break point on the z- auto-correlation function when the fluctuations show nearly pure Alfvénicity. The break point observed by the Helios 2 spacecraft near 0.3 AU is at the first time lag (~81 s), where the autocorrelation coefficient is smaller than the zero-lag value by more than 0.4. The autocorrelation function breaks also appear in the WIND observations near 1 AU. The break separates the z- autocorrelation function into two parts, a fast decreasing part and a slowly decreasing part, which cannot be described as a whole by an exponential formula. The breaks in the z- autocorrelation function may indicate that the z- time series are composed of high-frequency white noise and low-frequency apparent structures, which correspond to the flat and steep parts of the function, respectively. This explanation is supported by a simple test with a superposition of an artificial random data series and a smoothed random data series. Since in many cases the z- autocorrelation functions do not decrease very quickly at large time lags and cannot be considered to be of the Lanczos type, no reliable value for the correlation time can be derived. Our results show that in these cases with high Alfvénicity, z- should not be considered an inward-propagating wave. The power-law spectrum of z+ should instead be produced by the fluid turbulence cascade process described by Kolmogorov.
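
A small illustration of the "simple test" mentioned in this record, under assumed parameters: the ACF of white noise superposed on a smoothed random series drops sharply at the first lag and then decays slowly, reproducing the kind of break point described.

```python
import numpy as np

# White noise plus a smoothed (slowly varying) random series; illustrative only.
rng = np.random.default_rng(1)
n, w = 20000, 200
white = rng.normal(0, 1.0, n)
slow = np.convolve(rng.normal(0, 1.0, n), np.ones(w), mode="same") / np.sqrt(w)
z = white + slow                               # both parts have unit variance

zc = z - z.mean()
c = np.fft.irfft(np.abs(np.fft.rfft(zc, n=2 * n)) ** 2)[:n]
r = c / c[0]
print("ACF at lags 0, 1, 10, 100:", np.round(r[[0, 1, 10, 100]], 3))
# Expect ~1.0, a sharp drop to ~0.5 at the first lag (the break point),
# then a slowly decreasing tail contributed by the smoothed component.
```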

  6. ISAR Imaging of Ship Targets Based on an Integrated Cubic Phase Bilinear Autocorrelation Function

    Directory of Open Access Journals (Sweden)

    Jibin Zheng

    2017-03-01

    Full Text Available For inverse synthetic aperture radar (ISAR) imaging of a ship target moving with ocean waves, the image constructed with the standard range-Doppler (RD) technique is blurred and the range-instantaneous-Doppler (RID) technique has to be used to improve the image quality. In this paper, azimuth echoes in a range cell of the ship target are modeled as noisy multicomponent cubic phase signals (CPSs) after the motion compensation and a RID ISAR imaging algorithm is proposed based on the integrated cubic phase bilinear autocorrelation function (ICPBAF). The ICPBAF is bilinear and based on the two-dimensionally coherent energy accumulation. Compared to five other estimation algorithms, the ICPBAF can acquire higher cross term suppression and anti-noise performance with a reasonable computational cost. Through simulations and analyses with the synthetic model and real radar data, we verify the effectiveness of the ICPBAF and corresponding RID ISAR imaging algorithm.

  7. Explaining local-scale species distributions: relative contributions of spatial autocorrelation and landscape heterogeneity for an avian assemblage.

    Directory of Open Access Journals (Sweden)

    Brady J Mattsson

    Full Text Available Understanding interactions between mobile species distributions and landcover characteristics remains an outstanding challenge in ecology. Multiple factors could explain species distributions including endogenous evolutionary traits leading to conspecific clustering and exogenous habitat features that support life history requirements. Birds are a useful taxon for examining hypotheses about the relative importance of these factors among species in a community. We developed a hierarchical Bayes approach to model the relationships between bird species occupancy and local landcover variables accounting for spatial autocorrelation, species similarities, and partial observability. We fit alternative occupancy models to detections of 90 bird species observed during repeat visits to 316 point-counts forming a 400-m grid throughout the Patuxent Wildlife Research Refuge in Maryland, USA. Models with landcover variables performed significantly better than our autologistic and null models, supporting the hypothesis that local landcover heterogeneity is important as an exogenous driver for species distributions. Conspecific clustering alone was a comparatively poor descriptor of local community composition, but there was evidence for spatial autocorrelation in all species. Considerable uncertainty remains as to whether landcover combined with spatial autocorrelation is most parsimonious for describing bird species distributions at a local scale. Spatial structuring may be weaker at intermediate scales within which dispersal is less frequent, information flows are localized, and landcover types become spatially diversified and therefore exhibit little aggregation. Examining such hypotheses across species assemblages contributes to our understanding of community-level associations with conspecifics and landscape composition.

  8. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    Science.gov (United States)

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.

  9. Assessment of drug-induced arrhythmic risk using limit cycle and autocorrelation analysis of human iPSC-cardiomyocyte contractility

    Energy Technology Data Exchange (ETDEWEB)

    Kirby, R. Jason [Sanford Burnham Prebys Medical Discovery Institute, Conrad Prebys Center for Chemical Genomics, 6400 Sanger Rd, Orlando, FL 32827 (United States); Qi, Feng [Sanford Burnham Prebys Medical Discovery Institute, Applied Bioinformatics Facility, 6400 Sanger Rd, Orlando, FL 32827 (United States); Phatak, Sharangdhar; Smith, Layton H. [Sanford Burnham Prebys Medical Discovery Institute, Conrad Prebys Center for Chemical Genomics, 6400 Sanger Rd, Orlando, FL 32827 (United States); Malany, Siobhan, E-mail: smalany@sbpdiscovery.org [Sanford Burnham Prebys Medical Discovery Institute, Conrad Prebys Center for Chemical Genomics, 6400 Sanger Rd, Orlando, FL 32827 (United States)

    2016-08-15

    Cardiac safety assays incorporating label-free detection of human stem-cell derived cardiomyocyte contractility provide human relevance and medium throughput screening to assess compound-induced cardiotoxicity. In an effort to provide quantitative analysis of the large kinetic datasets resulting from these real-time studies, we applied bioinformatic approaches based on nonlinear dynamical system analysis, including limit cycle analysis and autocorrelation function, to systematically assess beat irregularity. The algorithms were integrated into a software program to seamlessly generate results for 96-well impedance-based data. Our approach was validated by analyzing dose- and time-dependent changes in beat patterns induced by known proarrhythmic compounds and screening a cardiotoxicity library to rank order compounds based on their proarrhythmic potential. We demonstrate a strong correlation for dose-dependent beat irregularity monitored by electrical impedance and quantified by autocorrelation analysis to traditional manual patch clamp potency values for hERG blockers. In addition, our platform identifies non-hERG blockers known to cause clinical arrhythmia. Our method provides a novel suite of medium-throughput quantitative tools for assessing compound effects on cardiac contractility and predicting compounds with potential proarrhythmia and may be applied to in vitro paradigms for pre-clinical cardiac safety evaluation. - Highlights: • Impedance-based monitoring of human iPSC-derived cardiomyocyte contractility • Limit cycle analysis of impedance data identifies aberrant oscillation patterns. • Nonlinear autocorrelation function quantifies beat irregularity. • Identification of hERG and non-hERG inhibitors with known risk of arrhythmia • Automated software processes limit cycle and autocorrelation analyses of 96w data.
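
A hedged sketch of how an autocorrelation measure can score beat irregularity in the spirit of this record; the synthetic "impedance" beat trains, pulse shapes and jitter levels below are hypothetical and are not the authors' software or data.

```python
import numpy as np

# Synthetic "impedance" beat trains with small vs. large beat-to-beat jitter.
rng = np.random.default_rng(4)
fs, n_beats = 50, 60                            # Hz, number of beats

def beat_trace(jitter):
    intervals = 1.0 + rng.normal(0, jitter, n_beats)   # nominal 1 s beat period
    t_beats = np.cumsum(intervals)
    t = np.arange(0, t_beats[-1] + 1, 1 / fs)
    x = np.zeros_like(t)
    for tb in t_beats:                          # Gaussian-shaped contraction pulses
        x += np.exp(-0.5 * ((t - tb) / 0.08) ** 2)
    return x

def acf_peak(x, period=1.0):
    xc = x - x.mean()
    c = np.correlate(xc, xc, mode="full")[xc.size - 1:]
    c /= c[0]
    lag = int(period * fs)
    return c[lag - 3:lag + 4].max()             # ACF height near one beat period

for label, jitter in (("regular", 0.01), ("irregular", 0.25)):
    print(label, f"ACF peak at one beat period: {acf_peak(beat_trace(jitter)):.2f}")
# A regular train keeps a high peak; an arrhythmic train loses it.
```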

  10. Assessment of drug-induced arrhythmic risk using limit cycle and autocorrelation analysis of human iPSC-cardiomyocyte contractility

    International Nuclear Information System (INIS)

    Kirby, R. Jason; Qi, Feng; Phatak, Sharangdhar; Smith, Layton H.; Malany, Siobhan

    2016-01-01

    Cardiac safety assays incorporating label-free detection of human stem-cell derived cardiomyocyte contractility provide human relevance and medium throughput screening to assess compound-induced cardiotoxicity. In an effort to provide quantitative analysis of the large kinetic datasets resulting from these real-time studies, we applied bioinformatic approaches based on nonlinear dynamical system analysis, including limit cycle analysis and autocorrelation function, to systematically assess beat irregularity. The algorithms were integrated into a software program to seamlessly generate results for 96-well impedance-based data. Our approach was validated by analyzing dose- and time-dependent changes in beat patterns induced by known proarrhythmic compounds and screening a cardiotoxicity library to rank order compounds based on their proarrhythmic potential. We demonstrate a strong correlation for dose-dependent beat irregularity monitored by electrical impedance and quantified by autocorrelation analysis to traditional manual patch clamp potency values for hERG blockers. In addition, our platform identifies non-hERG blockers known to cause clinical arrhythmia. Our method provides a novel suite of medium-throughput quantitative tools for assessing compound effects on cardiac contractility and predicting compounds with potential proarrhythmia and may be applied to in vitro paradigms for pre-clinical cardiac safety evaluation. - Highlights: • Impedance-based monitoring of human iPSC-derived cardiomyocyte contractility • Limit cycle analysis of impedance data identifies aberrant oscillation patterns. • Nonlinear autocorrelation function quantifies beat irregularity. • Identification of hERG and non-hERG inhibitors with known risk of arrhythmia • Automated software processes limit cycle and autocorrelation analyses of 96w data

  11. The Nursing Performance Instrument: Exploratory and Confirmatory Factor Analyses in Registered Nurses.

    Science.gov (United States)

    Sagherian, Knar; Steege, Linsey M; Geiger-Brown, Jeanne; Harrington, Donna

    2018-04-01

    The optimal performance of nurses in healthcare settings plays a critical role in care quality and patient safety. Despite this importance, few measures are provided in the literature that evaluate nursing performance as an independent construct from competencies. The nine-item Nursing Performance Instrument (NPI) was developed to fill this gap. The aim of this study was to examine and confirm the underlying factor structure of the NPI in registered nurses. The design was cross-sectional, using secondary data collected between February 2008 and April 2009 for the "Fatigue in Nursing Survey" (N = 797). The sample was predominantly dayshift female nurses working in acute care settings. Using Mplus software, exploratory and confirmatory factor analyses were applied to the NPI data, which were divided into two equal subsamples. Multiple fit indices were used to evaluate the fit of the alternative models. The three-factor model was determined to fit the data adequately. The factors that were labeled as "physical/mental decrements," "consistent practice," and "behavioral change" were moderately to strongly intercorrelated, indicating good convergent validity. The reliability coefficients for the subscales were acceptable. The NPI consists of three latent constructs. This instrument has the potential to be used as a self-monitoring instrument that addresses nurses' perceptions of performance while providing patient care.

  12. Covariance Estimation and Autocorrelation of NORAD Two-Line Element Sets

    National Research Council Canada - National Science Library

    Osweiler, Victor P

    2006-01-01

    This thesis investigates NORAD two-line element sets (TLE) containing satellite mean orbital elements for the purpose of estimating a covariance matrix and formulating an autocorrelation relationship...

  13. Relationships of Functional Tests Following ACL Reconstruction: Exploratory Factor Analyses of the Lower Extremity Assessment Protocol.

    Science.gov (United States)

    DiFabio, Melissa; Slater, Lindsay V; Norte, Grant; Goetschius, John; Hart, Joseph M; Hertel, Jay

    2018-03-01

    After ACL reconstruction (ACLR), deficits are often assessed using a variety of functional tests, which can be time consuming. It is unknown whether these tests provide redundant or unique information. To explore relationships between components of a battery of functional tests, the Lower Extremity Assessment Protocol (LEAP) was created to aid in developing the most informative, concise battery of tests for evaluating ACLR patients. Descriptive, cross-sectional. Laboratory. 76 ACLR patients (6.86±3.07 months postoperative) and 54 healthy participants. Isokinetic knee flexion and extension at 90 and 180 degrees/second, maximal voluntary isometric contraction for knee extension and flexion, single leg balance, 4 hopping tasks (single, triple, crossover, and 6-meter timed hop), and a bilateral drop vertical jump that was scored with the Landing Error Scoring System (LESS). Peak torque, average torque, average power, total work, fatigue indices, center of pressure area and velocity, hop distance and time, and LESS score. A series of factor analyses were conducted to assess grouping of functional tests on the LEAP for each limb in the ACLR and healthy groups and limb symmetry indices (LSI) for both groups. Correlations were run between measures that loaded on retained factors. Isokinetic and isometric strength tests for knee flexion and extension, hopping, balance, and fatigue index were identified as unique factors for all limbs. The LESS score loaded with various factors across the different limbs. The healthy group LSI analysis produced more factors than the ACLR LSI analysis. Individual measures within each factor had moderate to strong correlations. Isokinetic and isometric strength, hopping, balance, and fatigue index provided unique information. Within each category of measures, not all tests may need to be included for a comprehensive functional assessment of ACLR patients due to the high amount of shared variance between them.

  14. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Full Text Available Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods are ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  15. Intensity autocorrelation measurements of frequency combs in the terahertz range

    Science.gov (United States)

    Benea-Chelmus, Ileana-Cristina; Rösch, Markus; Scalari, Giacomo; Beck, Mattias; Faist, Jérôme

    2017-09-01

    We report on direct measurements of the emission character of quantum cascade laser based frequency combs, using intensity autocorrelation. Our implementation is based on fast electro-optic sampling, with a detection spectral bandwidth matching the emission bandwidth of the comb laser, around 2.5 THz. We find the output of these frequency combs to be continuous even in the locked regime, but accompanied by a strong intensity modulation. Moreover, with our record temporal resolution of only a few hundred femtoseconds, we can resolve correlated intensity modulation occurring on time scales as short as the gain recovery time, about 4 ps. By direct comparison with pulsed terahertz light originating from a photoconductive emitter, we demonstrate the peculiar emission pattern of these lasers. The measurement technique is self-referenced and ultrafast, and requires no reconstruction. It will be of significant importance in future measurements of ultrashort pulses from quantum cascade lasers.

  16. Determination of the friction coefficient via the force autocorrelation function. A molecular dynamics investigation for a dense Lennard-Jones fluid

    International Nuclear Information System (INIS)

    Vogelsang, R.; Hoheisel, C.

    1987-01-01

    For a large region of dense fluid states of a Lennard-Jones system, the authors have calculated the friction coefficient from the force autocorrelation function of a Brownian-type particle by molecular dynamics (MD). The time integral over the force autocorrelation function showed an interesting behavior and the expected plateau value when the mass of the Brownian particle was chosen to be about a factor of 100 larger than the mass of the fluid particle. Sufficient agreement was found between the friction coefficient calculated in this way and that obtained from calculations of the self-diffusion coefficient using the common relation between these coefficients. Furthermore, a modified friction coefficient was determined by integration of the force autocorrelation function up to its first maximum. This coefficient can successfully be used to derive a reasonable soft part of the friction coefficient necessary for the Rice-Allnatt approximation for the shear viscosity of simple liquids.
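
A minimal sketch of the Green-Kubo style estimate discussed here: the friction coefficient as the time integral of the force autocorrelation function, read off near its plateau. The exponentially correlated synthetic force series only illustrates the numerics; it is not molecular dynamics data.

```python
import numpy as np

# Exponentially correlated synthetic "force" series; illustrative, not MD data.
rng = np.random.default_rng(2)
kT, dt, n = 1.0, 0.005, 200_000
tau_c, sigma_f = 0.05, 3.0                    # correlation time, force amplitude
a = np.exp(-dt / tau_c)
f = np.zeros(n)
for i in range(1, n):                         # AR(1) gives an exponential force ACF
    f[i] = a * f[i - 1] + sigma_f * np.sqrt(1 - a * a) * rng.normal()

fc = f - f.mean()
c = np.fft.irfft(np.abs(np.fft.rfft(fc, n=2 * n)) ** 2)[:n]
acf = c / np.arange(n, 0, -1)                 # unbiased force ACF estimate
running = np.cumsum(acf) * dt / kT            # running Green-Kubo integral
gamma = running[int(10 * tau_c / dt)]         # read off near the plateau
print(f"friction coefficient ~ {gamma:.2f} (expected {sigma_f**2 * tau_c / kT:.2f})")
```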

  17. An autocorrelation method to detect low frequency earthquakes within tremor

    Science.gov (United States)

    Brown, J.R.; Beroza, G.C.; Shelly, D.R.

    2008-01-01

    Recent studies have shown that deep tremor in the Nankai Trough under western Shikoku consists of a swarm of low frequency earthquakes (LFEs) that occur as slow shear slip on the down-dip extension of the primary seismogenic zone of the plate interface. The similarity of tremor in other locations suggests a similar mechanism, but the absence of cataloged low frequency earthquakes prevents a similar analysis. In this study, we develop a method for identifying LFEs within tremor. The method employs a matched-filter algorithm, similar to the technique used to infer that tremor in parts of Shikoku is comprised of LFEs; however, in this case we do not assume the origin times or locations of any LFEs a priori. We search for LFEs using the running autocorrelation of tremor waveforms for 6 Hi-Net stations in the vicinity of the tremor source. Time lags showing strong similarity in the autocorrelation represent either repeats, or near repeats, of LFEs within the tremor. We test the method on an hour of Hi-Net recordings of tremor and demonstrate that it extracts both known and previously unidentified LFEs. Once identified, we cross correlate waveforms to measure relative arrival times and locate the LFEs. The results are able to explain most of the tremor as a swarm of LFEs, and the locations of newly identified events appear to fill a gap in the spatial distribution of known LFEs. This method should allow us to extend the analysis of Shelly et al. (2007a) to parts of the Nankai Trough in Shikoku that have sparse LFE coverage, and may also allow us to extend our analysis to other regions that experience deep tremor, but where LFEs have not yet been identified. Copyright 2008 by the American Geophysical Union.
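
A toy sketch of the windowed-autocorrelation idea in this record: sliding windows of a waveform are compared with one another, and pairs with high normalised correlation flag repeated embedded events. The window length, the argmax-based selection and all signal parameters are hypothetical choices, not the authors' Hi-Net processing.

```python
import numpy as np

# Three copies of a small "event" buried in noise; all parameters hypothetical.
rng = np.random.default_rng(3)
fs, win = 100, 200                              # Hz, 2 s analysis windows
event = np.hanning(win) * np.sin(2 * np.pi * 3 * np.arange(win) / fs)
trace = rng.normal(0, 0.3, 60 * fs)
for onset in (500, 2100, 4400):
    trace[onset:onset + win] += event

def unit(x):
    x = x - x.mean()
    return x / (np.linalg.norm(x) + 1e-12)

starts = list(range(0, trace.size - win, win // 2))
wins = np.array([unit(trace[s:s + win]) for s in starts])
cc = wins @ wins.T                              # correlation between all window pairs
np.fill_diagonal(cc, 0.0)
i, j = np.unravel_index(np.argmax(cc), cc.shape)
print(f"most similar windows start at samples {starts[i]} and {starts[j]} "
      f"(correlation {cc[i, j]:.2f})")
```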

  18. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

    Full Text Available It is expected that improvement of transport networks could give rise to changes in the spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was firstly applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, the spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model is newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the results of the simulations to the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.
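
For context, a minimal sketch of a basic spatial-autocorrelation diagnostic (Moran's I), the kind of statistic underlying a spatial term in a zonal model; the zone values and contiguity matrix below are hypothetical, not the Beijing data.

```python
import numpy as np

# Hypothetical zonal values and binary contiguity matrix (1 = neighbouring zones).
x = np.array([12.0, 15.0, 14.0, 30.0, 28.0, 31.0])   # e.g. zonal trip rates
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

z = x - x.mean()
moran_I = (x.size / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {moran_I:.3f}  (values near +1 indicate strong spatial clustering)")
```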

  19. Inequality of obesity and socioeconomic factors in Iran: a systematic review and meta-analyses.

    Science.gov (United States)

    Djalalinia, Shirin; Peykari, Niloofar; Qorbani, Mostafa; Larijani, Bagher; Farzadfar, Farshad

    2015-01-01

    Socioeconomic status and demographic factors, such as education, occupation, place of residence, gender, age, and marital status, have been reported to be associated with obesity. We conducted a systematic review to summarize the evidence on associations between socioeconomic factors and obesity/overweight in the Iranian population. We systematically searched the international databases ISI, PubMed/Medline and Scopus, and the national databases Iran-medex, Irandoc, and Scientific Information Database (SID). We refined data for associations between socioeconomic factors and obesity/overweight by sex, age, province, and year. There were no limitations on time or language. Based on our search strategy we found 151 records; of them 139 were from international databases and the remaining 12 were obtained from national databases. After removing duplicates, via the refining steps, only 119 articles were found related to our study domains. Extracted results were attributed to 146,596 persons' data from the included studies. Increased age, low educational level, being married, residence in urban areas, and female sex were clearly associated with obesity. These results could be useful for better health policy and more planned studies in this field. They could also be used for future complementary analyses.

  20. Sequence and expression analyses of ethylene response factors highly expressed in latex cells from Hevea brasiliensis.

    Directory of Open Access Journals (Sweden)

    Piyanuch Piyatrakul

    Full Text Available The AP2/ERF superfamily encodes transcription factors that play a key role in plant development and responses to abiotic and biotic stress. In Hevea brasiliensis, ERF genes have been identified by RNA sequencing. This study set out to validate the number of HbERF genes, and identify ERF genes involved in the regulation of latex cell metabolism. A comprehensive Hevea transcriptome was improved using additional RNA reads from reproductive tissues. Newly assembled contigs were annotated in the Gene Ontology database and were assigned to 3 main categories. The AP2/ERF superfamily is the third most represented compared with other transcription factor families. A comparison with genomic scaffolds led to an estimation of 114 AP2/ERF genes and 1 soloist in Hevea brasiliensis. Based on a phylogenetic analysis, functions were predicted for 26 HbERF genes. A relative transcript abundance analysis was performed by real-time RT-PCR in various tissues. Transcripts of ERFs from group I and VIII were very abundant in all tissues while those of group VII were highly accumulated in latex cells. Seven of the thirty-five ERF expression marker genes were highly expressed in latex. Subcellular localization and transactivation analyses suggested that HbERF-VII candidate genes encoded functional transcription factors.

  1. Some notes concerning the fourier transformation of auto-correlation functions; Quelques notes sur la transformee de fourier des fonctions d'autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Froelicher, B; Dalfes, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1968-07-01

    A study is made of the passage of the auto-correlation function to the frequency spectrum by a numerical Fourier transformation. Two principal characteristics of auto-correlation functions, the time between two points and the total time, are related to two oscillations which appear in the frequency spectrum and which deform it. Various methods are proposed for reducing the effect of these two parasitic oscillations and for re-obtaining the real spectrum. (authors)
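
A small sketch of the truncation effect discussed in this record, under assumed parameters: Fourier-transforming an ACF known only up to a finite total lag produces spectral ripple, and one common remedy, a lag window (here a Hann taper), suppresses it. This is only one of several possible corrections, not necessarily the authors' method.

```python
import numpy as np

# Example ACF truncated at a finite total lag; parameters are hypothetical.
dt, max_lag = 0.01, 1.0                                    # s
tau = np.arange(0, max_lag, dt)
acf = np.exp(-tau / 0.5) * np.cos(2 * np.pi * 2.0 * tau)   # spectral peak at 2 Hz

taper = np.hanning(2 * tau.size)[tau.size:]    # decaying half of a Hann window
spec_raw = np.fft.rfft(acf).real               # truncated ACF -> rippled spectrum
spec_win = np.fft.rfft(acf * taper).real       # lag-windowed ACF -> smoother spectrum
freqs = np.fft.rfftfreq(tau.size, dt)

far = freqs > 6.0                              # well away from the 2 Hz peak
print(f"ripple far from the peak: raw {spec_raw[far].std():.4f}, "
      f"windowed {spec_win[far].std():.4f}")
```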

  2. Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Forbes, Andrew

    2014-01-01

    This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference. PMID:24413702

  3. Assessing an organizational culture instrument based on the Competing Values Framework: Exploratory and confirmatory factor analyses

    Science.gov (United States)

    Helfrich, Christian D; Li, Yu-Fang; Mohr, David C; Meterko, Mark; Sales, Anne E

    2007-01-01

    Background The Competing Values Framework (CVF) has been widely used in health services research to assess organizational culture as a predictor of quality improvement implementation, employee and patient satisfaction, and team functioning, among other outcomes. CVF instruments generally are presented as well-validated with reliable aggregated subscales. However, only one study in the health sector has been conducted for the express purpose of validation, and that study population was limited to hospital managers from a single geographic locale. Methods We used exploratory and confirmatory factor analyses to examine the underlying structure of data from a CVF instrument. We analyzed cross-sectional data from a work environment survey conducted in the Veterans Health Administration (VHA). The study population comprised all staff in non-supervisory positions. The survey included 14 items adapted from a popular CVF instrument, which measures organizational culture according to four subscales: hierarchical, entrepreneurial, team, and rational. Results Data from 71,776 non-supervisory employees (approximate response rate 51%) from 168 VHA facilities were used in this analysis. Internal consistency of the subscales was moderate to strong (α = 0.68 to 0.85). However, the entrepreneurial, team, and rational subscales had higher correlations across subscales than within, indicating poor divergent properties. Exploratory factor analysis revealed two factors, comprising the ten items from the entrepreneurial, team, and rational subscales loading on the first factor, and two items from the hierarchical subscale loading on the second factor, along with one item from the rational subscale that cross-loaded on both factors. Results from confirmatory factor analysis suggested that the two-subscale solution provides a more parsimonious fit to the data as compared to the original four-subscale model. Conclusion This study suggests that there may be problems applying conventional

  4. LETTER TO THE EDITOR: Exhaustive search for low-autocorrelation binary sequences

    Science.gov (United States)

    Mertens, S.

    1996-09-01

    Binary sequences with low autocorrelations are important in communication engineering and in statistical mechanics as ground states of the Bernasconi model. Computer searches are the main tool in the construction of such sequences. Owing to the exponential size 2^N of the configuration space, exhaustive searches are limited to short sequences. We discuss an exhaustive search algorithm with run-time characteristic 1.85^N and apply it to compile a table of exact ground states of the Bernasconi model up to N = 48. The data suggest F > 9 for the optimal merit factor in the limit N → ∞.
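
A short sketch of the quantities involved: the aperiodic autocorrelations C_k of a ±1 sequence, the Bernasconi energy E = Σ_k C_k², and the merit factor F = N²/(2E). The brute-force scan below is feasible only for very small N, which is why the record's exhaustive search to N = 48 required a more refined algorithm.

```python
import itertools
import numpy as np

def energy(s):
    """Bernasconi energy E = sum over lags k of the squared aperiodic C_k."""
    n = len(s)
    return sum(int(np.dot(s[:n - k], s[k:])) ** 2 for k in range(1, n))

N = 13                                         # brute force is hopeless for large N
best_E, best_s = min(
    (energy(np.array(bits)), bits)
    for bits in itertools.product((-1, 1), repeat=N)
)
print(f"N = {N}: minimum energy {best_E}, merit factor {N * N / (2 * best_E):.3f}")
print("one ground-state sequence:", best_s)
```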

  5. Interactions between risk factors in the prediction of onset of eating disorders: Exploratory hypothesis generating analyses.

    Science.gov (United States)

    Stice, Eric; Desjardins, Christopher D

    2018-06-01

    Because no study has tested for interactions between risk factors in the prediction of future onset of each eating disorder, this exploratory study addressed this lacuna to generate hypotheses to be tested in future confirmatory studies. Data from three prevention trials that targeted young women at high risk for eating disorders due to body dissatisfaction (N = 1271; M age 18.5, SD 4.2) and collected diagnostic interview data over 3-year follow-up were combined to permit sufficient power to predict onset of anorexia nervosa (AN), bulimia nervosa (BN), binge eating disorder (BED), and purging disorder (PD) using classification tree analyses, an analytic technique uniquely suited to detecting interactions. Low BMI was the most potent predictor of AN onset, and body dissatisfaction amplified this relation. Overeating was the most potent predictor of BN onset, and positive expectancies for thinness and body dissatisfaction amplified this relation. Body dissatisfaction was the most potent predictor of BED onset, and overeating, low dieting, and thin-ideal internalization amplified this relation. Dieting was the most potent predictor of PD onset, and negative affect and positive expectancies for thinness amplified this relation. Results provided evidence of amplifying interactions between risk factors suggestive of cumulative risk processes that were distinct for each disorder; future confirmatory studies should test the interactive hypotheses generated by these analyses. If hypotheses are confirmed, results may allow interventionists to target ultra high-risk subpopulations with more intensive prevention programs that are uniquely tailored for each eating disorder, potentially improving the yield of prevention efforts. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of the return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: The ROSC success rate was only 52.1% in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773∼0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
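
A generic sketch, not the study's data or software, of the ROC evaluation described here: the area under the ROC curve for a continuous predictor (a stand-in for AMSA at the start of CPR) versus a binary outcome (ROSC), plus a Youden-index cut-off. All values are synthetic.

```python
import numpy as np

# Synthetic predictor values for the two outcome groups; not the study data.
rng = np.random.default_rng(5)
score_rosc = rng.normal(20, 4, 25)             # stand-in for AMSA, ROSC achieved
score_no = rng.normal(13, 4, 23)               # stand-in for AMSA, no ROSC

# AUC as the Mann-Whitney probability that a ROSC case outranks a non-ROSC case
auc = np.mean([x > y for x in score_rosc for y in score_no])
print(f"area under the ROC curve ~ {auc:.3f}")

# Cut-off chosen by maximising sensitivity + specificity - 1 (Youden index)
cuts = np.sort(np.concatenate([score_rosc, score_no]))
youden = [(np.mean(score_rosc >= c) + np.mean(score_no < c) - 1, c) for c in cuts]
print(f"optimum cut-off ~ {max(youden)[1]:.2f}")
```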

  7. The specification of weight structures in network autocorrelation models of social influence

    NARCIS (Netherlands)

    Leenders, Roger Th.A.J.

    2002-01-01

    Many physical and social phenomena are embedded within networks of interdependencies, the so-called 'context' of these phenomena. In network analysis, this type of process is typically modeled as a network autocorrelation model. Parameter estimates and inferences based on autocorrelation models,

  8. A Quantized Analog Delay for an ir-UWB Quadrature Downconversion Autocorrelation Receiver

    NARCIS (Netherlands)

    Bagga, S.; Zhang, L.; Serdijn, W.A.; Long, J.R.; Busking, E.B.

    2005-01-01

    A quantized analog delay is designed as a requirement for the autocorrelation function in the quadrature downconversion autocorrelation receiver (QDAR). The quantized analog delay is comprised of a quantizer, multiple binary delay lines and an adder circuit. Being the foremost element, the quantizer

  9. Generalised partial autocorrelations and the mutual information between past and future

    DEFF Research Database (Denmark)

    Proietti, Tommaso; Luati, Alessandra

    the generalized partial autocorrelations as the partial autocorrelation coefficients of an auxiliary process, we derive their properties and relate them to essential features of the original process. Based on a parameterisation suggested by Barndorff-Nielsen and Schou (1973) and on Whittle likelihood, we develop...

  10. MATLAB-Based Program for Teaching Autocorrelation Function and Noise Concepts

    Science.gov (United States)

    Jovanovic Dolecek, G.

    2012-01-01

    An attractive MATLAB-based tool for teaching the basics of autocorrelation function and noise concepts is presented in this paper. This tool enhances traditional in-classroom lecturing. The demonstrations of the tool described here highlight the description of the autocorrelation function (ACF) in a general case for wide-sense stationary (WSS)…

  11. Analyse of the prevalence rate and risk factors of pulmonary embolism in the patients with dyspnea

    International Nuclear Information System (INIS)

    Cao Yanxia; Su Jian; Wang Bingsheng; Wu Songhong; Dai Ruiting; Cao Caixia

    2005-01-01

    Objective: To analyse the prevalence rate and risk factors of pulmonary embolism (PE) in patients with dyspnea and to explore the predisposing causes and its early clinical manifestations. Methods: Retrospective analysis was done in 461 patients with dyspnea who underwent 99Tcm-macroaggregated albumin (MAA) lung perfusion imaging and 99Tcm-DTPA ventilation imaging, or 99Tcm-MAA perfusion imaging and chest X-ray examination. Among them, 48 cases without apparent disease were considered as the control group, whereas the remaining patients with other underlying illnesses formed the patients group. The PEMS statistics software package was used for estimation of the prevalence rate, the χ² test and PE risk factor analysis. Results: There were 251 PE patients among the 461 patients; the prevalence rate [π, 95% confidence interval (CI)] was: lower extremity thrombosis and varicosity (80.79-95.47), post cesarean section (55.64-87.12), lower extremity bone surgery or fracture (52.76-87.27), cancer operation (52.19-78.19), atrial fibrillation or heart failure (53.30-74.88), obesity (23.14-50.20), post abdominal surgery (20.23-59.43), diabetes (19.12-63.95), chronic bronchitis (1.80-23.06), normal control group (3.47-22.66). Except for chronic bronchitis, the PE prevalence rate differed significantly between the patients group and the control group (P …). 99Tcm-MAA and DTPA lung imaging should be done as early as possible. (authors)

  12. Risk factors for headache in the UK military: cross-sectional and longitudinal analyses.

    Science.gov (United States)

    Rona, Roberto J; Jones, Margaret; Goodwin, Laura; Hull, Lisa; Wessely, Simon

    2013-05-01

    To assess the importance of service demographic, mental disorders, and deployment factors on headache severity and prevalence, and to assess the impact of headache on functional impairment. There is no information on prevalence and risk factors of headache in the UK military. Recent US reports suggest that deployment, especially a combat role, is associated with headache. Such an association may have serious consequences on personnel during deployment. A survey was carried out between 2004 and 2006 (phase 1) and again between 2007 and 2009 (phase 2) of randomly selected UK military personnel to study the health consequences of the Iraq and Afghanistan wars. This study is based on those who participated in phase 2 and includes cross-sectional and longitudinal analyses. Headache severity in the last month and functional impairment at phase 2 were the main outcomes. Forty-six percent complained of headache in phase 2, half of whom endorsed moderate or severe headache. Severe headache was strongly associated with probable post-traumatic stress disorder (multinomial odds ratio [MOR] 9.6, 95% confidence interval [CI] 6.4-14.2), psychological distress (MOR 6.15, 95% CI 4.8-7.9), multiple physical symptoms (MOR 18.2, 95% CI 13.4-24.6) and self-reported mild traumatic brain injury (MOR 3.5, 95% CI 1.4-8.6) after adjustment for service demographic factors. Mild headache was also associated with these variables but at a lower level. Moderate and severe headache were associated with functional impairment, but the association was partially explained by mental disorders. Mental ill health was also associated with reporting moderate and severe headache at both phase 1 and phase 2. Deployment and a combat role were not associated with headache. Moderate and severe headache are common in the military and have an impact on functional impairment. They are more strongly associated with mental disorders than with mild traumatic brain injury. © 2013 American Headache Society.

  13. Analyses of associations between reactive oxygen metabolites and antioxidant capacity and related factors among healthy adolescents.

    Science.gov (United States)

    Tamae, Kazuyoshi; Eto, Toshiharu; Aoki, Kazuhiro; Nakamaru, Shingo; Koshikawa, Kazunori; Sakuma, Kazuhiko; Hirano, Takeshi

    2013-12-01

    Evidence based on epidemiologic investigations using biochemical parameters is meaningful for health promotion and administration among adolescents. We conducted Reactive Oxygen Metabolites (ROM) and Biological Antioxidant Potentials (BAP) tests, along with a questionnaire survey, for a sample of 74 high school students (mean ± SE age 16.51 ± 0.11 years), to investigate the associations between ROM, BAP, and related factors, including BMI and blood biochemical data. Venous blood samples (approximately 7 cc) were collected. At the same time, each individual's information was obtained from the questionnaire. Mental health status was investigated using the Center for Epidemiologic Study Depression scale (CES-D) included in the same questionnaire. The mean values and standard errors of all variables were calculated. In addition, the relationships of ROM and BAP with these factors were analyzed. The results revealed the preferred levels of ROM (261.95 ± 9.52 U.CARR), BAP (2429.89 ± 53.39 µmol/L) and blood biochemical data. Few significant relationships between the two markers and related factors were found. We detected a cluster with an imbalance between ROM and BAP, which means low antioxidant ability, whereas the other clusters had conditions with moderate or good balance between them. Moreover, we determined the Oxidative stress-Antioxidant capacity ratio (OAR), using the ROM and BAP values, in order to clarify the characteristics of the detected clusters. However, comparative analyses across the three clusters did not yield significant differences in any of the related factors. No correlations between ROM, BAP and related factors were indicated, although a significant association between ROM and BAP was observed (R² = 0.1156, R = 0.340, P = 0.013). The reason for these results can be explained by the influences of good health and young age. On the other hand, the present study suggests that some latent problems among adolescents may be related to unhealthy…

  14. Genome-wide identification and function analyses of heat shock transcription factors in potato

    Directory of Open Access Journals (Sweden)

    Ruimin eTang

    2016-04-01

    Full Text Available Heat shock transcription factors (Hsfs) play vital roles in the regulation of tolerance to various stresses in living organisms. To dissect the mechanisms of the Hsfs in potato adaptation to abiotic stresses, genome and transcriptome analyses of the Hsf gene family were carried out in Solanum tuberosum L. Twenty-seven StHsf members were identified by bioinformatics and phylogenetic analyses and were classified into A, B and C groups according to their structural and phylogenetic features. StHsfs in the same class shared similar gene structures and conserved motifs. The chromosomal location analysis showed that 27 Hsfs were located in 10 of 12 chromosomes (except chromosomes 1 and 5) and that 18 of these genes formed 9 paralogous pairs. Expression profiles of StHsfs in 12 different organs and tissues uncovered distinct spatial expression patterns of these genes and their potential roles in the process of growth and development. Promoter and quantitative real-time polymerase chain reaction (qRT-PCR) analyses of StHsfs were conducted and demonstrated that these genes were all responsive to various stresses. StHsf004, StHsf007, StHsf009, StHsf014 and StHsf019 were constitutively expressed under non-stress conditions, and some specific Hsfs became the predominant Hsfs in response to different abiotic stresses, indicating their important and diverse regulatory roles in adverse conditions. A co-expression network between StHsfs and StHsf-co-expressed genes was generated based on the publicly-available potato transcriptomic databases and identified key candidate StHsfs for further functional studies.

  15. Code conforming determination of cumulative usage factors for general elastic-plastic finite element analyses

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Goetz, Andreas; Hilpert, Roland

    2012-01-01

    The procedures of fatigue analyses of several relevant nuclear and conventional design codes (ASME, KTA, EN, AD) for power plant components differentiate between an elastic, simplified elastic-plastic and elastic-plastic fatigue check. As a rule, operational load levels will exclude the purely elastic fatigue check. The application of the code procedure of the simplified elastic-plastic fatigue check is common practice. Nevertheless, resulting cumulative usage factors may be overly conservative, mainly due to high code-based plastification penalty factors Ke. As a consequence, the more complex and still code-conforming general elastic-plastic fatigue analysis methodology based on non-linear finite element analysis (FEA) is applied for fatigue design as an alternative. The requirements of the FEA and the material law to be applied have to be clarified in a first step. Current design codes only give rough guidelines on these relevant items. While the procedure for the simplified elastic-plastic fatigue analysis and the associated code passages are based on stress-related cycle counting and the determination of pseudo-elastic equivalent stress ranges, an adaptation to elastic-plastic strains and strain ranges is required for the elastic-plastic fatigue check. The associated requirements are explained in detail in the paper. If the established and implemented evaluation mechanism (cycle counting according to the peak-and-valley or the rainflow method, calculation of stress ranges from arbitrary load-time histories and determination of cumulative usage factors based on all load events) is to be retained, a conversion of elastic-plastic strains and strain ranges into pseudo-elastic stress ranges is required. The algorithm to be applied is described in the paper. It has to be implemented as an extended post-processing operation of the FEA, e.g. by APDL scripts in ANSYS®. Variations of principal stress (strain) directions during the loading
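
    As background to the cumulative usage factor discussed above, the sketch below shows the usual Miner-rule bookkeeping: rainflow-counted elastic-plastic strain ranges are converted into pseudo-elastic stress amplitudes and compared against a design fatigue curve. The fatigue curve, elastic modulus and cycle counts are illustrative placeholders and are not taken from any design code:

      import numpy as np

      # Hypothetical design fatigue curve: allowable cycles N as a function of the
      # (pseudo-elastic) stress amplitude Sa in MPa.  A real analysis would use the
      # tabulated curve of the governing code (e.g. ASME III or KTA 3201.2).
      def allowable_cycles(sa_mpa):
          return 1.0e6 * (200.0 / sa_mpa) ** 3     # placeholder power-law curve

      elastic_modulus = 200_000.0                   # MPa, assumed material modulus

      # Rainflow-counted elastic-plastic strain ranges and their cycle counts (illustrative)
      strain_ranges = np.array([0.004, 0.002, 0.001])
      cycle_counts  = np.array([50, 500, 5000])

      # Convert strain ranges to pseudo-elastic stress amplitudes: Sa = E * delta_eps / 2
      stress_amplitudes = elastic_modulus * strain_ranges / 2.0

      # Miner's rule: the cumulative usage factor is the sum of n_i / N_i over all bins
      usage_factor = np.sum(cycle_counts / allowable_cycles(stress_amplitudes))
      print(f"Cumulative usage factor U = {usage_factor:.3f}")   # design limit typically U <= 1.0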

  16. Multilevel models for multiple-baseline data: modeling across-participant variation in autocorrelation and residual variance.

    Science.gov (United States)

    Baek, Eun Kyeng; Ferron, John M

    2013-03-01

    Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
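
    A simple diagnostic in the spirit of this per-participant error structure is to estimate the lag-1 autocorrelation and residual variance separately for each participant before deciding how to model the Level-1 covariance. The residual series below are hypothetical, and the estimator is the ordinary sample lag-1 autocorrelation rather than the restricted-maximum-likelihood estimates an MLM would produce:

      import numpy as np

      def lag1_autocorrelation(x):
          # Sample lag-1 autocorrelation of a single participant's residual series
          x = np.asarray(x, dtype=float)
          x = x - x.mean()
          return np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)

      # Hypothetical within-phase residuals for three participants (illustrative only)
      residuals = {
          "participant_1": [0.4, 0.6, 0.1, -0.2, 0.3, 0.5, -0.1],
          "participant_2": [1.2, -0.8, 0.9, -1.1, 0.7, -0.6, 0.8],
          "participant_3": [0.1, 0.2, 0.15, 0.05, 0.1, 0.25, 0.2],
      }

      for name, res in residuals.items():
          rho = lag1_autocorrelation(res)
          var = np.var(res, ddof=1)
          print(f"{name}: lag-1 autocorrelation = {rho:.2f}, residual variance = {var:.2f}")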

  17. Identification and expression analyses of MYB and WRKY transcription factor genes in Papaver somniferum L.

    Science.gov (United States)

    Kakeshpour, Tayebeh; Nayebi, Shadi; Rashidi Monfared, Sajad; Moieni, Ahmad; Karimzadeh, Ghasem

    2015-10-01

    Papaver somniferum L. is an herbaceous, annual and diploid plant that is important from a pharmacological and strategic point of view. The cDNA clones of two putative MYB and WRKY genes were isolated (GenBank accession numbers KP411870 and KP203854, respectively) from this plant, via the nested-PCR method, and characterized. The MYB transcription factor (TF) comprises 342 amino acids, and exhibits the structural features of the R2R3MYB protein family. The WRKY TF, a 326 amino acid-long polypeptide, falls structurally into group II of the WRKY protein family. Quantitative real-time PCR (qRT-PCR) analyses indicate the presence of these TFs in all organs of P. somniferum L. and Papaver bracteatum L. The highest expression levels of these two TFs were observed in the leaf tissues of P. somniferum L., while in P. bracteatum L. the expression levels were highest in the root tissues. Promoter analysis of the cluster of 10 co-expressed genes involved in the noscapine biosynthesis pathway in P. somniferum L. suggested that these 10 genes not only are co-expressed but also share common regulatory motifs and TFs, including MYB and WRKY TFs, which may explain their common regulation.

  18. Evolutionary and Expression Analyses of the Apple Basic Leucine Zipper Transcription Factor Family

    Science.gov (United States)

    Zhao, Jiao; Guo, Rongrong; Guo, Chunlei; Hou, Hongmin; Wang, Xiping; Gao, Hua

    2016-01-01

    Transcription factors (TFs) play essential roles in the regulatory networks controlling many developmental processes in plants. Members of the basic leucine (Leu) zipper (bZIP) TF family, which is unique to eukaryotes, are involved in regulating diverse processes, including flower and vascular development, seed maturation, stress signaling, and defense responses to pathogens. The bZIP proteins have a characteristic bZIP domain composed of a DNA-binding basic region and a Leu zipper dimerization region. In this study, we identified 112 apple (Malus domestica Borkh) bZIP TF-encoding genes, termed MdbZIP genes. Synteny analysis indicated that segmental and tandem duplication events, as well as whole genome duplication, have contributed to the expansion of the apple bZIP family. The family could be divided into 11 groups based on structural features of the encoded proteins, as well as on the phylogenetic relationship of the apple bZIP proteins to those of the model plant Arabidopsis thaliana (AtbZIP genes). Synteny analysis revealed that several paired MdbZIP genes and AtbZIP gene homologs were located in syntenic genomic regions. Furthermore, expression analyses of group A MdbZIP genes showed distinct expression levels in 10 different organs. Moreover, changes in these expression profiles in response to abiotic stress conditions and various hormone treatments identified MdbZIP genes that were responsive to high salinity and drought, as well as to different phytohormones. PMID:27066030

  19. Concept of ground facilities and the analyses of the factors for cost estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. Y.; Choi, H. J.; Choi, J. W.; Kim, S. K.; Cho, D. K

    2007-09-15

    The geologic disposal of the spent fuel generated by nuclear power plants is the only way to protect human beings and the surrounding environment, both now and in the future. The direct disposal of the spent fuel from the nuclear power plants is considered, and a Korean Reference HLW disposal System (KRS) suitable for the representative domestic geological conditions has been developed. In this study, the concept of the spent fuel encapsulation process, as a key part of the above-ground facilities for deep geological disposal, was established. To do this, the design requirements, such as the functions and the spent fuel accumulations, were reviewed. Also, the design principles and bases were established. Based on the requirements and the bases, the encapsulation process of the spent fuel, from receiving spent fuel from the nuclear power plants to transferring the canister into the underground repository, was established. Simulation of the above-ground facility in a graphical environment, based on the KRS design concept and the disposal scenarios for spent nuclear fuel, showed that an appropriate process was achieved with the facility design concept, although further improvement of the construction facility through actual demonstration tests is required. Finally, based on the concept of the above-ground facilities for the Korean Reference HLW disposal System, the factors for the cost estimation were analysed.

  20. Logistic regression and multiple classification analyses to explore risk factors of under-5 mortality in Bangladesh

    International Nuclear Information System (INIS)

    Bhowmik, K.R.; Islam, S.

    2016-01-01

    Logistic regression (LR) analysis is the most common statistical methodology to find out the determinants of childhood mortality. However, the significant predictors cannot be ranked according to their influence on the response variable. Multiple classification (MC) analysis can be applied to identify the significant predictors with a priority index which helps to rank the predictors. The main objective of the study is to find the socio-demographic determinants of childhood mortality at the neonatal, post-neonatal, and post-infant periods by fitting LR models, as well as to rank those determinants through MC analysis. The study is conducted using the data of the Bangladesh Demographic and Health Survey 2007, in which birth and death information on children was collected from their mothers. Three dichotomous response variables are constructed from children's age at death to fit the LR and MC models. Socio-economic and demographic variables significantly associated with the response variables separately are considered in the LR and MC analyses. Both the LR and MC models identified the same significant predictors for specific childhood mortality. For both neonatal and child mortality, biological factors of children, regional settings, and parents' socio-economic status are found to be the first, second, and third most significant groups of predictors, respectively. Mother's education and household environment are detected as major significant predictors of post-neonatal mortality. This study shows that MC analysis, with or without LR analysis, can be applied to detect and rank determinants, which helps policy makers take initiatives on a priority basis. (author)
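
    A minimal sketch of the LR step is given below, using statsmodels to fit a binary logistic regression and report odds ratios. The data frame, variable names and predictors are hypothetical stand-ins for the BDHS 2007 variables, and the MC (multiple classification) ranking step is not reproduced here:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical survey extract: one row per child, with a binary mortality
      # indicator and a few socio-demographic predictors (illustrative data only)
      rng = np.random.default_rng(0)
      n = 500
      df = pd.DataFrame({
          "died": rng.integers(0, 2, n),
          "mother_education_years": rng.integers(0, 13, n),
          "urban": rng.integers(0, 2, n),
          "birth_order": rng.integers(1, 7, n),
      })

      # Fit the logistic regression and report odds ratios for each predictor
      model = smf.logit("died ~ mother_education_years + urban + birth_order", data=df).fit(disp=False)
      print(np.exp(model.params))   # odds ratios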

  1. Evolutionary and Expression Analyses of the Apple Basic Leucine Zipper Transcription Factor Family

    Directory of Open Access Journals (Sweden)

    Jiao eZhao

    2016-03-01

    Full Text Available Transcription factors (TFs) play essential roles in the regulatory networks controlling many developmental processes in plants. Members of the basic leucine (Leu) zipper (bZIP) TF family, which is unique to eukaryotes, are involved in regulating diverse processes, including flower and vascular development, seed maturation, stress signaling and defense responses to pathogens. The bZIP proteins have a characteristic bZIP domain composed of a DNA-binding basic region and a Leu zipper dimerization region. In this study, we identified 112 apple (Malus domestica Borkh) bZIP TF-encoding genes, termed MdbZIP genes. Synteny analysis indicated that segmental and tandem duplication events, as well as whole genome duplication, have contributed to the expansion of the apple bZIP family. The family could be divided into 11 groups based on structural features of the encoded proteins, as well as on the phylogenetic relationship of the apple bZIP proteins to those of the model plant Arabidopsis thaliana (AtbZIP genes). Synteny analysis revealed that several paired MdbZIP genes and AtbZIP gene homologs were located in syntenic genomic regions. Furthermore, expression analyses of group A MdbZIP genes showed distinct expression levels in ten different organs. Moreover, changes in these expression profiles in response to abiotic stress conditions and various hormone treatments identified MdbZIP genes that were responsive to high salinity and drought, as well as to different phytohormones.

  2. AFM topographies of densely packed nanoparticles: a quick way to determine the lateral size distribution by autocorrelation function analysis

    Czech Academy of Sciences Publication Activity Database

    Fekete, Ladislav; Kůsová, Kateřina; Petrák, Václav; Kratochvílová, Irena

    2012-01-01

    Vol. 14, No. 8 (2012), pp. 1-10 ISSN 1388-0764 R&D Projects: GA AV ČR KAN200100801; GA TA ČR TA01011165; GA ČR(CZ) GAP304/10/1951; GA ČR GPP204/12/P235 Institutional research plan: CEZ:AV0Z10100520 Keywords: lateral grain size distribution * AFM * autocorrelation function * nanodiamond Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.175, year: 2012

  3. Adaptive endpoint detection of seismic signal based on auto-correlated function

    International Nuclear Information System (INIS)

    Fan Wanchun; Shi Ren

    2001-01-01

    Based on an analysis of the auto-correlation function, the notion of a distance between auto-correlation functions was introduced, and the characterization of noise and of signal with noise was discussed using this distance. Then, a method of adaptive endpoint detection of seismic signals based on auto-correlation similarity was summarized. The implementation steps and the determination of the thresholds are presented in detail. The experimental results, compared with methods based on manual detection, show that this method has higher sensitivity even under low signal-to-noise ratio conditions.
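
    The core of the approach can be sketched as follows: compute a normalized auto-correlation function (ACF) over a sliding window and measure its distance from a reference noise ACF, declaring an endpoint where the distance exceeds a threshold. The synthetic record, window length and threshold below are illustrative assumptions, not values from the paper:

      import numpy as np

      def acf(x, max_lag=20):
          # Normalized sample auto-correlation function up to max_lag
          x = np.asarray(x, dtype=float) - np.mean(x)
          denom = np.sum(x ** 2)
          return np.array([np.sum(x[k:] * x[:len(x) - k]) / denom for k in range(max_lag + 1)])

      def acf_distance(window, reference_acf, max_lag=20):
          # Euclidean distance between the window's ACF and a reference (noise) ACF
          return np.linalg.norm(acf(window, max_lag) - reference_acf)

      # Illustrative record: noise followed by a decaying oscillatory "signal"
      rng = np.random.default_rng(1)
      noise = rng.normal(0, 1, 400)
      signal = np.sin(2 * np.pi * 0.05 * np.arange(300)) * np.exp(-np.arange(300) / 150)
      record = np.concatenate([noise, noise[:300] * 0.5 + signal])

      window_len, max_lag = 100, 20
      reference_acf = acf(record[:window_len], max_lag)   # assume the record starts with pure noise

      threshold = 1.0                                      # tunable, illustrative value
      for start in range(0, len(record) - window_len, 25):
          d = acf_distance(record[start:start + window_len], reference_acf, max_lag)
          if d > threshold:
              print(f"Possible signal onset near sample {start} (ACF distance {d:.2f})")
              break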

  4. Effectiveness of a selective alcohol prevention program targeting personality risk factors: Results of interaction analyses.

    Science.gov (United States)

    Lammers, Jeroen; Goossens, Ferry; Conrod, Patricia; Engels, Rutger; Wiers, Reinout W; Kleinjan, Marloes

    2017-08-01

    To explore whether specific groups of adolescents (i.e., scoring high on personality risk traits, having a lower education level, or being male) benefit more from the Preventure intervention with regard to curbing their drinking behaviour. A clustered randomized controlled trial, with participants randomly assigned to a 2-session coping skills intervention or a control no-intervention condition. Fifteen secondary schools throughout The Netherlands; 7 schools in the intervention and 8 schools in the control condition. 699 adolescents aged 13-15; 343 allocated to the intervention and 356 to the control condition; with drinking experience and elevated scores in either negative thinking, anxiety sensitivity, impulsivity or sensation seeking. Differential effectiveness of the Preventure program was examined for the personality traits group, education level and gender on past-month binge drinking (main outcome), binge frequency, alcohol use, alcohol frequency and problem drinking, at 12 months post-intervention. Preventure is a selective school-based alcohol prevention programme targeting personality risk factors. The comparator was a no-intervention control. Intervention effects were moderated by the personality traits group and by education level. More specifically, significant intervention effects were found on reducing alcohol use within the anxiety sensitivity group (OR=2.14, CI=1.40, 3.29) and reducing binge drinking (OR=1.76, CI=1.38, 2.24) and binge drinking frequency (β=0.24, p=0.04) within the sensation seeking group at 12 months post-intervention. Also, lower educated young adolescents reduced binge drinking (OR=1.47, CI=1.14, 1.88), binge drinking frequency (β=0.25, p=0.04), alcohol use (OR=1.32, CI=1.06, 1.65) and alcohol use frequency (β=0.47, p=0.01), but not those in the higher education group. Post hoc latent-growth analyses revealed significant effects on the development of binge drinking (β=-0.19, p=0.02) and binge drinking frequency (β=-0.10, p=0

  5. A broadly tunable autocorrelator for ultra-short, ultra-high power infrared optical pulses

    Energy Technology Data Exchange (ETDEWEB)

    Szarmes, E.B.; Madey, J.M.J. [Duke Univ., Durham, NC (United States)

    1995-12-31

    We describe the design of a crossed-beam, optical autocorrelator that uses an uncoated, birefringent beamsplitter to split a linearly polarized incident pulse into two orthogonally polarized pulses, and a Type II, SHG crystal to generate the intensity autocorrelation function. The uncoated beamsplitter accommodates extremely broad tunability while precluding any temporal distortion of ultrashort optical pulses at the dielectric interface, and the specific design provides efficient operation between 1 µm and 4 µm. Furthermore, the use of Type II SHG completely eliminates any single-beam doubling, so the autocorrelator can be operated at very shallow crossed-beam angles without generating a background pedestal. The autocorrelator has been constructed and installed in the Mark III laboratory at Duke University as a broadband diagnostic for ongoing compression experiments on the chirped-pulse FEL.

  6. Performing T-tests to Compare Autocorrelated Time Series Data Collected from Direct-Reading Instruments.

    Science.gov (United States)

    O'Shaughnessy, Patrick; Cavanaugh, Joseph E

    2015-01-01

    Industrial hygienists now commonly use direct-reading instruments to evaluate hazards in the workplace. The stored values over time from these instruments constitute a time series of measurements that are often autocorrelated. Given the need to statistically compare two occupational scenarios using values from a direct-reading instrument, a t-test must consider measurement autocorrelation or the resulting test will have a largely inflated type-1 error probability (false rejection of the null hypothesis). A method is described for both the one-sample and two-sample cases which properly adjusts for autocorrelation. This method involves the computation of an "equivalent sample size" that effectively decreases the actual sample size when determining the standard error of the mean for the time series. An example is provided for the one-sample case, and an example is given where a two-sample t-test is conducted for two autocorrelated time series comprised of lognormally distributed measurements.
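
    A minimal sketch of the one-sample case is shown below, using the common AR(1)-style effective-sample-size adjustment n_eff = n(1 - rho)/(1 + rho); the exact adjustment recommended in the paper may differ, and the simulated exposure series is purely illustrative:

      import numpy as np
      from scipy import stats

      def lag1_rho(x):
          x = np.asarray(x, dtype=float) - np.mean(x)
          return np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)

      def one_sample_t_autocorrelated(x, mu0):
          # One-sample t-test using an "equivalent sample size" that discounts the
          # actual n for positive lag-1 autocorrelation (AR(1)-style adjustment)
          x = np.asarray(x, dtype=float)
          n = len(x)
          rho = lag1_rho(x)
          n_eff = n * (1 - rho) / (1 + rho)        # equivalent (effective) sample size
          se = np.std(x, ddof=1) / np.sqrt(n_eff)  # standard error of the mean
          t = (np.mean(x) - mu0) / se
          p = 2 * stats.t.sf(abs(t), df=n_eff - 1)
          return t, p, n_eff

      # Illustrative autocorrelated exposure series compared against a limit of 1.0
      rng = np.random.default_rng(2)
      x = np.empty(200)
      x[0] = 1.2
      for i in range(1, 200):                      # simulate an AR(1) process around 1.2
          x[i] = 1.2 + 0.6 * (x[i - 1] - 1.2) + rng.normal(0, 0.1)

      t, p, n_eff = one_sample_t_autocorrelated(x, mu0=1.0)
      print(f"t = {t:.2f}, p = {p:.4f}, equivalent n = {n_eff:.1f} (actual n = {len(x)})")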

  7. Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach

    Science.gov (United States)

    Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.

    2018-04-01

    The existence of autocorrelation has a significant effect on the performance and accuracy of process control if the problem is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is usually preferred because of its popularity. However, the computation of the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. One real case of furnace temperature data is used to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and process monitoring. Yet, GBM is superior to the Box-Jenkins method due to its simplicity and practicality, with shorter computational time.
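
    A minimal sketch of the GBM side of such a comparison is given below: drift and volatility are estimated from log-returns and used to set one-step-ahead control limits. The simulated temperature series and the three-sigma limits are illustrative assumptions, not the furnace data or the control scheme used in the paper:

      import numpy as np

      # Simulated stand-in for an autocorrelated furnace-temperature series (illustrative only)
      rng = np.random.default_rng(3)
      dt = 1.0
      temps = 800 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 200)))

      # Estimate GBM parameters from log-returns: ln(X_{t+1}/X_t) ~ N((mu - sigma^2/2) dt, sigma^2 dt)
      log_returns = np.diff(np.log(temps))
      sigma_hat = np.std(log_returns, ddof=1) / np.sqrt(dt)
      mu_hat = np.mean(log_returns) / dt + 0.5 * sigma_hat ** 2

      # One-step-ahead control limits (3 sigma on the log scale) around the conditional median
      last = temps[-1]
      median_next = last * np.exp((mu_hat - 0.5 * sigma_hat ** 2) * dt)
      lcl = median_next * np.exp(-3 * sigma_hat * np.sqrt(dt))
      ucl = median_next * np.exp(+3 * sigma_hat * np.sqrt(dt))
      print(f"mu = {mu_hat:.5f}, sigma = {sigma_hat:.5f}")
      print(f"next-step limits: [{lcl:.1f}, {ucl:.1f}] around median {median_next:.1f}")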

  8. Toward sub-femtosecond pump-probe experiments: a dispersionless autocorrelator with attosecond resolution

    Energy Technology Data Exchange (ETDEWEB)

    Constant, E.; Mevel, E.; Zair, A.; Bagnoud, V.; Salin, F. [Bordeaux-1 Univ., Talence (FR). Centre Lasers Intenses et Applications (CELIA)

    2001-07-01

    We designed a dispersionless autocorrelator with a sub-femtosecond resolution suitable for the characterization of ultrashort X-UV pulses. We present a proof of feasibility experiment with 11 fs infrared pulses. (orig.)

  9. Detecting land cover change using a sliding window temporal autocorrelation approach

    CSIR Research Space (South Africa)

    Kleynhans, W

    2012-07-01

    Full Text Available There have been recent developments in the use of hypertemporal satellite time series data for land cover change detection and classification. Recently, an autocorrelation function (ACF) change detection method was proposed to detect the development...

  10. Development of a Body Image Concern Scale using both exploratory and confirmatory factor analyses in Chinese university students

    Directory of Open Access Journals (Sweden)

    He W

    2017-05-01

    Full Text Available Wenxin He, Qiming Zheng, Yutian Ji, Chanchan Shen, Qisha Zhu, Wei Wang Department of Clinical Psychology and Psychiatry, School of Public Health, Zhejiang University College of Medicine, Hangzhou, People’s Republic of China Background: Body dysmorphic disorder is prevalent in the general population and in psychiatric, dermatological, and plastic-surgery patients, but a structure-validated, comprehensive self-report measure of body image concerns, established through both exploratory and confirmatory factor analyses, is lacking. Methods: We composed a 34-item matrix targeting body image concerns and trialed it in 328 male and 365 female Chinese university students. Responses to the matrix were subjected to exploratory factor analyses, retention of qualified items, and confirmatory factor analyses of the latent structures. Results: Six latent factors, namely Social Avoidance, Appearance Dissatisfaction, Preoccupation with Reassurance, Perceived Distress/Discrimination, Defect Hiding, and Embarrassment in Public, were identified. The factors and their respective items compose a 24-item questionnaire named the Body Image Concern Scale. Each factor showed satisfactory internal reliability, and the intercorrelations between the factors were at a moderate level. Women scored significantly higher than men on Appearance Dissatisfaction, Preoccupation with Reassurance, and Defect Hiding. Conclusion: The Body Image Concern Scale has displayed structural validity and gender differences in Chinese university students. Keywords: body dysmorphic disorder, body image, factor analysis, questionnaire development

  11. Exploratory and Confirmatory Factor Analyses of the WISC-IV with Gifted Students

    Science.gov (United States)

    Rowe, Ellen W.; Dandridge, Jessica; Pawlush, Alexandra; Thompson, Dawna F.; Ferrier, David E.

    2014-01-01

    These 2 studies investigated the factor structure of the Wechsler Intelligence Scale for Children-4th edition (WISC-IV; Wechsler, 2003a) with exploratory factor analysis (EFA; Study 1) and confirmatory factor analysis (CFA; Study 2) among 2 independent samples of gifted students. The EFA sample consisted of 225 children who were referred for a…

  12. Systematic assessment of environmental risk factors for bipolar disorder: an umbrella review of systematic reviews and meta-analyses.

    Science.gov (United States)

    Bortolato, Beatrice; Köhler, Cristiano A; Evangelou, Evangelos; León-Caballero, Jordi; Solmi, Marco; Stubbs, Brendon; Belbasis, Lazaros; Pacchiarotti, Isabella; Kessing, Lars V; Berk, Michael; Vieta, Eduard; Carvalho, André F

    2017-03-01

    The pathophysiology of bipolar disorder is likely to involve both genetic and environmental risk factors. In our study, we aimed to perform a systematic search of environmental risk factors for BD. In addition, we assessed possible hints of bias in this literature, and identified risk factors supported by high epidemiological credibility. We searched the Pubmed/MEDLINE, EMBASE and PsycInfo databases up to 7 October 2016 to identify systematic reviews and meta-analyses of observational studies that assessed associations between putative environmental risk factors and BD. For each meta-analysis, we estimated its summary effect size by means of both random- and fixed-effects models, 95% confidence intervals (CIs), the 95% prediction interval, and heterogeneity. Evidence of small-study effects and excess of significance bias was also assessed. Sixteen publications met the inclusion criteria (seven meta-analyses and nine qualitative systematic reviews). Fifty-one unique environmental risk factors for BD were evaluated. Six meta-analyses investigated associations with a risk factor for BD. Only irritable bowel syndrome (IBS) emerged as a risk factor for BD supported by convincing evidence (k=6; odds ratio [OR]=2.48; 95% CI=2.35-2.61; P<.001), and childhood adversity was supported by highly suggestive evidence. Asthma and obesity were risk factors for BD supported by suggestive evidence, and seropositivity to Toxoplasma gondii and a history of head injury were supported by weak evidence. Notwithstanding that several environmental risk factors for BD were identified, few meta-analyses of observational studies were available. Therefore, further well-designed and adequately powered studies are necessary to map the environmental risk factors for BD. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. An investigation on thermal patterns in Iran based on spatial autocorrelation

    Science.gov (United States)

    Fallah Ghalhari, Gholamabbas; Dadashi Roudbari, Abbasali

    2018-02-01

    The present study aimed at investigating the temporal-spatial and monthly patterns of temperature in Iran using new spatial statistical methods such as cluster and outlier analysis and hotspot analysis. To do so, climatic parameters, namely the monthly average temperatures of 122 synoptic stations, were assessed. Statistical analysis showed that January, with 120.75%, had the most fluctuation among the studied months. Global Moran's Index revealed that yearly changes of temperature in Iran followed a strongly spatially clustered pattern. Findings showed that the biggest thermal cluster pattern in Iran, 0.975388, occurred in May. Cluster and outlier analyses showed that thermal homogeneity in Iran decreases in cold months, while it increases in warm months. This is due to the radiation angle and the synoptic systems which strongly influence the thermal order in Iran; elevation, however, plays the most notable part, as shown by a geographically weighted regression model. Hotspot analysis of Iran's temperatures showed that hot thermal patterns (very hot, hot, and semi-hot) were dominant in the south, covering 33.5% of the area (about 552,145.3 km2). Regions such as mountain feet and lowlands lack any significant spatial autocorrelation (25.2%, covering about 415,345.1 km2). The last is the cold thermal area (very cold, cold, and semi-cold), with about 25.2% covering about 552,145.3 km2 of the whole area of Iran.
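
    Global Moran's I, the statistic underlying the clustering result above, is straightforward to compute from a value vector and a spatial weight matrix. The station temperatures and binary contiguity weights in this sketch are hypothetical; an analysis of 122 stations would typically use distance-based weights:

      import numpy as np

      def morans_i(values, weights):
          # Global Moran's I for values x with spatial weight matrix W (zero diagonal)
          x = np.asarray(values, dtype=float)
          w = np.asarray(weights, dtype=float)
          z = x - x.mean()
          num = np.sum(w * np.outer(z, z))
          return (len(x) / w.sum()) * num / np.sum(z ** 2)

      # Illustrative example: five stations, binary contiguity weights (hypothetical)
      temps = np.array([18.2, 19.1, 17.8, 25.4, 26.0])
      w = np.array([
          [0, 1, 1, 0, 0],
          [1, 0, 1, 0, 0],
          [1, 1, 0, 1, 0],
          [0, 0, 1, 0, 1],
          [0, 0, 0, 1, 0],
      ], dtype=float)

      print(f"Moran's I = {morans_i(temps, w):.3f}")   # values near +1 indicate clustering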

  14. Investigation of the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV): exploratory and higher order factor analyses.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W

    2010-12-01

    The present study examined the factor structure of the Wechsler Adult Intelligence Scale--Fourth Edition (WAIS-IV; D. Wechsler, 2008a) standardization sample using exploratory factor analysis, multiple factor extraction criteria, and higher order exploratory factor analysis (J. Schmid & J. M. Leiman, 1957) not included in the WAIS-IV Technical and Interpretation Manual (D. Wechsler, 2008b). Results indicated that the WAIS-IV subtests were properly associated with the theoretically proposed first-order factors, but all but one factor-extraction criterion recommended extraction of one or two factors. Hierarchical exploratory analyses with the Schmid and Leiman procedure found that the second-order g factor accounted for large portions of total and common variance, whereas the four first-order factors accounted for small portions of total and common variance. It was concluded that the WAIS-IV provides strong measurement of general intelligence, and clinical interpretation should be primarily at that level.

  15. Impact of cardiovascular risk factors on medical expenditure: evidence from epidemiological studies analysing data on health checkups and medical insurance.

    Science.gov (United States)

    Nakamura, Koshi

    2014-01-01

    Concerns have increasingly been raised about the medical economic burden in Japan, of which approximately 20% is attributable to cardiovascular disease, including coronary heart disease and stroke. Because the management of risk factors is essential for the prevention of cardiovascular disease, it is important to understand the relationship between cardiovascular risk factors and medical expenditure in the Japanese population. However, only a few Japanese epidemiological studies analysing data on health checkups and medical insurance have provided evidence on this topic. Patients with cardiovascular risk factors, including obesity, hypertension, and diabetes, may incur medical expenditures through treatment of the risk factors themselves and through procedures for associated diseases that usually require hospitalization and sometimes result in death. Untreated risk factors may cause medical expenditure surges, mainly due to long-term hospitalization, more often than risk factors preventively treated by medication. On an individual patient level, medical expenditures increase with the number of concomitant cardiovascular risk factors. For single risk factors, personal medical expenditure may increase with the severity of that factor. However, on a population level, the medical economic burden attributable to cardiovascular risk factors results largely from a single, particularly prevalent risk factor, especially from mildly-to-moderately abnormal levels of the factor. Therefore, cardiovascular risk factors require management on the basis of both a cost-effective strategy of treating high-risk patients and a population strategy for reducing both the ill health and medical economic burdens that result from cardiovascular disease.

  16. Factors for analysing and improving performance of R&D in Malaysian universities

    NARCIS (Netherlands)

    Ramli, Mohammad Shakir; de Boer, S.J.; de Bruijn, E.J.

    2004-01-01

    This paper presents a model for analysing and improving performance of R&D in Malaysian universities. There are various general models for R&D analysis, but none is specific for improving the performance of R&D in Malaysian universities. This research attempts to fill a gap in the body of knowledge

  17. Exploratory and Confirmatory Factor Analyses of Delirium Symptoms in a Sample of Nursing Home Residents.

    Science.gov (United States)

    Moyo, Patience; Huang, Ting-Ying; Simoni-Wastila, Linda; Harrington, Donna

    2018-02-01

    This study examined the latent constructs of delirium symptoms among nursing home (NH) residents in the United States. Cross-sectional NH assessment data (Minimum Data Set 2.0) from the 2009 Medicare Current Beneficiary Survey were used. Data from two independent, randomly selected subsamples of residents ≥65 years were analyzed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). There were 367 and 366 individuals in the EFA and CFA, respectively. Assessment of multiple model fit statistics in CFA indicated that the two-factor structure provided better fit for the data than a one-factor solution. The two factors represented cognitive and behavioral latent constructs as suggested by the related literature. A correlation of .72 between these constructs suggested moderate discriminant validity. This finding emphasizes the importance of health care providers to be attentive to both cognitive and behavioral symptoms when diagnosing, treating, and managing delirium.

  18. The role of host genetic factors in respiratory tract infectious diseases: systematic review, meta-analyses and field synopsis

    NARCIS (Netherlands)

    Patarčić, Inga; Gelemanović, Andrea; Kirin, Mirna; Kolčić, Ivana; Theodoratou, Evropi; Baillie, Kenneth J.; de Jong, Menno D.; Rudan, Igor; Campbell, Harry; Polašek, Ozren

    2015-01-01

    Host genetic factors have frequently been implicated in respiratory infectious diseases, often with inconsistent results in replication studies. We identified 386 studies from the total of 24,823 studies identified in a systematic search of four bibliographic databases. We performed meta-analyses of

  19. Who will volunteer? Analysing individual and structural factors of volunteering in Swiss sports clubs.

    Science.gov (United States)

    Schlesinger, Torsten; Nagel, Siegfried

    2013-01-01

    This article analyses the conditions influencing volunteering in sports clubs. It focuses not only on individual characteristics of volunteers but also on the corresponding structural conditions of sports clubs. It proposes a model of voluntary work in sports clubs based on economic behaviour theory. The influences of both the individual and context levels on the decision to engage in voluntary work are estimated in different multilevel models. Results of these multilevel analyses indicate that volunteering is not just an outcome of individual characteristics such as lower workloads, higher income, children belonging to the sports club, longer club memberships, or a strong commitment to the club. It is also influenced by club-specific structural conditions; volunteering is more probable in rural sports clubs whereas growth-oriented goals in clubs have a destabilising effect.

  20. Analysing Factors That Drives Customer Loyalty Of Shoes Laundry Quickcares Manado

    OpenAIRE

    Mawa, Maria; Tumbuan, Willem J.F.Alfa; Tielung, Maria V.J

    2017-01-01

    This research analyses the factors that drive customer loyalty at the Quickcares Manado shoe laundry. Shoe laundry is a service that offers cleaning and care for shoes. Customer loyalty is the result of a consistently positive emotional experience, physical attribute-based satisfaction and perceived value of an experience, which includes the product or services. It is important to know the factors that drive customer loyalty in business, in order to guarantee business continuity and...

  1. The Self-Description Inventory+, Part 1: Factor Structure and Convergent Validity Analyses

    Science.gov (United States)

    2013-07-01

    measures 12 scales of personality. The current report examines the possibility of replacing the EQ with a Five Factor Model (FFM) measure of...Checklist. Our results show that the SDI+ has scales that are intercorrelated in a manner consistent with the FFM (Experiment 1), a factor structure...met the criteria showing it to be an FFM instrument, we will conduct concurrent validity research to determine if the SDI+ has greater predictive

  2. Environmental risk factors of pregnancy outcomes: a summary of recent meta-analyses of epidemiological studies.

    Science.gov (United States)

    Nieuwenhuijsen, Mark J; Dadvand, Payam; Grellier, James; Martinez, David; Vrijheid, Martine

    2013-01-15

    Various epidemiological studies have suggested associations between environmental exposures and pregnancy outcomes. Some studies have attempted to combine information from various epidemiological studies using meta-analysis. We aimed to describe the methodologies used in these recent meta-analyses of environmental exposures and pregnancy outcomes. Furthermore, we aimed to report their main findings. We conducted a bibliographic search with relevant search terms. We obtained and evaluated 16 recent meta-analyses. The number of studies included in each reported meta-analysis varied greatly, with the largest number of studies available for environmental tobacco smoke. Only a small number of the studies reported having followed meta-analysis guidelines or having used a quality rating system. Generally they tested for heterogeneity and publication bias. Publication bias did not occur frequently. The meta-analyses found statistically significant negative associations between environmental tobacco smoke and stillbirth, birth weight and any congenital anomalies; PM2.5 and preterm birth; outdoor air pollution and some congenital anomalies; indoor air pollution from solid fuel use and stillbirth and birth weight; polychlorinated biphenyls (PCB) exposure and birth weight; disinfection by-products in water and stillbirth, small for gestational age and some congenital anomalies; occupational exposure to pesticides and solvents and some congenital anomalies; and Agent Orange and some congenital anomalies. The number of meta-analyses of environmental exposures and pregnancy outcomes is small and they vary in methodology. They reported statistically significant associations between environmental exposures such as environmental tobacco smoke, air pollution and chemicals and pregnancy outcomes.

  3. Environmental risk factors of pregnancy outcomes: a summary of recent meta-analyses of epidemiological studies

    Directory of Open Access Journals (Sweden)

    Nieuwenhuijsen Mark J

    2013-01-01

    Full Text Available Abstract Background Various epidemiological studies have suggested associations between environmental exposures and pregnancy outcomes. Some studies have attempted to combine information from various epidemiological studies using meta-analysis. We aimed to describe the methodologies used in these recent meta-analyses of environmental exposures and pregnancy outcomes. Furthermore, we aimed to report their main findings. Methods We conducted a bibliographic search with relevant search terms. We obtained and evaluated 16 recent meta-analyses. Results The number of studies included in each reported meta-analysis varied greatly, with the largest number of studies available for environmental tobacco smoke. Only a small number of the studies reported having followed meta-analysis guidelines or having used a quality rating system. Generally they tested for heterogeneity and publication bias. Publication bias did not occur frequently. The meta-analyses found statistically significant negative associations between environmental tobacco smoke and stillbirth, birth weight and any congenital anomalies; PM2.5 and preterm birth; outdoor air pollution and some congenital anomalies; indoor air pollution from solid fuel use and stillbirth and birth weight; polychlorinated biphenyls (PCB) exposure and birth weight; disinfection by-products in water and stillbirth, small for gestational age and some congenital anomalies; occupational exposure to pesticides and solvents and some congenital anomalies; and Agent Orange and some congenital anomalies. Conclusions The number of meta-analyses of environmental exposures and pregnancy outcomes is small and they vary in methodology. They reported statistically significant associations between environmental exposures such as environmental tobacco smoke, air pollution and chemicals and pregnancy outcomes.

  4. Lessons for public health campaigns from analysing commercial food marketing success factors: a case study

    Science.gov (United States)

    2012-01-01

    Background Commercial food marketing has considerably shaped consumer food choice behaviour. Meanwhile, public health campaigns for healthier eating have had limited impact to date. Social marketing suggests that successful commercial food marketing campaigns can provide useful lessons for public sector activities. The aim of the present study was to empirically identify food marketing success factors that, using the social marketing approach, could help improve public health campaigns to promote healthy eating. Methods In this case-study analysis, 27 recent and successful commercial food and beverage marketing cases were purposively sampled from different European countries. The cases involved different consumer target groups, product categories, company sizes and marketing techniques. The analysis focused on cases of relatively healthy food types, and nutrition and health-related aspects in the communication related to the food. Visual as well as written material was gathered, complemented by semi-structured interviews with 12 food market trend experts and 19 representatives of food companies and advertising agencies. Success factors were identified by a group of experts who reached consensus through discussion structured by a card sorting method. Results Six clusters of success factors emerged from the analysis and were labelled as "data and knowledge", "emotions", "endorsement", "media", "community" and "why and how". Each cluster subsumes two or three success factors and is illustrated by examples. In total, 16 factors were identified. It is argued that the factors "nutritional evidence", "trend awareness", "vertical endorsement", "simple naturalness" and "common values" are of particular importance in the communication of health with regard to food. Conclusions The present study identified critical factors for the success of commercial food marketing campaigns related to the issue of nutrition and health, which are possibly transferable to the public health

  5. Lessons for public health campaigns from analysing commercial food marketing success factors: a case study.

    Science.gov (United States)

    Aschemann-Witzel, Jessica; Perez-Cueto, Federico J A; Niedzwiedzka, Barbara; Verbeke, Wim; Bech-Larsen, Tino

    2012-02-21

    Commercial food marketing has considerably shaped consumer food choice behaviour. Meanwhile, public health campaigns for healthier eating have had limited impact to date. Social marketing suggests that successful commercial food marketing campaigns can provide useful lessons for public sector activities. The aim of the present study was to empirically identify food marketing success factors that, using the social marketing approach, could help improve public health campaigns to promote healthy eating. In this case-study analysis, 27 recent and successful commercial food and beverage marketing cases were purposively sampled from different European countries. The cases involved different consumer target groups, product categories, company sizes and marketing techniques. The analysis focused on cases of relatively healthy food types, and nutrition and health-related aspects in the communication related to the food. Visual as well as written material was gathered, complemented by semi-structured interviews with 12 food market trend experts and 19 representatives of food companies and advertising agencies. Success factors were identified by a group of experts who reached consensus through discussion structured by a card sorting method. Six clusters of success factors emerged from the analysis and were labelled as "data and knowledge", "emotions", "endorsement", "media", "community" and "why and how". Each cluster subsumes two or three success factors and is illustrated by examples. In total, 16 factors were identified. It is argued that the factors "nutritional evidence", "trend awareness", "vertical endorsement", "simple naturalness" and "common values" are of particular importance in the communication of health with regard to food. The present study identified critical factors for the success of commercial food marketing campaigns related to the issue of nutrition and health, which are possibly transferable to the public health sector. Whether or not a particular

  6. Demographic, socioeconomic, and behavioral factors affecting patterns of tooth decay in the permanent dentition: principal components and factor analyses.

    Science.gov (United States)

    Shaffer, John R; Polk, Deborah E; Feingold, Eleanor; Wang, Xiaojing; Cuenco, Karen T; Weeks, Daniel E; DeSensi, Rebecca S; Weyant, Robert J; Crout, Richard; McNeil, Daniel W; Marazita, Mary L

    2013-08-01

    Dental caries of the permanent dentition is a multifactorial disease resulting from the complex interplay of endogenous and environmental risk factors. The disease is not easily quantitated due to the innumerable possible combinations of carious lesions across individual tooth surfaces of the permanent dentition. Global measures of decay, such as the DMFS index (which was developed for surveillance applications), may not be optimal for studying the epidemiology of dental caries because they ignore the distinct patterns of decay across the dentition. We hypothesize that specific risk factors may manifest their effects on specific tooth surfaces leading to patterns of decay that can be identified and studied. In this study, we utilized two statistical methods of extracting patterns of decay from surface-level caries data to create novel phenotypes with which to study the risk factors affecting dental caries. Intra-oral dental examinations were performed on 1068 participants aged 18-75 years to assess dental caries. The 128 tooth surfaces of the permanent dentition were scored as carious or not and used as input for principal components analysis (PCA) and factor analysis (FA), two methods of identifying underlying patterns without a priori knowledge of the patterns. Demographic (age, sex, birth year, race/ethnicity, and educational attainment), anthropometric (height, body mass index, waist circumference), endogenous (saliva flow), and environmental (tooth brushing frequency, home water source, and home water fluoride) risk factors were tested for association with the caries patterns identified by PCA and FA, as well as DMFS, for comparison. The ten strongest patterns (i.e. those that explain the most variation in the data set) extracted by PCA and FA were considered. The three strongest patterns identified by PCA reflected (i) global extent of decay (i.e. comparable to DMFS index), (ii) pit and fissure surface caries and (iii) smooth surface caries, respectively. The
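
    The PCA step can be sketched as below: the surface-level caries indicators form a participants-by-surfaces binary matrix, and the component scores serve as the pattern-based phenotypes. The simulated matrix and the choice of ten components are illustrative assumptions; the study's actual data and loadings are not reproduced:

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical binary matrix: rows = participants, columns = tooth surfaces
      # (1 = carious, 0 = sound).  The real study used 128 surfaces and n = 1068.
      rng = np.random.default_rng(4)
      n_participants, n_surfaces = 200, 128
      caries = (rng.random((n_participants, n_surfaces)) < 0.15).astype(float)

      pca = PCA(n_components=10)            # retain the ten strongest patterns
      scores = pca.fit_transform(caries)    # per-participant pattern scores (novel phenotypes)

      print("variance explained by each pattern:", np.round(pca.explained_variance_ratio_, 3))
      # scores[:, 0] would then be tested for association with risk factors
      # (age, sex, brushing frequency, water fluoride, ...) in place of DMFS.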

  7. Analyses of Digman's child-personality data: derivation of Big-Five factor scores from each of six samples.

    Science.gov (United States)

    Goldberg, L R

    2001-10-01

    One of the world's richest collections of teacher descriptions of elementary-school children was obtained by John M. Digman from 1959 to 1967 in schools on two Hawaiian islands. In six phases of data collection, 88 teachers described 2,572 of their students, using one of five different sets of personality variables. The present report provides findings from new analyses of these important data, which have never before been analyzed in a comprehensive manner. When factors developed from carefully selected markers of the Big-Five factor structure were compared to those based on the total set of variables in each sample, the congruence between both types of factors was quite high. Attempts to extend the structure to 6 and 7 factors revealed no other broad factors beyond the Big Five in any of the 6 samples. These robust findings provide significant new evidence for the structure of teacher-based assessments of child personality attributes.

  8. Employing the Five-Factor Mentoring Instrument: Analysing Mentoring Practices for Teaching Primary Science

    Science.gov (United States)

    Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse

    2009-01-01

    Primary science education is a concern around the world and quality mentoring within schools can develop pre-service teachers' practices. A five-factor model for mentoring has been identified, namely, personal attributes, system requirements, pedagogical knowledge, modelling, and feedback. Final-year pre-service teachers (mentees, n = 211) from…

  9. Comparative Factor Analyses of the Personal Attributes Questionnaire and the Bem Sex-Role Inventory.

    Science.gov (United States)

    Antill, John K.; Cunningham, John D.

    1982-01-01

    Compared the Personal Attributes Questionnaire (PAQ) and the Bem Sex Role Inventory (BSRI) as measures of androgyny. Results showed that femininity (Concern for Others) and masculinity (Dominance) accounted for most of the variance, but for the PAQ, clusters of male- and female-valued items (i.e., Extroversion and Insecurity) formed subsidiary factors.…

  10. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
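
    The first three layers of this hierarchy of tests can be illustrated with a small sketch: a nonmonotonic input-output relation yields near-zero Pearson and Spearman coefficients, while the Kruskal-Wallis test across bins of the input still flags the variable as important. The sample below is simulated and is not from the two-phase fluid flow model:

      import numpy as np
      from scipy import stats

      # Illustrative sensitivity-analysis sample: one input variable x from a Latin
      # hypercube design and a model output y (hypothetical nonmonotonic relation)
      rng = np.random.default_rng(5)
      x = rng.random(300)
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 300)

      pearson_r, _ = stats.pearsonr(x, y)        # (1) linear relationship
      spearman_r, _ = stats.spearmanr(x, y)      # (2) monotonic relationship

      # (3) trend in central tendency: Kruskal-Wallis across quintile bins of x
      bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
      groups = [y[bins == b] for b in np.unique(bins)]
      kw_stat, kw_p = stats.kruskal(*groups)

      print(f"Pearson r = {pearson_r:.2f}, Spearman rho = {spearman_r:.2f}")
      print(f"Kruskal-Wallis H = {kw_stat:.1f}, p = {kw_p:.4f}")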

  11. Modeling Seasonal Influenza Transmission and Its Association with Climate Factors in Thailand Using Time-Series and ARIMAX Analyses.

    Science.gov (United States)

    Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin

    2015-01-01

    Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
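
    A minimal sketch of fitting such an ARIMAX model with statsmodels is given below. The monthly series are simulated placeholders, and only the 4-month temperature lag reported for the central region is included; the paper's actual model orders and humidity terms are not reproduced:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # Hypothetical monthly series of influenza case counts and average temperature
      rng = np.random.default_rng(6)
      months = pd.date_range("2010-01-01", periods=96, freq="MS")
      temperature = 28 + 3 * np.sin(2 * np.pi * np.arange(96) / 12) + rng.normal(0, 0.5, 96)
      cases = 100 + 10 * np.roll(temperature, 4) + rng.normal(0, 5, 96)

      df = pd.DataFrame({"cases": cases, "temperature": temperature}, index=months)
      df["temp_lag4"] = df["temperature"].shift(4)   # temperature lagged by 4 months
      df = df.dropna()

      # ARIMAX(1,0,1) with the 4-month-lagged temperature as an exogenous regressor
      model = SARIMAX(df["cases"], exog=df[["temp_lag4"]], order=(1, 0, 1))
      result = model.fit(disp=False)
      print(result.params)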

  12. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    Science.gov (United States)

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.
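
    The core problem can be illustrated with a back-of-the-envelope calculation (this is not the AKDE estimator itself, and the timescale below is an assumed value): with autocorrelated fixes, the number of effectively independent locations is roughly the tracking duration divided by the position autocorrelation timescale, and feeding the raw fix count into a conventional bandwidth rule shrinks the kernels and hence the estimated home range:

      import numpy as np

      n_fixes = 5000                 # relocations collected every hour (illustrative)
      sampling_interval_h = 1.0
      tau_position_h = 72.0          # assumed position autocorrelation timescale

      duration_h = n_fixes * sampling_interval_h
      n_effective = duration_h / tau_position_h      # rough count of independent positions
      print(f"raw n = {n_fixes}, effective n = {n_effective:.0f}")

      # Silverman-style bandwidth scaling in two dimensions: h ~ n^(-1/6)
      shrinkage_if_raw_n_used = (n_fixes / n_effective) ** (-1.0 / 6.0)
      print(f"using raw n shrinks the bandwidth by a factor of {shrinkage_if_raw_n_used:.2f}, "
            "which underestimates the home range")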

  13. Lessons for public health campaigns from analysing commercial food marketing success factors

    DEFF Research Database (Denmark)

    Aschemann-Witzel, Jessica; JA Perez-Cueto, Federico; Niedzwiedzka, Barbara

    2012-01-01

    Background: Commercial food marketing has considerably shaped consumer food choice behaviour. Meanwhile, public health campaigns for healthier eating have had limited impact to date. Social marketing suggests that successful commercial food marketing campaigns can provide useful lessons for public...... sector activities. The aim of the present study was to empirically identify food marketing success factors that, using the social marketing approach, could help improve public health campaigns to promote healthy eating. Methods: In this case-study analysis, 27 recent and successful commercial food...... in the communication related to the food. Visual as well as written material was gathered, complemented by semi-structured interviews with 12 food market trend experts and 19 representatives of food companies and advertising agencies. Success factors were identified by a group of experts who reached consensus through...

  14. Monte Carlo analyses of the source multiplication factor of the YALINA booster facility

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto; Gohar, Y.; Kondev, F.; Aliberti, Gerardo [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Bolshinsky, I. [Idaho National Laboratory, P. O. Box 2528, Idaho Falls, Idaho 83403 (United States); Kiyavitskaya, Hanna; Bournos, Victor; Fokov, Yury; Routkovskaya, Christina; Serafimovich, Ivan [Joint Institute for Power and Nuclear Research-Sosny, National Academy of Sciences, Minsk, acad. Krasin, 99, 220109 (Belarus)

    2008-07-01

    The multiplication factor of a subcritical assembly is affected by the energy spectrum and spatial distribution of the neutron source. In a critical assembly, neutrons emerge from the fission reactions with an average energy of approximately 2 MeV; in a deuteron accelerator driven subcritical assembly, neutrons emerge from the fusion target with a fixed energy of 2.45 or 14.1 MeV, from the Deuterium-Deuterium (D-D) and Deuterium-Tritium (D-T) reactions, respectively. This study aims at generating accurate neutronics models for the YALINA Booster facility, based on the use of different Monte Carlo neutron transport codes, at defining the facility key physical parameters, and at comparing the neutron multiplication factor for three different neutron sources: fission, D-D and D-T. The calculated values are compared with the experimental results. (authors)

  15. Monte Carlo analyses of the source multiplication factor of the YALINA booster facility

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Kondev, F.; Aliberti, Gerardo; Bolshinsky, I.; Kiyavitskaya, Hanna; Bournos, Victor; Fokov, Yury; Routkovskaya, Christina; Serafimovich, Ivan

    2008-01-01

    The multiplication factor of a subcritical assembly is affected by the energy spectrum and spatial distribution of the neutron source. In a critical assembly, neutrons emerge from the fission reactions with an average energy of ∼2 MeV; in a deuteron accelerator driven subcritical assembly, neutrons emerge from the fusion target with a fixed energy of 2.45 or 14.1 MeV, from the Deuterium-Deuterium (D-D) and Deuterium-Tritium (D-T) reactions respectively. This study aims at generating accurate neutronics models for the YALINA Booster facility, based on the use of different Monte Carlo neutron transport codes, at defining the facility key physical parameters, and at comparing the neutron multiplication factor for three different neutron sources: fission, D-D and D-T. The calculated values are compared with the experimental results. (authors)

  16. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students’ perception of the teachers’ activity in terms of the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined evaluation model based on data mining statistical methods: starting from the results of the teachers’ assessments of students, and using cluster analysis and discriminant analysis, we identified the subjects that produced significant differences between students’ grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  17. Heritable patterns of tooth decay in the permanent dentition: principal components and factor analyses.

    Science.gov (United States)

    Shaffer, John R; Feingold, Eleanor; Wang, Xiaojing; Tcuenco, Karen T; Weeks, Daniel E; DeSensi, Rebecca S; Polk, Deborah E; Wendell, Steve; Weyant, Robert J; Crout, Richard; McNeil, Daniel W; Marazita, Mary L

    2012-03-09

    Dental caries is the result of a complex interplay among environmental, behavioral, and genetic factors, with distinct patterns of decay likely due to specific etiologies. Therefore, global measures of decay, such as the DMFS index, may not be optimal for identifying risk factors that manifest as specific decay patterns, especially if the risk factors such as genetic susceptibility loci have small individual effects. We used two methods to extract patterns of decay from surface-level caries data in order to generate novel phenotypes with which to explore the genetic regulation of caries. The 128 tooth surfaces of the permanent dentition were scored as carious or not by intra-oral examination for 1,068 participants aged 18 to 75 years from 664 biological families. Principal components analysis (PCA) and factor analysis (FA), two methods of identifying underlying patterns without a priori surface classifications, were applied to our data. The three strongest caries patterns identified by PCA recaptured variation represented by DMFS index (correlation, r = 0.97), pit and fissure surface caries (r = 0.95), and smooth surface caries (r = 0.89). However, together, these three patterns explained only 37% of the variability in the data, indicating that a priori caries measures are insufficient for fully quantifying caries variation. In comparison, the first pattern identified by FA was strongly correlated with pit and fissure surface caries (r = 0.81), but other identified patterns, including a second pattern representing caries of the maxillary incisors, were not representative of any previously defined caries indices. Some patterns identified by PCA and FA were heritable (h(2) = 30-65%, p = 0.043-0.006), whereas other patterns were not, indicating both genetic and non-genetic etiologies of individual decay patterns. This study demonstrates the use of decay patterns as novel phenotypes to assist in understanding the multifactorial nature of dental caries.
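
    As a minimal sketch of the two pattern-extraction methods named above, the snippet below runs PCA and factor analysis on a simulated binary tooth-surface matrix and correlates the first extracted pattern with a simple decayed-surface count; the data and the three-component choice are illustrative assumptions, not the study's records.

        # Hedged sketch: extract decay patterns with PCA and factor analysis (simulated data).
        import numpy as np
        from numpy.random import default_rng
        from scipy.stats import pearsonr
        from sklearn.decomposition import PCA, FactorAnalysis

        rng = default_rng(2)
        n_subjects, n_surfaces = 300, 128
        risk = rng.beta(2, 5, size=n_subjects)                 # per-subject caries risk
        X = (rng.random((n_subjects, n_surfaces)) < risk[:, None]).astype(float)

        pca_scores = PCA(n_components=3).fit_transform(X)
        fa_scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

        dmfs_like = X.sum(axis=1)                              # crude decayed-surface count
        for name, scores in [("PCA pattern 1", pca_scores[:, 0]),
                             ("FA pattern 1", fa_scores[:, 0])]:
            r = abs(pearsonr(scores, dmfs_like)[0])            # sign of a component is arbitrary
            print(f"{name} vs surface count: |r| = {r:.2f}")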

  18. Lessons for public health campaigns from analysing commercial food marketing success factors: a case study

    OpenAIRE

    Aschemann-Witzel, Jessica; Perez-Cueto, Federico JA; Niedzwiedzka, Barbara; Verbeke, Wim; Bech-Larsen, Tino

    2012-01-01

    Abstract Background Commercial food marketing has considerably shaped consumer food choice behaviour. Meanwhile, public health campaigns for healthier eating have had limited impact to date. Social marketing suggests that successful commercial food marketing campaigns can provide useful lessons for public sector activities. The aim of the present study was to empirically identify food marketing success factors that, using the social marketing approach, could help improve public health campaig...

  19. Psychometric Properties of the Heart Disease Knowledge Scale: Evidence from Item and Confirmatory Factor Analyses.

    Science.gov (United States)

    Lim, Bee Chiu; Kueh, Yee Cheng; Arifin, Wan Nor; Ng, Kok Huan

    2016-07-01

    Heart disease knowledge is an important concept for health education, yet there is a lack of evidence on properly validated instruments used to measure levels of heart disease knowledge in the Malaysian context. A cross-sectional survey was conducted to examine the psychometric properties of the adapted English version of the Heart Disease Knowledge Questionnaire (HDKQ). Using proportionate cluster sampling, 788 undergraduate students at Universiti Sains Malaysia, Malaysia, were recruited and completed the HDKQ. Item analysis and confirmatory factor analysis (CFA) were used for the psychometric evaluation. Construct validity of the measurement model was also examined. Most of the students were Malay (48%), female (71%), and from the field of science (51%). An acceptable range was obtained with respect to both the difficulty and discrimination indices in the item analysis results: the difficulty index ranged from 0.12 to 0.91, and discrimination indices of ≥ 0.20 were reported for the 23 retained items. The final CFA model showed an adequate fit to the data, yielding a 23-item, one-factor model [weighted least squares mean and variance adjusted scaled chi-square difference = 1.22, degrees of freedom = 2, P-value = 0.544, root mean square error of approximation = 0.03 (90% confidence interval = 0.03, 0.04); close-fit P-value > 0.950]. Adequate psychometric values were obtained for Malaysian undergraduate university students using the 23-item, one-factor model of the adapted HDKQ.
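
    A small sketch of the classical item-analysis statistics reported above, computed on simulated dichotomous responses: the difficulty index as the proportion answering correctly and the discrimination index as the corrected (rest-score) item-total correlation. The sample size and item count loosely mirror the abstract, but the data are synthetic.

        # Hedged sketch: item difficulty and discrimination indices on simulated 0/1 responses.
        import numpy as np

        rng = np.random.default_rng(3)
        n_persons, n_items = 788, 23
        ability = rng.normal(size=n_persons)
        easiness = rng.uniform(-1.5, 1.5, size=n_items)
        prob = 1 / (1 + np.exp(-(ability[:, None] + easiness[None, :])))
        responses = (rng.random((n_persons, n_items)) < prob).astype(int)

        difficulty = responses.mean(axis=0)          # proportion correct per item
        total = responses.sum(axis=1)
        discrimination = np.array([
            np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]  # rest-score correlation
            for j in range(n_items)
        ])
        print("difficulty range:", difficulty.min().round(2), "-", difficulty.max().round(2))
        print("items with discrimination >= 0.20:", int((discrimination >= 0.20).sum()))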

  20. Adaptive endpoint detection of seismic signal based on auto-correlated function

    International Nuclear Information System (INIS)

    Fan Wanchun; Shi Ren

    2000-01-01

    Endpoint detection by the time-waveform envelope and/or by checking the travel table (both labelled here as artificial detection methods) has certain shortcomings. Based on an analysis of the auto-correlation function, the notion of the distance between auto-correlation functions was introduced, and this distance was used to characterize noise and signal-plus-noise segments. On this basis, an adaptive method for endpoint detection of seismic signals based on auto-correlation similarity was formulated. The implementation steps and the determination of the thresholds are presented in detail. Experimental results, compared with those of the artificial detection methods, show that the proposed method has higher sensitivity even at low SNR
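
    A hedged sketch of the general idea described above: a reference autocorrelation function is estimated from a leading noise segment, the distance between it and the autocorrelation function of each sliding window is tracked, and the onset is flagged where that distance first exceeds a threshold derived from the early windows. Window length, lag count, and the threshold rule are illustrative assumptions, not the paper's values.

        # Hedged sketch: onset detection from distances between windowed autocorrelation functions.
        import numpy as np

        def normalized_acf(x, max_lag):
            x = x - x.mean()
            acf = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + max_lag + 1]
            return acf / acf[0]

        rng = np.random.default_rng(4)
        fs, n = 100, 3000
        signal = rng.normal(0, 1, n)                               # background noise
        onset = 1800
        t = np.arange(n - onset) / fs
        signal[onset:] += 5 * np.sin(2 * np.pi * 3 * t) * np.exp(-t)   # damped arrival

        win, max_lag = 200, 50
        ref_acf = normalized_acf(signal[:win], max_lag)            # reference ACF from leading noise

        starts = list(range(0, n - win, win // 2))
        distances = np.array([np.linalg.norm(normalized_acf(signal[s:s + win], max_lag) - ref_acf)
                              for s in starts])

        threshold = distances[:5].mean() + 3 * distances[:5].std() # adaptive threshold from early windows
        detected = next((s for s, d in zip(starts, distances) if d > threshold), None)
        print(f"true onset sample: {onset}, detected near sample: {detected}")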

  1. Spectral Velocity Estimation using the Autocorrelation Function and Sparse data Sequences

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2005-01-01

    Ultrasound scanners can be used for displaying the distribution of velocities in blood vessels by finding the power spectrum of the received signal. It is desired to show a B-mode image for orientation and data for this has to be acquired interleaved with the flow data. Techniques for maintaining...... both the B-mode frame rate, and at the same time have the highest possible $f_{prf}$ only limited by the depth of investigation, are, thus, of great interest. The power spectrum can be calculated from the Fourier transform of the autocorrelation function $R_r(k)$. The lag $k$ corresponds...... of the sequence. The audio signal has also been synthesized from the autocorrelation data by passing white, Gaussian noise through a filter designed from the power spectrum of the autocorrelation function. The results show that both the full velocity range can be maintained at the same time as a B-mode image...
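
    A minimal sketch of the stated principle that the power spectrum follows from the Fourier transform of the autocorrelation function R(k): complex slow-time samples with a single Doppler shift are generated, R(k) is estimated at the available lags, and its FFT gives the Doppler power spectrum. The pulse repetition frequency and the signal model are assumptions for illustration, not the paper's interleaved acquisition scheme.

        # Hedged sketch: power spectrum as the FFT of the estimated autocorrelation R(k).
        import numpy as np

        rng = np.random.default_rng(5)
        f_prf = 5000.0                      # pulse repetition frequency [Hz] (assumed)
        n = 128                             # slow-time samples at one depth
        f_doppler = 800.0                   # true Doppler shift [Hz]
        t = np.arange(n) / f_prf
        x = (np.exp(2j * np.pi * f_doppler * t)
             + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

        # Biased autocorrelation estimate for lags k = 0 .. n-1
        R = np.array([np.mean(x[k:] * np.conj(x[:n - k])) for k in range(n)])

        power = np.abs(np.fft.fft(R))
        freqs = np.fft.fftfreq(n, d=1.0 / f_prf)
        print(f"spectral peak at {freqs[np.argmax(power)]:.0f} Hz (true {f_doppler:.0f} Hz)")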

  2. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    Science.gov (United States)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications, and other wireless communication systems. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on the detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm, and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
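
    A hedged sketch of an autocorrelation-energy test statistic for spectrum sensing: the energy of the first few normalized autocorrelation lags of the received signal is compared with a threshold calibrated from noise-only Monte Carlo runs. The lag count, the correlated primary-user model, and the 5% false-alarm target are illustrative assumptions, not the paper's exact algorithm.

        # Hedged sketch: spectrum sensing with an autocorrelation-energy statistic (simulated).
        import numpy as np

        def autocorr_energy(x, n_lags=8):
            x = x - x.mean()
            denom = np.sum(np.abs(x) ** 2)
            lags = [np.abs(np.sum(x[k:] * np.conj(x[:len(x) - k]))) / denom
                    for k in range(1, n_lags + 1)]
            return float(np.sum(np.square(lags)))

        rng = np.random.default_rng(6)
        n, snr_db = 2048, -8
        noise_power = 1.0
        signal_power = noise_power * 10 ** (snr_db / 10)

        # Threshold from noise-only runs, targeting roughly 5% false alarms
        null_stats = [autocorr_energy(rng.normal(0, np.sqrt(noise_power), n)) for _ in range(500)]
        threshold = np.quantile(null_stats, 0.95)

        # Primary-user signal modelled as a correlated (moving-average filtered) waveform plus noise
        pu = np.convolve(rng.normal(size=n), np.ones(8) / np.sqrt(8), mode="same")
        received = np.sqrt(signal_power) * pu + rng.normal(0, np.sqrt(noise_power), n)
        stat = autocorr_energy(received)
        print("statistic:", round(stat, 4), "threshold:", round(threshold, 4),
              "-> detected" if stat > threshold else "-> not detected")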

  3. Recursive Estimation for Dynamical Systems with Different Delay Rates Sensor Network and Autocorrelated Process Noises

    Directory of Open Access Journals (Sweden)

    Jianxin Feng

    2014-01-01

    Full Text Available The recursive estimation problem is studied for a class of uncertain dynamical systems observed through a sensor network with different delay rates and subject to autocorrelated process noises. The process noises are assumed to be autocorrelated across time, and the autocorrelation property is described by the covariances between different time instants. The system model under consideration is subject to multiplicative noises or stochastic uncertainties. The sensor delay phenomenon occurs randomly, and each sensor in the network has an individual delay rate characterized by a binary switching sequence obeying a conditional probability distribution. Using the orthogonal projection theorem and an innovation analysis approach, the desired recursive robust estimators, including the recursive robust filter, predictor, and smoother, are obtained. Simulation results are provided to demonstrate the effectiveness of the proposed approaches.

  4. Nanoscale and femtosecond optical autocorrelator based on a single plasmonic nanostructure

    International Nuclear Information System (INIS)

    Melentiev, P N; Afanasiev, A E; Balykin, V I; Tausenev, A V; Konyaschenko, A V; Klimov, V V

    2014-01-01

    We demonstrated a nanoscale size, ultrafast and multiorder optical autocorrelator with a single plasmonic nanostructure for measuring the spatio-temporal dynamics of femtosecond laser light. As a nanostructure, we use a split hole resonator (SHR), which was made in an aluminium nanofilm. The Al material yields the fastest response time (100 as). The SHR nanostructure ensures a high nonlinear optical efficiency of the interaction with laser radiation, which leads to (1) the second, (2) the third harmonics generation and (3) the multiphoton luminescence, which, in turn, are used to perform multi-order autocorrelation measurements. The nano-sized SHR makes it possible to conduct autocorrelation measurements (i) with a subwavelength spatial resolution and (ii) with no significant influence on the duration of the laser pulse. The time response realized by the SHR nanostructure is about 10 fs. (letter)

  5. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 of 11 variables. In a first attempt to explore multicollinearity, we used global QL as the dependent variable in a regression model with the other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses, and since global QL exacerbates problems of multicollinearity, we recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
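
    One of the standard multicollinearity diagnostics mentioned above is the variance inflation factor; the sketch below computes VIFs for a few simulated, deliberately correlated quality-of-life subscales (the subscale names are placeholders, not the QLQ-C30 items).

        # Hedged sketch: variance inflation factors for correlated predictors (simulated data).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(7)
        n = 400
        fatigue = rng.normal(size=n)
        pain = 0.6 * fatigue + rng.normal(scale=0.8, size=n)
        physical = -0.7 * fatigue + rng.normal(scale=0.7, size=n)
        global_ql = 0.5 * physical - 0.4 * fatigue - 0.3 * pain + rng.normal(scale=0.5, size=n)

        X = sm.add_constant(pd.DataFrame({"fatigue": fatigue, "pain": pain,
                                          "physical": physical, "global_ql": global_ql}))
        for i, col in enumerate(X.columns):
            if col != "const":
                print(f"VIF({col}) = {variance_inflation_factor(X.values, i):.2f}")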

  6. Metabolomics analyses identify platelet activating factors and heme breakdown products as Lassa fever biomarkers.

    Directory of Open Access Journals (Sweden)

    Trevor V Gale

    2017-09-01

    Full Text Available Lassa fever afflicts tens of thousands of people in West Africa annually. The rapid progression of patients from febrile illness to fulminant syndrome and death provides incentive for development of clinical prognostic markers that can guide case management. The small molecule profile of serum from febrile patients triaged to the Viral Hemorrhagic Fever Ward at Kenema Government Hospital in Sierra Leone was assessed using untargeted Ultra High Performance Liquid Chromatography Mass Spectrometry. Physiological dysregulation resulting from Lassa virus (LASV infection occurs at the small molecule level. Effects of LASV infection on pathways mediating blood coagulation, and lipid, amino acid, nucleic acid metabolism are manifest in changes in the levels of numerous metabolites in the circulation. Several compounds, including platelet activating factor (PAF, PAF-like molecules and products of heme breakdown emerged as candidates that may prove useful in diagnostic assays to inform better care of Lassa fever patients.

  7. Performance study of K{sub e} factors in simplified elastic plastic fatigue analyses with emphasis on thermal cyclic loading

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Hermann, E-mail: hermann.lang@areva.com [AREVA NP GmbH, PEEA-G, Henri-Dunant-Strasse 50, 91058 Erlangen (Germany); Rudolph, Juergen; Ziegler, Rainer [AREVA NP GmbH, PEEA-G, Henri-Dunant-Strasse 50, 91058 Erlangen (Germany)

    2011-08-15

    As code-based fully elastic plastic code conforming fatigue analyses are still time consuming, simplified elastic plastic analysis is often applied. This procedure is known to be overly conservative for some conditions due to the applied plastification (penalty) factor K{sub e}. As a consequence, less conservative fully elastic plastic fatigue analyses based on non-linear finite element analyses (FEA) or simplified elastic plastic analysis based on more realistic K{sub e} factors have to be used for fatigue design. The demand for more realistic K{sub e} factors is covered as a requirement of practical fatigue analysis. Different code-based K{sub e} procedures are reviewed in this paper with special regard to performance under thermal cyclic loading conditions. Other approximation formulae such as those by Neuber, Seeger/Beste or Kuehnapfel are not evaluated in this context because of their applicability to mechanical loading excluding thermal cyclic loading conditions typical for power plant operation. Besides the current code-based K{sub e} corrections, the ASME Code Case N-779 (e.g. Adam's proposal) and its modification in ASME Section VIII is considered. Comparison of elastic plastic results and results from the Rules for Nuclear Facility Components and Rules for Pressure Vessels reveals a considerable overestimation of usage factor in the case of ASME III and KTA 3201.2 for the examined examples. Usage factors according to RCC-M, Adams (ASME Code Case N-779), ASME VIII (alternative) and EN 13445-3 are essentially comparable and less conservative for these examples. The K{sub v} correction as well as the applied yield criterion (Tresca or von Mises) essentially influence the quality of the more advanced plasticity corrections (e.g. ASME Code Case N-779 and RCC-M). Hence, new proposals are based on a refined K{sub v} correction.

  8. Performance study of Ke factors in simplified elastic plastic fatigue analyses with emphasis on thermal cyclic loading

    International Nuclear Information System (INIS)

    Lang, Hermann; Rudolph, Juergen; Ziegler, Rainer

    2011-01-01

    As code-based fully elastic plastic code conforming fatigue analyses are still time consuming, simplified elastic plastic analysis is often applied. This procedure is known to be overly conservative for some conditions due to the applied plastification (penalty) factor Ke. As a consequence, less conservative fully elastic plastic fatigue analyses based on non-linear finite element analyses (FEA) or simplified elastic plastic analysis based on more realistic Ke factors have to be used for fatigue design. The demand for more realistic Ke factors is covered as a requirement of practical fatigue analysis. Different code-based Ke procedures are reviewed in this paper with special regard to performance under thermal cyclic loading conditions. Other approximation formulae such as those by Neuber, Seeger/Beste or Kuehnapfel are not evaluated in this context because of their applicability to mechanical loading excluding thermal cyclic loading conditions typical for power plant operation. Besides the current code-based Ke corrections, the ASME Code Case N-779 (e.g. Adam's proposal) and its modification in ASME Section VIII is considered. Comparison of elastic plastic results and results from the Rules for Nuclear Facility Components and Rules for Pressure Vessels reveals a considerable overestimation of usage factor in the case of ASME III and KTA 3201.2 for the examined examples. Usage factors according to RCC-M, Adams (ASME Code Case N-779), ASME VIII (alternative) and EN 13445-3 are essentially comparable and less conservative for these examples. The Kv correction as well as the applied yield criterion (Tresca or von Mises) essentially influence the quality of the more advanced plasticity corrections (e.g. ASME Code Case N-779 and RCC-M). Hence, new proposals are based on a refined Kv correction.

  9. Evaluation of a modified 16-item Readiness for Interprofessional Learning Scale (RIPLS): Exploratory and confirmatory factor analyses.

    Science.gov (United States)

    Yu, Tzu-Chieh; Jowsey, Tanisha; Henning, Marcus

    2018-04-18

    The Readiness for Interprofessional Learning Scale (RIPLS) was developed to assess undergraduate readiness for engaging in interprofessional education (IPE). It has become an accepted and commonly used instrument. To determine utility of a modified 16-item RIPLS instrument, exploratory and confirmatory factor analyses were performed. Data used were collected from a pre- and post-intervention study involving 360 New Zealand undergraduate students from one university. Just over half of the participants were enrolled in medicine (51%) while the remainder were in pharmacy (27%) and nursing (22%). The intervention was a two-day simulation-based IPE course focused on managing unplanned acute medical problems in hospital wards ("ward calls"). Immediately prior to the course, 288 RIPLS were collected and immediately afterwards, 322 (response rates 80% and 89%, respectively). Exploratory factor analysis involving principal axis factoring with an oblique rotation method was conducted using pre-course data. The scree plot suggested a three-factor solution over two- and four-factor solutions. Subsequent confirmatory factor analysis performed using post-course data demonstrated partial goodness-of-fit for this suggested three-factor model. Based on these findings, further robust psychometric testing of the RIPLS or modified versions of it is recommended before embarking on its use in evaluative research in various healthcare education settings.

  10. Residential Load Manageability Factor Analyses by Load Sensitivity Affected by Temperature

    Directory of Open Access Journals (Sweden)

    N. Eskandari

    2016-12-01

    Full Text Available Load-side management is the basic principle for keeping the balance between the generation side and the consumption side of electrical power. On a typical medium-voltage feeder, load-side management means controlling the energy consumption of the connected loads through variation of the essential parameters to which those loads react. Knowing how strongly the load reacts to each parameter variation on a typical medium-voltage feeder during the day yields a Load Manageability Factor (LMF) for that specific feeder, which helps power utilities manage their connected loads. Calculating this LMF requires identifying each type of load and its inherent behavior in response to each parameter variation. The results of this paper, together with future work, will help obtain that LMF. In this paper, the behavior of residential load in response to temperature variation is analysed by training an artificial neural network. Load behavior in response to other essential parameters, such as energy price variation, major events, and announcements from the power utility to customers, will be studied in future work. Combining all related results into a single mathematical equation or artificial neural network will yield the LMF.

  11. Applying of factor analyses for determination of trace elements distribution in water from Vardar and its tributaries, Macedonia/Greece.

    Science.gov (United States)

    Popov, Stanko Ilić; Stafilov, Trajče; Sajn, Robert; Tănăselia, Claudiu; Bačeva, Katerina

    2014-01-01

    A systematic study was carried out to investigate the distribution of fifty-six elements in the water samples from river Vardar (Republic of Macedonia and Greece) and its major tributaries. The samples were collected from 27 sampling sites. Analyses were performed by mass spectrometry with inductively coupled plasma (ICP-MS) and atomic emission spectrometry with inductively coupled plasma (ICP-AES). Cluster and R mode factor analysis (FA) was used to identify and characterise element associations and four associations of elements were determined by the method of multivariate statistics. Three factors represent the associations of elements that occur in the river water naturally while Factor 3 represents an anthropogenic association of the elements (Cd, Ga, In, Pb, Re, Tl, Cu, and Zn) introduced in the river waters from the waste waters from the mining and metallurgical activities in the country.

  12. Analyses of Catharanthus roseus and Arabidopsis thaliana WRKY transcription factors reveal involvement in jasmonate signaling.

    Science.gov (United States)

    Schluttenhofer, Craig; Pattanaik, Sitakanta; Patra, Barunava; Yuan, Ling

    2014-06-20

    To combat infection and biotic stress, plants elicit the biosynthesis of numerous natural products, many of which are valuable pharmaceutical compounds. Jasmonate is a central regulator of defense response to pathogens and accumulation of specialized metabolites. Catharanthus roseus produces a large number of terpenoid indole alkaloids (TIAs) and is an excellent model for understanding the regulation of this class of valuable compounds. Recent work illustrates a possible role for the Catharanthus WRKY transcription factors (TFs) in regulating TIA biosynthesis. In Arabidopsis and other plants, the WRKY TF family is also shown to play an important role in controlling tolerance to biotic and abiotic stresses, as well as secondary metabolism. Here, we describe the WRKY TF families in response to jasmonate in Arabidopsis and Catharanthus. Publicly available Arabidopsis microarrays revealed at least 30% (22 of 72) of WRKY TFs respond to jasmonate treatments. Microarray analysis identified at least six jasmonate responsive Arabidopsis WRKY genes (AtWRKY7, AtWRKY20, AtWRKY26, AtWRKY45, AtWRKY48, and AtWRKY72) that have not been previously reported. The Catharanthus WRKY TF family comprises at least 48 members. Phylogenetic clustering reveals 11 group I, 32 group II, and 5 group III WRKY TFs. Furthermore, we found that at least 25% (12 of 48) were jasmonate responsive, and 75% (9 of 12) of the jasmonate responsive CrWRKYs are orthologs of AtWRKYs known to be regulated by jasmonate. Overall, the CrWRKY family, ascertained from transcriptome sequences, contains approximately 75% of the number of WRKYs found in other sequenced asterid species (pepper, tomato, potato, and bladderwort). Microarray and transcriptomic data indicate that expression of WRKY TFs in Arabidopsis and Catharanthus is under tight spatio-temporal and developmental control, and potentially plays a significant role in jasmonate signaling. Profiling of CrWRKY expression in response to jasmonate treatment

  13. Evaluating the factor structure, item analyses, and internal consistency of hospital anxiety and depression scale in Iranian infertile patients

    Directory of Open Access Journals (Sweden)

    Payam Amini

    2017-09-01

    Full Text Available Background: The hospital anxiety and depression scale (HADS) is a common screening tool designed to measure the level of anxiety and depression under different factor structures and has been extensively used in non-psychiatric populations and individuals experiencing fertility problems. Objective: The aims of this study were to evaluate the factor structure, item analyses, and internal consistency of the HADS in Iranian infertile patients. Materials and Methods: This cross-sectional study included 651 infertile patients (248 men and 403 women) referred to a referral infertility center in Tehran, Iran between January 2014 and January 2015. Confirmatory factor analysis was used to determine the underlying factor structure of the HADS among one-, two-, and three-factor models. Several goodness-of-fit indices were utilized, such as the comparative, normed, and goodness-of-fit indices, the Akaike information criterion, and the root mean squared error of approximation. In addition to the HADS, the Satisfaction with Life Scale questionnaire as well as demographic and clinical information were administered to all patients. Results: The goodness-of-fit indices from the CFAs showed that the three-factor and one-factor models provided the best and worst fit, respectively, to the total, male, and female datasets compared with the other factor structure models for the infertile patients. The Cronbach’s alpha values for the anxiety and depression subscales were 0.866 and 0.753, respectively. The HADS subscales correlated significantly with the SWLS, indicating acceptable convergent validity. Conclusion: The HADS was found to be a three-factor structure screening instrument in the field of infertility.

  14. AFM topographies of densely packed nanoparticles: a quick way to determine the lateral size distribution by autocorrelation function analysis

    International Nuclear Information System (INIS)

    Fekete, L.; Kůsová, K.; Petrák, V.; Kratochvílová, I.

    2012-01-01

    The distribution of sizes is one of the basic characteristics of nanoparticles. Here, we propose a novel way to determine the lateral distribution of sizes from AFM topographies. Our algorithm is based on the autocorrelation function and can be applied both to topographies containing spatially separated or densely packed nanoparticles and to topographies of polycrystalline films. As no manual treatment is required, the algorithm can easily be automated for batch processing. The algorithm works in principle with any kind of spatially mapped information (AFM current maps, optical microscope images, etc.), and as such has no size limitations. However, in the case of AFM topographies, the tip/sample convolution effects will be the factor limiting the smallest size to which the algorithm is applicable. Here, we demonstrate the usefulness of this algorithm on objects with sizes ranging between 20 nm and 1.5 μm.
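
    A minimal sketch of the underlying idea, under stated assumptions rather than the authors' exact algorithm: the 2-D autocorrelation of a topography is computed via FFT, and the radius at which the central peak decays to 1/e of its maximum is taken as an estimate of the mean lateral feature size. The synthetic topography, pixel size, and 1/e criterion are all illustrative choices.

        # Hedged sketch: lateral correlation length from the 2-D autocorrelation of a topography.
        import numpy as np

        rng = np.random.default_rng(8)
        npix, pix_size = 256, 4.0          # image size and pixel size in nm (assumed)

        # Synthetic "topography": smoothed random field with features of a few tens of nm
        white = rng.normal(size=(npix, npix))
        kx = np.fft.fftfreq(npix, d=pix_size)          # spatial frequencies in cycles/nm
        KX, KY = np.meshgrid(kx, kx)
        feature_nm = 40.0
        topo = np.real(np.fft.ifft2(np.fft.fft2(white) * np.exp(-(KX**2 + KY**2) * feature_nm**2)))

        z = topo - topo.mean()
        acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(z)) ** 2))   # circular autocorrelation
        acf /= acf[0, 0]                                           # zero lag carries the maximum
        acf = np.fft.fftshift(acf)

        center = npix // 2
        profile = acf[center, center:]                 # radial cut through the central peak
        crossing = np.argmax(profile < 1 / np.e)       # first lag where the ACF drops below 1/e
        print(f"estimated lateral correlation length = {crossing * pix_size:.0f} nm")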

  15. Autocorrelation studies of the arrival directions of UHECRs measured by the surface detector of the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, Stephan

    2011-07-11

    method is illustrated by analysing an anisotropic example Monte Carlo Set. In chapter six the procedure to generate different Monte Carlo maps based on several source scenarios and the corresponding results obtained by applying all methods to the resulting samples are summarized. By studying the behavior of the methods depending on different parameters of the simulations, e.g. source density, source strength, deflection of the UHECRs, one can figure out which method is the most sensitive one and has the highest probability to find a true anisotropy in the arrival directions of UHECRs. In the end, the Cluster Algorithm with a Gaussian weighting gave the best results and was applied to the current data set in chapter seven. This led to a probability on the percent level at an energy of 50 EeV, i.e. with this autocorrelation method no clear deviation from the isotropic expectation has been found. However, the Cluster Algorithm has an advantage compared to the other introduced autocorrelation methods. It contains the directional information of the found clusters which can be used for further analyses. In an additional study we checked whether the region close to Cen A is special in terms of a high abundance of clusters. Indeed, in this region a clear deviation was found. But, since this was an a posteriori analysis, this is no proof for Cen A to be a source. (orig.)

  16. Autocorrelation studies of the arrival directions of UHECRs measured by the surface detector of the Pierre Auger Observatory

    International Nuclear Information System (INIS)

    Schulte, Stephan

    2011-01-01

    method is illustrated by analysing an anisotropic example Monte Carlo Set. In chapter six the procedure to generate different Monte Carlo maps based on several source scenarios and the corresponding results obtained by applying all methods to the resulting samples are summarized. By studying the behavior of the methods depending on different parameters of the simulations, e.g. source density, source strength, deflection of the UHECRs, one can figure out which method is the most sensitive one and has the highest probability to find a true anisotropy in the arrival directions of UHECRs. In the end, the Cluster Algorithm with a Gaussian weighting gave the best results and was applied to the current data set in chapter seven. This led to a probability on the percent level at an energy of 50 EeV, i.e. with this autocorrelation method no clear deviation from the isotropic expectation has been found. However, the Cluster Algorithm has an advantage compared to the other introduced autocorrelation methods. It contains the directional information of the found clusters which can be used for further analyses. In an additional study we checked whether the region close to Cen A is special in terms of a high abundance of clusters. Indeed, in this region a clear deviation was found. But, since this was an a posteriori analysis, this is no proof for Cen A to be a source. (orig.)

  17. Robustness of variance and autocorrelation as indicators of critical slowing down

    NARCIS (Netherlands)

    Dakos, V.; Nes, van E.H.; Odorico, D' P.; Scheffer, M.

    2012-01-01

    Ecosystems close to a critical threshold lose resilience, in the sense that perturbations can more easily push them into an alternative state. Recently, it has been proposed that such loss of resilience may be detected from elevated autocorrelation and variance in the fluctuations of the state of an
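
    A short sketch of the two leading indicators discussed in this record: variance and lag-1 autocorrelation computed in a rolling window over a simulated time series whose AR(1) coefficient drifts toward 1 (i.e., toward critical slowing down). The window length and the synthetic process are assumptions.

        # Hedged sketch: rolling variance and lag-1 autocorrelation as early-warning indicators.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(9)
        n = 1000
        phi = np.linspace(0.2, 0.97, n)        # AR(1) coefficient drifting toward 1
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi[t] * x[t - 1] + rng.normal(scale=0.5)

        s = pd.Series(x)
        window = 200
        rolling_var = s.rolling(window).var()
        rolling_ac1 = s.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False)

        print("variance early vs late:", round(rolling_var.iloc[window], 3),
              round(rolling_var.iloc[-1], 3))
        print("lag-1 autocorrelation early vs late:", round(rolling_ac1.iloc[window], 3),
              round(rolling_ac1.iloc[-1], 3))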

  18. Partial autocorrelation functions of the fractional ARIMA processes with negative degree of differencing

    OpenAIRE

    Inoue, Akihiko; Kasahara, Yukio

    2004-01-01

    Let {Xn : n∈Z} be a fractional ARIMA(p,d,q) process with partial autocorrelation function α(·). In this paper, we prove that if d∈(−1/2,0), then |α(n)| ~ |d|/n as n→∞. This extends the previous result for the case 0<d<1/2.

  19. Limit theory for the sample autocorrelations and extremes of a GARCH (1,1) process

    NARCIS (Netherlands)

    Mikosch, T; Starica, C

    2000-01-01

    The asymptotic theory for the sample autocorrelations and extremes of a GARCH(1,1) process is provided. Special attention is given to the case when the sum of the ARCH and GARCH parameters is close to 1, that is, when one is close to an infinite-variance marginal distribution. This situation has

  20. Panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable

    NARCIS (Netherlands)

    Elhorst, J. Paul

    2001-01-01

    This paper surveys panel data models extended to spatial error autocorrelation or a spatially lagged dependent variable. In particular, it focuses on the specification and estimation of four panel data models commonly used in applied research: the fixed effects model, the random effects model, the

  1. Crude oil market efficiency and modeling. Insights from the multiscaling autocorrelation pattern

    International Nuclear Information System (INIS)

    Alvarez-Ramirez, Jose; Alvarez, Jesus; Solis, Ricardo

    2010-01-01

    Empirical research on market inefficiencies focuses on the detection of autocorrelations in price time series. In the case of crude oil markets, statistical support is claimed for weak efficiency over a wide range of time-scales. However, the results are still controversial since theoretical arguments point to deviations from efficiency as prices tend to revert towards an equilibrium path. This paper studies the efficiency of crude oil markets by using lagged detrended fluctuation analysis (DFA) to detect delay effects in price autocorrelations quantified in terms of a multiscaling Hurst exponent (i.e., autocorrelations are dependent on the time scale). Results based on spot price data for the period 1986-2009 indicate important deviations from efficiency associated with lagged autocorrelations, so imposing the random walk for crude oil prices has pronounced costs for forecasting. Evidence in favor of price reversion to a continuously evolving mean underscores the importance of adequately incorporating delay effects and multiscaling behavior in the modeling of crude oil price dynamics. (author)
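
    The lagged, multiscaling DFA used in the paper is more elaborate, but its backbone is ordinary detrended fluctuation analysis; the sketch below estimates a single Hurst exponent from synthetic uncorrelated returns as the slope of log F(s) versus log s.

        # Hedged sketch: plain DFA-1 and a Hurst exponent estimate on synthetic returns.
        import numpy as np

        def dfa(x, scales):
            y = np.cumsum(x - np.mean(x))             # integrated profile
            F = []
            for s in scales:
                n_seg = len(y) // s
                rms = []
                for i in range(n_seg):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
                    rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
                F.append(np.mean(rms))
            return np.array(F)

        rng = np.random.default_rng(10)
        returns = rng.normal(size=4096)               # uncorrelated returns -> H near 0.5
        scales = np.unique(np.logspace(np.log10(16), np.log10(512), 12).astype(int))
        F = dfa(returns, scales)
        hurst = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(f"estimated Hurst exponent: {hurst:.2f} (about 0.5 for an efficient random walk)")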

  2. Crude oil market efficiency and modeling. Insights from the multiscaling autocorrelation pattern

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez-Ramirez, Jose [Departamento de Ingenieria de Procesos e Hidraulica, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico); Departamento de Economia, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico); Alvarez, Jesus [Departamento de Ingenieria de Procesos e Hidraulica, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico); Solis, Ricardo [Departamento de Economia, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico)

    2010-09-15

    Empirical research on market inefficiencies focuses on the detection of autocorrelations in price time series. In the case of crude oil markets, statistical support is claimed for weak efficiency over a wide range of time-scales. However, the results are still controversial since theoretical arguments point to deviations from efficiency as prices tend to revert towards an equilibrium path. This paper studies the efficiency of crude oil markets by using lagged detrended fluctuation analysis (DFA) to detect delay effects in price autocorrelations quantified in terms of a multiscaling Hurst exponent (i.e., autocorrelations are dependent on the time scale). Results based on spot price data for the period 1986-2009 indicate important deviations from efficiency associated with lagged autocorrelations, so imposing the random walk for crude oil prices has pronounced costs for forecasting. Evidence in favor of price reversion to a continuously evolving mean underscores the importance of adequately incorporating delay effects and multiscaling behavior in the modeling of crude oil price dynamics. (author)

  3. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    Science.gov (United States)

    If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
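
    The generalized (GTC) decomposition of autocorrelated and white error components is not attempted here; as a hedged sketch of the underlying principle, the snippet applies classical triple collocation, which recovers each product's error variance from the pairwise covariances of three collocated products with mutually uncorrelated errors, on simulated soil moisture series.

        # Hedged sketch: classical triple collocation error-variance estimates (simulated data).
        import numpy as np

        rng = np.random.default_rng(11)
        n = 5000
        truth = rng.normal(0.25, 0.05, n)                       # "true" soil moisture
        x = truth + rng.normal(0, 0.02, n)                      # e.g. satellite product
        y = truth + rng.normal(0, 0.03, n)                      # e.g. model product
        z = truth + rng.normal(0, 0.04, n)                      # e.g. in situ product

        c = np.cov(np.vstack([x, y, z]))
        err_var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        err_var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        err_var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        print("estimated error std devs:",
              np.sqrt([err_var_x, err_var_y, err_var_z]).round(3), "(true: 0.02 0.03 0.04)")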

  4. A model-free approach to eliminate autocorrelation when testing for process capability

    DEFF Research Database (Denmark)

    Vanmann, Kerstin; Kulahci, Murat

    2008-01-01

    There is an increasing use of on-line data acquisition systems in industry. This usually leads to autocorrelated data and implies that the assumption of independent observations has to be re-examined. Most decision procedures for capability analysis assume independent data. In this article we pre...

  5. Operator theory of angular momentum and orientational auto-correlation functions

    International Nuclear Information System (INIS)

    Evans, M.W.

    1982-01-01

    The rigorous relation between the orientational auto-correlation function and the angular momentum auto-correlation function is described in two cases of interest: first, when a description of the complete zero-THz spectrum is required from the Mori continued-fraction expansion of the angular momentum auto-correlation function, and second, when rotation/translation effects are important. The Mori-Evans theory of 1976, relying on the simple Shimizu relation, is found to be essentially unaffected by the higher-order corrections recently worked out by Ford and co-workers in the Markov limit. The mutual interaction of rotation and translation is important in determining the details of both the orientational and the angular momentum auto-correlation functions (a.c.f.s) in the presence of sample anisotropy or a symmetry-breaking field. In this case it is essential to regard the angular momentum a.c.f. as non-Markovian, and methods are developed to relate it to the orientational a.c.f. in the presence of rotation/translation coupling. (author)

  6. A simulation-based approach to capturing auto-correlated demand parameter uncertainty in inventory management

    NARCIS (Netherlands)

    Akçay, A.E.; Biller, B.; Tayur, S.

    2012-01-01

    We consider a repeated newsvendor setting where the parameters of the demand distribution are unknown, and we study the problem of setting inventory targets using only a limited amount of historical demand data. We assume that the demand process is autocorrelated and represented by an

  7. A spatio-temporal autocorrelation change detection approach using hyper-temporal satellite data

    CSIR Research Space (South Africa)

    Kleynhans, W

    2013-07-01

    Full Text Available IEEE International Geoscience and Remote Sensing Symposium, Melbourne, Australia, 21-26 July 2013. A spatio-temporal autocorrelation change detection approach using hyper-temporal satellite data. W. Kleynhans, B.P. Salmon, K.J. Wessels...

  8. Power properties of invariant tests for spatial autocorrelation in linear regression

    NARCIS (Netherlands)

    Martellosio, F.

    2006-01-01

    Many popular tests for residual spatial autocorrelation in the context of the linear regression model belong to the class of invariant tests. This paper derives a number of exact properties of the power function of such tests. In particular, we extend the work of Krämer (2005, Journal of Statistical

  9. Velocity-Autocorrelation Function in Liquids, Deduced from Neutron Incoherent Scattering Results

    DEFF Research Database (Denmark)

    Carneiro, Kim

    1976-01-01

    The Fourier transform p(ω) of the velocity-autocorrelation function is derived from neutron incoherent scattering results, obtained from the two liquids Ar and H2. The quality and significance of the results are discussed with special emphasis on the long-time t-3/2 tail, found in computer simula...

  10. Simultaneous maximization of spatial and temporal autocorrelation in spatio-temporal data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2002-01-01

    This is done by solving the generalized eigenproblem represented by a Rayleigh coefficient formed from the dispersion of the data and the dispersion of the difference between the data and the data spatially shifted. Hence, the new variates are obtained from the conjugate eigenvectors, and the autocorrelations obtained are ordered, i.e., high...
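
    A small sketch of the computation described above, under an assumption of spatial stationarity: the dispersion matrix of the data and the dispersion matrix of its spatially shifted difference define a generalized eigenproblem, and the eigenvectors give variates ordered by autocorrelation. The synthetic image and the one-pixel shift are illustrative choices.

        # Hedged sketch: maximum autocorrelation factors via a generalized eigenproblem.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(12)
        rows, cols, bands = 100, 100, 4
        # Smooth spatial signal shared across bands, plus band-wise white noise
        base = np.cumsum(np.cumsum(rng.normal(size=(rows, cols)), axis=0), axis=1)
        img = np.stack([base * (b + 1) + 20 * rng.normal(size=(rows, cols))
                        for b in range(bands)], axis=-1)

        X = img.reshape(-1, bands)
        X = X - X.mean(axis=0)
        # Difference between the image and its one-pixel horizontal shift
        D = (img[:, 1:, :] - img[:, :-1, :]).reshape(-1, bands)

        sigma = np.cov(X, rowvar=False)           # dispersion of the data
        sigma_d = np.cov(D, rowvar=False)         # dispersion of the shifted difference
        vals, vecs = eigh(sigma_d, sigma)         # ascending eigenvalues; first = most autocorrelated
        maf = X @ vecs                            # MAF variates, ordered by eigenvalue
        autocorr = 1 - 0.5 * vals                 # autocorrelation of each variate (stationarity assumed)
        print("autocorrelations of MAF variates:", autocorr.round(3))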

  11. Auto-correlation analysis of wave heights in the Bay of Bengal

    Indian Academy of Sciences (India)

    Time series observations of significant wave heights in the Bay of Bengal were subjected to auto-correlation analysis to determine the temporal variability scale. The analysis indicates an exponential fall of auto-correlation in the first few hours with a decorrelation time scale of about six hours. A similar figure was found earlier ...

  12. Environmental risk factors for autism: an evidence-based review of systematic reviews and meta-analyses.

    Science.gov (United States)

    Modabbernia, Amirhossein; Velthorst, Eva; Reichenberg, Abraham

    2017-01-01

    According to recent evidence, up to 40-50% of variance in autism spectrum disorder (ASD) liability might be determined by environmental factors. In the present paper, we conducted a review of systematic reviews and meta-analyses of environmental risk factors for ASD. We assessed each review for quality of evidence and provided a brief overview of putative mechanisms of environmental risk factors for ASD. Current evidence suggests that several environmental factors including vaccination, maternal smoking, thimerosal exposure, and most likely assisted reproductive technologies are unrelated to risk of ASD. On the contrary, advanced parental age is associated with higher risk of ASD. Birth complications that are associated with trauma or ischemia and hypoxia have also shown strong links to ASD, whereas other pregnancy-related factors such as maternal obesity, maternal diabetes, and caesarian section have shown a less strong (but significant) association with risk of ASD. The reviews on nutritional elements have been inconclusive about the detrimental effects of deficiency in folic acid and omega 3, but vitamin D seems to be deficient in patients with ASD. The studies on toxic elements have been largely limited by their design, but there is enough evidence for the association between some heavy metals (most important inorganic mercury and lead) and ASD that warrants further investigation. Mechanisms of the association between environmental factors and ASD are debated but might include non-causative association (including confounding), gene-related effect, oxidative stress, inflammation, hypoxia/ischemia, endocrine disruption, neurotransmitter alterations, and interference with signaling pathways. Compared to genetic studies of ASD, studies of environmental risk factors are in their infancy and have significant methodological limitations. Future studies of ASD risk factors would benefit from a developmental psychopathology approach, prospective design, precise exposure

  13. A clinical observational study analysing the factors associated with hyperventilation during actual cardiopulmonary resuscitation in the emergency department.

    Science.gov (United States)

    Park, Sang O; Shin, Dong Hyuk; Baek, Kwang Je; Hong, Dae Young; Kim, Eun Jung; Kim, Sang Chul; Lee, Kyeong Ryong

    2013-03-01

    This is the first study to identify the factors associated with hyperventilation during actual cardiopulmonary resuscitation (CPR) in the emergency department (ED). All CPR events in the ED were recorded by video from April 2011 to December 2011. The following variables were analysed by review of the recorded CPR data: ventilation rate (VR) during each minute and its associated factors, including provider factors (experience, advanced cardiovascular life support (ACLS) certification), clinical factors (auscultation to confirm successful intubation, suctioning, and comments by the team leader) and time factors (time or day of CPR). Fifty-five adult CPR cases comprising a total of 673 one-minute sectors were analysed. Higher rates of hyperventilation (VR>10/min) were delivered by inexperienced (53.3% versus 14.2%) or uncertified ACLS providers (52.2% versus 10.8%), during night-time (61.0% versus 34.5%) or weekend CPR (53.1% versus 35.6%), and when auscultation to confirm successful intubation was performed (93.5% versus 52.8%) than otherwise (all p<0.0001). However, neither experienced (25.3% versus 29.7%; p=0.448) nor certified ACLS providers (20.6% versus 31.3%; p<0.0001) delivered a high rate of proper ventilation (VR 8-10/min). Comments by the team leader were most strongly associated with proper ventilation (odds ratio 7.035, 95% confidence interval 4.512-10.967). Hyperventilation during CPR was associated with inexperienced or uncertified ACLS providers, auscultation to confirm intubation, and night-time or weekend CPR. To deliver proper ventilation, comments by the team leader should be given regardless of the providers' level of expertise. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Incorporating spatial autocorrelation into species distribution models alters forecasts of climate-mediated range shifts.

    Science.gov (United States)

    Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A

    2014-08-01

    Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA; autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched that of the actual current distribution of the mangrove communities, under the climate change scenarios environment-only models forecast substantially greater range shifts than models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change, compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms, and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts as inaccurate

  15. Temporal and spatial distribution characteristics in the natural plague foci of Chinese Mongolian gerbils based on spatial autocorrelation.

    Science.gov (United States)

    Du, Hai-Wen; Wang, Yong; Zhuang, Da-Fang; Jiang, Xiao-San

    2017-08-07

    The nest flea index of Meriones unguiculatus is a critical indicator for the prevention and control of plague, which can be used not only to detect the spatial and temporal distributions of Meriones unguiculatus, but also to reveal its cluster rule. This research detected the temporal and spatial distribution characteristics of the plague natural foci of Mongolian gerbils by body flea index from 2005 to 2014, in order to predict plague outbreaks. Global spatial autocorrelation was used to describe the entire spatial distribution pattern of the body flea index in the natural plague foci of typical Chinese Mongolian gerbils. Cluster and outlier analysis and hot spot analysis were also used to detect the intensity of clusters based on geographic information system methods. The quantity of M. unguiculatus nest fleas in the sentinel surveillance sites from 2005 to 2014 and host density data of the study area from 2005 to 2010 used in this study were provided by Chinese Center for Disease Control and Prevention. The epidemic focus regions of the Mongolian gerbils remain the same as the hot spot regions relating to the body flea index. High clustering areas possess a similar pattern as the distribution pattern of the body flea index indicating that the transmission risk of plague is relatively high. In terms of time series, the area of the epidemic focus gradually increased from 2005 to 2007, declined rapidly in 2008 and 2009, and then decreased slowly and began trending towards stability from 2009 to 2014. For the spatial change, the epidemic focus regions began moving northward from the southwest epidemic focus of the Mongolian gerbils from 2005 to 2007, and then moved from north to south in 2007 and 2008. The body flea index of Chinese gerbil foci reveals significant spatial and temporal aggregation characteristics through the employing of spatial autocorrelation. The diversity of temporary and spatial distribution is mainly affected by seasonal variation, the human
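
    Global spatial autocorrelation of site-level values, as used in this record, is commonly summarized by Moran's I; the sketch below computes it for simulated surveillance-site coordinates and index values with inverse-distance weights (the weighting scheme and the data are assumptions, not the study's).

        # Hedged sketch: global Moran's I with inverse-distance weights (simulated sites).
        import numpy as np

        rng = np.random.default_rng(13)
        n_sites = 60
        coords = rng.uniform(0, 100, size=(n_sites, 2))
        # Spatially structured values: a smooth gradient plus noise
        values = 0.05 * coords[:, 0] + 0.03 * coords[:, 1] + rng.normal(0, 1, n_sites)

        # Inverse-distance spatial weights with a zero diagonal
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        W = np.where(dist > 0, 1.0 / dist, 0.0)

        z = values - values.mean()
        moran_i = (n_sites / W.sum()) * (z @ W @ z) / np.sum(z ** 2)
        expected = -1.0 / (n_sites - 1)
        print(f"Moran's I = {moran_i:.3f} (expected under no autocorrelation: {expected:.3f})")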

  16. Analysing the causes of chronic cough: relation to diesel exhaust, ozone, nitrogen oxides, sulphur oxides and other environmental factors

    Directory of Open Access Journals (Sweden)

    Wagner Ulrich

    2006-05-01

    Full Text Available Abstract Air pollution remains a leading cause of many respiratory diseases, including chronic cough. Although episodes of incidental, dramatic air pollution are relatively rare, current levels of exposure to pollutants in industrialized and developing countries, such as total particles, diesel exhaust particles and common cigarette smoke, may be responsible for the development of chronic cough both in children and adults. The present study analyses the effects of common environmental factors as potential causes of chronic cough. Different PubMed-based searches were performed relating the term cough to various environmental factors. There is some evidence that chronic inhalation of diesel exhaust can lead to the development of cough. For long-term exposure to nitrogen dioxide (NO2), children were found to exhibit increased incidences of chronic cough and decreased lung function parameters. Although a number of studies did not show that outdoor pollution directly causes the development of asthma, they have demonstrated that high levels of pollutants and their interaction with sunlight produce ozone (O3) and that repeated exposure to it can lead to chronic cough. In summary, in addition to the well-known air pollutants, which also include particulate matter and sulphur dioxide, a number of other indoor and outdoor pollutants have been demonstrated to cause chronic cough; therefore, environmental factors have to be taken into account as potential initiators of both adult and pediatric chronic cough.

  17. Autocorrelation as a source of truncated Lévy flights in foreign exchange rates

    Science.gov (United States)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2003-05-01

    We suggest that the ultraslow speed of convergence associated with truncated Lévy flights (Phys. Rev. Lett. 73 (1994) 2946) may well be explained by autocorrelations in data. We show how a particular type of autocorrelation generates power laws consistent with a truncated Lévy flight. Stock exchanges have been suggested to be modeled by a truncated Lévy flight (Nature 376 (1995) 46; Physica A 297 (2001) 509; Econom. Bull. 7 (2002) 1). Here foreign exchange rate data are taken instead. Scaling power laws in the “probability of return to the origin” are shown to emerge for most currencies. A novel approach to measure how distant a process is from a Gaussian regime is presented.

  18. Effect of nonlinear crystal thickness on the parameters of the autocorrelator of femtosecond light pulses

    International Nuclear Information System (INIS)

    Masalov, Anatolii V; Chudnovsky, Aleksandr V

    2004-01-01

    It is shown that the finite thickness of the second-harmonic crystal distorts the results of measurements in nonlinear autocorrelators intended for measuring the durations and fields of femtosecond light pulses mainly due to dispersive broadening (or compression) of the pulses being measured, as well as due to the group velocity mismatch between the fundamental and sum-frequency pulses. The refractive index dispersion of the crystal, scaled by half its thickness, distorts the pulse duration to a certain extent depending on its initial chirp and thus determines the width of the energy distribution recorded in the autocorrelator. As the crystal thickness increases, the group velocity mismatch leads to a transformation of the recorded distribution from the correlation function of intensity to the squared modulus of the field correlation function. In the case of Gaussian pulses, such a transformation does not affect significantly the recorded distribution. Errors of pulse duration measurements are estimated. (nonlinear optical phenomena)

  19. The Effect of Autocorrelation on the Hotelling T-2 Control Chart

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat

    2015-01-01

    One of the basic assumptions for traditional univariate and multivariate control charts is that the data are independent in time. For the latter, in many cases, the data are serially dependent (autocorrelated) and cross-correlated because of, for example, frequent sampling and process dynamics. … cross-correlation structures for different magnitudes of shifts in the process mean is not fully explored in the literature. In this article, the performance of the Hotelling T-2 control chart for different shift sizes and various autocorrelation and cross-correlation structures is compared based on the average … and using the raw data with adjusted control limits calculated through Monte Carlo simulations; and (iii) constructing the control chart for the residuals from a multivariate time series model fitted to the raw data. To limit the complexity, we use a first-order vector autoregressive process and focus …
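    A minimal sketch of approach (iii) above, monitoring the residuals of a first-order vector autoregressive (VAR(1)) model with a Hotelling T-2 statistic, is given below. The simulated bivariate process, the F-distribution control limit, and all parameter values are illustrative assumptions; the article itself adjusts limits via Monte Carlo simulation and compares average run lengths.

```python
import numpy as np
from scipy.stats import f as f_dist
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
A = np.array([[0.6, 0.2], [0.1, 0.5]])          # autocorrelated, cross-correlated process
x = np.zeros((500, 2))
for t in range(1, 500):
    x[t] = A @ x[t - 1] + rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]])

res = VAR(x).fit(1).resid                        # residuals should be close to independent
mu = res.mean(axis=0)
S_inv = np.linalg.inv(np.cov(res, rowvar=False))
t2 = np.einsum("ij,jk,ik->i", res - mu, S_inv, res - mu)   # Hotelling T^2 per observation

n, p = res.shape
ucl = p * (n - 1) * (n + 1) / (n * (n - p)) * f_dist.ppf(0.9973, p, n - p)
print(f"points above UCL={ucl:.2f}: {(t2 > ucl).sum()} of {n}")
```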

  20. First measurements of subpicosecond electron beam structure by autocorrelation of coherent diffraction radiation

    CERN Document Server

    Lumpkin, Alex H; Rule, D W

    2001-01-01

    We report the initial measurements of subpicosecond electron beam structure using a nonintercepting technique based on the autocorrelation of coherent diffraction radiation (CDR). A far infrared (FIR) Michelson interferometer with a Golay detector was used to obtain the autocorrelation. The radiation was generated by a thermionic rf gun beam at 40 MeV as it passed through a 5-mm-tall slit/aperture in a metal screen whose surface was at 45 deg. to the beam direction. For the observed bunch lengths of about 450 fs (FWHM) with a shorter time spike on the leading edge, peak currents of about 100 A are indicated. Also a model was developed and used to calculate the CDR from the back of two metal strips separated by a 5-mm vertical gap. The demonstrated nonintercepting aspect of this method could allow on-line bunch length characterizations to be done during free-electron laser experiments.

  1. Autocorrelation and cross-correlation in time series of homicide and attempted homicide

    Science.gov (United States)

    Machado Filho, A.; da Silva, M. F.; Zebende, G. F.

    2014-04-01

    We propose in this paper to establish the relationship between homicides and attempted homicides by non-stationary time-series analysis. The analysis is carried out with Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we can identify a positive cross-correlation between homicides and attempted homicides. At the same time, viewed from the point of view of autocorrelation (DFA), the analysis can be more informative depending on the time scale. For short scales (days), we cannot identify autocorrelations; on the scale of weeks, DFA presents anti-persistent behavior; and for long time scales (n > 90 days), DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to a more accurate descriptive statistics of crime.
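    The sketch below is a bare-bones detrended fluctuation analysis (DFA), the autocorrelation tool referred to above: the scaling exponent is below 0.5 for anti-persistent series, about 0.5 for uncorrelated series, and above 0.5 for persistent series. The white-noise input and the choice of scales are assumptions for illustration only.

```python
import numpy as np

def dfa(x, scales):
    """Return the DFA fluctuation function F(s) for the given window sizes."""
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segs]                      # linear detrending per window
        flucts.append(np.mean(rms))
    return np.asarray(flucts)

rng = np.random.default_rng(2)
series = rng.normal(size=4096)                       # white noise -> exponent ~ 0.5
scales = np.unique(np.logspace(np.log10(8), np.log10(512), 12).astype(int))
F = dfa(series, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA exponent ~ {alpha:.2f}")
```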

  2. OFDM Signal Detector Based on Cyclic Autocorrelation Function and its Properties

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2011-12-01

    Full Text Available This paper is devoted to research on the general and particular properties of an OFDM signal detector based on the cyclic autocorrelation function. The cyclic autocorrelation function is estimated using the DFT. The parameters of the testing signal have been chosen according to the 802.11g WLAN standard. Some properties are described analytically; all events are examined via computer simulations. It is shown that the detector is able to detect an OFDM signal in the case of multipath propagation, inexact frequency synchronization and without time synchronization. The sensitivity of the detector can be decreased in the above cases. An important condition for the proper value of the detector sampling interval was derived. Three types of channels were studied and compared. A detection threshold of SNR = -9 dB was found for the signal under consideration and for two-way propagation.
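    As a hedged sketch of the detection principle (not the paper's exact estimator), the cyclic prefix of an OFDM signal makes the lag product x[n]·x*[n−Nu] periodic with the symbol length, so a DFT of that product shows a line at the cyclic frequency 1/(Nu+Ncp). The 802.11g-like parameters, the channel, and the simple detection statistic below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
Nu, Ncp, n_sym = 64, 16, 200                       # useful length, cyclic prefix, symbols

# Build a noisy baseband OFDM burst (QPSK subcarriers, cyclic prefix per symbol).
data = rng.choice([-1, 1], (n_sym, Nu)) + 1j * rng.choice([-1, 1], (n_sym, Nu))
time_sym = np.fft.ifft(data, axis=1) * np.sqrt(Nu)
tx = np.concatenate([time_sym[:, -Ncp:], time_sym], axis=1).ravel()
rx = tx + 0.7 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))

lag_prod = rx[Nu:] * np.conj(rx[:-Nu])             # autocorrelation at lag Nu
L = lag_prod.size // (Nu + Ncp) * (Nu + Ncp)       # keep a whole number of OFDM symbols
caf = np.abs(np.fft.fft(lag_prod[:L])) / L         # estimated cyclic autocorrelation
k_cyc = L // (Nu + Ncp)                            # bin of cyclic frequency 1/(Nu+Ncp)
print(f"cyclic line vs. noise floor: {caf[k_cyc] / np.median(caf):.1f}x")
```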

  3. Phylogeny, Functional Annotation, and Protein Interaction Network Analyses of the Xenopus tropicalis Basic Helix-Loop-Helix Transcription Factors

    Directory of Open Access Journals (Sweden)

    Wuyi Liu

    2013-01-01

    Full Text Available The previous survey identified 70 basic helix-loop-helix (bHLH) proteins, but it proved to be incomplete, and the functional information and regulatory networks of frog bHLH transcription factors were not fully known. Therefore, we conducted an updated genome-wide survey in the Xenopus tropicalis genome project databases and identified 105 bHLH sequences. Among the retrieved 105 sequences, phylogenetic analyses revealed that 103 bHLH proteins belonged to 43 families or subfamilies with 46, 26, 11, 3, 15, and 4 members in the corresponding supergroups. Next, gene ontology (GO) enrichment analyses showed 65 significant GO annotations of biological processes and molecular functions, and KEGG pathways were counted by frequency. To explore the functional pathways, regulatory gene networks, and/or related gene groups coding for Xenopus tropicalis bHLH proteins, the identified bHLH genes were put into the KOBAS and STRING databases to obtain the signaling information of pathways and protein interaction networks according to available public databases and known protein interactions. From the genome annotation and pathway analysis using KOBAS, we identified 16 pathways in the Xenopus tropicalis genome. From the STRING interaction analysis, 68 hub proteins were identified, and many hub proteins created a tight network or a functional module within the protein families.

  4. Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Obuchi, T; Saito, T [Iwate University, Iwate (Japan). Faculty of Engineering

    1996-05-01

    An examination was made of the applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface-wave phase velocity in microtremors for the estimation of underground structure. In this examination, microtremor data recorded in Morioka City, Iwate Prefecture, were used. In the SAC method, a spatial autocorrelation function with frequency as a variable is determined from microtremor data observed by circular arrays. Then, a Bessel function is fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, to determine the phase velocity. The results of the AR model application in this study were compared with those of the conventional BPF and FFT methods. It was found that the phase velocities obtained by the BPF and FFT methods were more scattered than those obtained by the AR model. The scatter in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.
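    The core SAC-method step summarized above can be sketched as a Bessel-function fit: at each frequency f, the azimuthally averaged autocorrelation coefficient ρ(f, r) is modelled as J0(2πfr/c) and c is adjusted to match the observations. The sensor spacings, frequency, and synthetic coefficients below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

r = np.array([30.0, 60.0, 100.0])        # sensor separations of the circular array [m]
f = 5.0                                  # frequency of interest [Hz]
c_true = 400.0                           # "unknown" phase velocity [m/s]

rng = np.random.default_rng(4)
rho_obs = j0(2 * np.pi * f * r / c_true) + rng.normal(0, 0.02, r.size)  # synthetic SAC coefficients

model = lambda r, c: j0(2 * np.pi * f * r / c)
(c_hat,), _ = curve_fit(model, r, rho_obs, p0=[300.0])
print(f"estimated phase velocity at {f} Hz ~ {c_hat:.0f} m/s")
```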

  5. Spatial autocorrelation in farmland grasshopper assemblages (Orthoptera: Acrididae) in western France.

    Science.gov (United States)

    Badenhausser, I; Gouat, M; Goarant, A; Cornulier, T; Bretagnolle, V

    2012-10-01

    Agricultural intensification in western Europe has caused a dramatic loss of grassland surfaces in farmlands, which has resulted in strong declines in grassland invertebrates, leading to cascade effects at higher trophic levels among consumers of invertebrates. Grasshoppers are important components of grassland invertebrate assemblages in European agricultural ecosystems, particularly as prey for bird species. Understanding how grasshopper populations are distributed in fragmented landscapes with low grassland availability is critical for both studies in biodiversity conservation and insect management. We assessed the range and strength of spatial autocorrelation for two grasshopper taxa (Gomphocerinae subfamily and Calliptamus italicus L.) across an intensive farmland in western France. Data from surveys carried out over 8 yr in 1,715 grassland fields were analyzed using geostatistics. Weak spatial patterns were observed at small spatial scales, suggesting important local effects of management practices on grasshopper densities. Spatial autocorrelation patterns for both grasshopper taxa were only detected at intermediate scales. For Gomphocerinae, the range of spatial autocorrelation varied from 802 to 2,613 m according to the year, depending both on grasshopper density and on grassland surfaces in the study site, whereas spatial patterns for the Italian locust were more variable and not related to grasshopper density or grassland surfaces. Spatial patterns in the distribution of Gomphocerinae supported our hypothesis that habitat availability was a major driver of grasshopper distribution in the landscape, and suggested it was related to density-dependent processes such as dispersal.

  6. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control charts have been developed for interrelated processes, including Shewhart, Cumulative Sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher stated that such a chart is not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process as derived from Montgomery and Patel. Its performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
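    The toy example below illustrates the problem the paper addresses: an EWMA chart with the usual independent-data limits applied to AR(1) data produces far too many false alarms, which is why modified limits (and Markov-chain ARL evaluation) are needed. All parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
phi, n, lam = 0.7, 500, 0.2                       # AR(1) coefficient, length, EWMA weight

x = np.zeros(n)
for t in range(1, n):                             # in-control but autocorrelated data
    x[t] = phi * x[t - 1] + rng.normal()

z = np.zeros(n)
for t in range(1, n):
    z[t] = lam * x[t] + (1 - lam) * z[t - 1]      # EWMA recursion

sigma = x.std(ddof=1)
limit = 3 * sigma * np.sqrt(lam / (2 - lam))      # asymptotic limit valid for i.i.d. data
print(f"false alarms with naive limits: {(np.abs(z) > limit).sum()} of {n} points")
```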

  7. Speeding cis-trans regulation discovery by phylogenomic analyses coupled with screenings of an arrayed library of Arabidopsis transcription factors.

    Directory of Open Access Journals (Sweden)

    Gabriel Castrillo

    Full Text Available Transcriptional regulation is an important mechanism underlying gene expression and has played a crucial role in evolution. The number, position and interactions between cis-elements and transcription factors (TFs) determine the expression pattern of a gene. To identify functionally relevant cis-elements in gene promoters, a phylogenetic shadowing approach with a lipase gene (LIP1) was used. As a proof of concept, in silico analyses of several Brassicaceae LIP1 promoters identified a highly conserved sequence (LIP1 element) that is sufficient to drive strong expression of a reporter gene in planta. A collection of ca. 1,200 Arabidopsis thaliana TF open reading frames (ORFs) was arrayed in a 96-well format (RR library) and a convenient mating-based yeast one-hybrid (Y1H) screening procedure was established. We constructed an episomal plasmid (pTUY1H) to clone the LIP1 element and used it as bait for Y1H screenings. A novel interaction with an HD-ZIP (AtML1) TF was identified and abolished by a 2 bp mutation in the LIP1 element. A role of this interaction in transcriptional regulation was confirmed in planta. In addition, we validated our strategy by reproducing the previously reported interaction between a MYB-CC (PHR1) TF, a central regulator of phosphate starvation responses, with a conserved promoter fragment (IPS1 element) containing its cognate binding sequence. Finally, we established that the LIP1 and IPS1 elements were differentially bound by HD-ZIP and MYB-CC family members in agreement with their genetic redundancy in planta. In conclusion, combining in silico analyses of orthologous gene promoters with Y1H screening of the RR library represents a powerful approach to decipher cis- and trans-regulatory codes.

  8. Adaptive non-collinear autocorrelation of few-cycle pulses with an angular tunable bi-mirror

    Energy Technology Data Exchange (ETDEWEB)

    Treffer, A., E-mail: treffer@mbi-berlin.de; Bock, M.; König, S.; Grunwald, R. [Max Born Institute for Nonlinear Optics and Short-Pulse Spectroscopy, Max Born Strasse 2A, D-12489 Berlin (Germany); Brunne, J.; Wallrabe, U. [Laboratory for Microactuators, Department of Microsystems Engineering, IMTEK, University of Freiburg, Georges-Koehler-Allee 102, Freiburg 79110 (Germany)

    2016-02-01

    Adaptive autocorrelation with an angular tunable micro-electro-mechanical system is reported. A piezo-actuated Fresnel bi-mirror structure was applied to measure the second-order autocorrelation of near-infrared few-cycle laser pulses in a non-collinear setup at tunable superposition angles. By enabling measurements with variable scaling and minimizing the influence of distortions through adaptive self-reconstruction, the approach extends the capability of autocorrelators. Flexible scaling and robustness against localized amplitude obscurations are demonstrated. The adaptive reconstruction of temporal frequency information by Fourier analysis of the autocorrelation data is shown. Experimental results and numerical simulations of the beam propagation and interference are compared for variable angles.

  9. Multiscale Thermohydrologic Model Analyses of Heterogeneity and Thermal-Loading Factors for the Proposed Repository at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L.G.; Buscheck, T.A.; Gansemer, J.; Sun, Y.; Lee, K.

    2002-01-01

    The MultiScale ThermoHydrologic Model (MSTHM) predicts thermohydrologic (TH) conditions in emplacement drifts and the adjoining host rock throughout the proposed nuclear-waste repository at Yucca Mountain. The MSTHM is a computationally efficient approach that accounts for TH processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and for heat flow at the multi-kilometer scale at Yucca Mountain. The modeling effort presented here is an early investigation of the repository and is simulated at a lower temperature mode and with a different panel loading than the repository currently being considered for license application. We present these recent lower temperature mode MSTHM simulations that address the influence of repository-scale thermal-conductivity heterogeneity and the influence of preclosure operational factors affecting thermal-loading conditions. We can now accommodate a complex repository layout with emplacement drifts lying in non-parallel planes using a superposition process that combines results from multiple mountain-scale submodels. This development, along with other improvements to the MSTHM, enables more rigorous analyses of preclosure operational factors. These improvements include the ability to (1) predict TH conditions on a drift-by-drift basis, (2) represent sequential emplacement of waste packages along the drifts, and (3) incorporate distance- and time-dependent heat-removal efficiency associated with drift ventilation. Alternative approaches to addressing repository-scale thermal-conductivity heterogeneity are investigated. We find that only one of the four MSTHM submodel types needs to incorporate thermal-conductivity heterogeneity. For a particular repository design, we find that the most influential parameters are (1) percolation-flux distribution, (2) thermal-conductivity heterogeneity within the host-rock units, (3) the sequencing of waste-package emplacement, and (4) the

  10. Evaluation of bentonite alteration due to interactions with iron. Sensitivity analyses to identify the important factors for the bentonite alteration

    International Nuclear Information System (INIS)

    Sasamoto, Hiroshi; Wilson, James; Sato, Tsutomu

    2013-01-01

    Performance assessment of geological disposal systems for high-level radioactive waste requires a consideration of long-term systems behaviour. It is possible that the alteration of swelling clay present in bentonite buffers might have an impact on buffer functions. In the present study, iron (as a candidate overpack material)-bentonite (I-B) interactions were evaluated as the main buffer alteration scenario. Existing knowledge on alteration of bentonite during I-B interactions was first reviewed, then the evaluation methodology was developed considering modeling techniques previously used overseas. A conceptual model for smectite alteration during I-B interactions was produced. The following reactions and processes were selected: 1) release of Fe2+ due to overpack corrosion; 2) diffusion of Fe2+ in compacted bentonite; 3) sorption of Fe2+ on smectite edges and ion exchange in interlayers; 4) dissolution of primary phases and formation of alteration products. Sensitivity analyses were performed to identify the most important factors for the alteration of bentonite by I-B interactions. (author)

  11. Damage detection and isolation via autocorrelation: a step toward passive sensing

    Science.gov (United States)

    Chang, Y. S.; Yuan, F. G.

    2018-03-01

    Passive sensing techniques may eliminate the need to expend power on actuators and thus provide a means of developing a compact and simple structural health monitoring system. More importantly, they may provide a solution for monitoring aircraft subjected to environmental loading from air flow during operation. In this paper, a non-contact auto-correlation based technique is exploited as a feasibility study for passive sensing, to detect damage and isolate the damage location. Its theoretical basis bears some resemblance to reconstructing the Green's function from a diffusive wavefield through cross-correlation. Localized high-pressure air from an air compressor is randomly and continuously applied to one surface of the aluminum panels through an air blow gun. A laser Doppler vibrometer (LDV) was used to scan a 90 mm × 90 mm area to create a 6 × 6 2D array of signals from the opposite side of the panels. The scanned signals were auto-correlated to reconstruct a "self-impulse response" (or Green's function). Stable reconstruction of an accurate Green's function requires long sensing times. For a 609.6 mm × 609.6 mm flat aluminum panel, a sensing time of roughly at least four seconds is sufficient to establish a converged Green's function through correlation. For the integrally stiffened aluminum panel, the geometrical features of the panel expedite the formation of the diffusive wavefield and thus shorten the sensing times. The damage is simulated by gluing a magnet onto the panels. Reconstructed Green's functions (RGFs) are used for damage detection and damage isolation based on an imaging condition using the mean square deviation between the RGFs of the pristine and the damaged structure, and the results are shown in color maps. The auto-correlation based technique is shown to consistently detect the simulated damage, and to image and isolate the damage in a structure subjected to high-pressure air excitation. This technique may be transformed into …
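    A hedged sketch of the processing chain described above: each scan point's record is auto-correlated to form a reconstructed Green's function (RGF), and a damage index is the mean-square deviation between baseline and current RGFs. The synthetic random records, grid size, and perturbed cells are assumptions; real diffuse-wavefield data would replace them.

```python
import numpy as np

rng = np.random.default_rng(6)
n_points, n_samples = 36, 20_000                  # 6 x 6 scan grid, long random records

def rgf(signal, max_lag=256):
    """One-sided autocorrelation ("self-impulse response") computed via FFT."""
    s = signal - signal.mean()
    nfft = 1 << (2 * s.size - 1).bit_length()     # zero-pad to avoid circular wrap-around
    ac = np.fft.irfft(np.abs(np.fft.rfft(s, nfft)) ** 2)[:max_lag]
    return ac / ac[0]

baseline = [rgf(rng.normal(size=n_samples)) for _ in range(n_points)]
# "Damaged" state: perturb a few scan points to mimic a local change of the structure.
current = [b + (0.2 * rng.normal(size=b.size) if i in (14, 15, 20) else 0.0)
           for i, b in enumerate(baseline)]

damage_index = np.array([np.mean((c - b) ** 2)
                         for b, c in zip(baseline, current)]).reshape(6, 6)
print("most suspect grid cell:", np.unravel_index(damage_index.argmax(), damage_index.shape))
```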

  12. Auto-correlation of velocity-fluctuations and frequency-dependent diffusion constant for hot electrons

    International Nuclear Information System (INIS)

    Roy, M.D.; Nag, B.R.

    1981-01-01

    A method has been developed for determining the auto-correlation functions of the fluctuations in the transverse and the parallel components of hot-carrier velocity in a semiconductor by Monte Carlo simulation. The functions for electrons in InSb are determined by this method for applied electric fields of 50 V/cm, 75 V/cm, and 100 V/cm. With increasing time interval, the transverse auto-correlation function falls nearly exponentially to zero, but the parallel function falls sharply to a negative peak, then rises to positive values and finally becomes zero. The interval beyond which the auto-correlation function is zero and the correlation time are also evaluated. The correlation time is found to be approximately 1.6 times the relaxation time calculated from the chord mobility. The effect of the flight sampling time on the value of the variance of the displacement is investigated in terms of the low-frequency diffusion constants determined from the variation of the correlation functions. It is found that the diffusion constants become independent of the sampling time if it is of the order of one hundred times the relaxation time. The frequency-dependent diffusion constants are calculated from the correlation functions. The transverse diffusion constant falls monotonically with frequency for all the field strengths studied. The parallel diffusion constant shows similar behaviour at the lower fields (50 V/cm and 75 V/cm) but has a peak at about 44 GHz for the field of 100 V/cm. (orig.)
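    The sketch below shows the generic recipe behind such results: estimate the velocity autocorrelation function C(t) from a simulated velocity record and take its cosine transform to obtain a frequency-dependent diffusion constant, D(ω) = ∫ C(t) cos(ωt) dt. The Ornstein-Uhlenbeck velocity process stands in for the Monte Carlo trajectories, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, tau, n = 1e-13, 1e-12, 200_000              # time step, relaxation time, samples

v = np.zeros(n)
for t in range(1, n):                           # OU process: exponentially decaying VACF
    v[t] = v[t - 1] * (1 - dt / tau) + rng.normal(0, np.sqrt(2 * dt / tau))

def vacf(v, max_lag):
    """Unbiased velocity autocorrelation estimate via FFT."""
    v0 = v - v.mean()
    nfft = 1 << (2 * v0.size - 1).bit_length()
    ac = np.fft.irfft(np.abs(np.fft.rfft(v0, nfft)) ** 2)[:max_lag]
    return ac / np.arange(v0.size, v0.size - max_lag, -1)

C = vacf(v, max_lag=200)
t_lags = np.arange(C.size) * dt
omega = 2 * np.pi * 44e9                        # evaluate D near the 44 GHz feature
D_0 = dt * np.sum(C)
D_w = dt * np.sum(C * np.cos(omega * t_lags))
print(f"D(0) ~ {D_0:.3e}, D(44 GHz) ~ {D_w:.3e} (arbitrary units)")
```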

  13. New autocorrelation technique for the IR FEL optical pulse width measurements

    Energy Technology Data Exchange (ETDEWEB)

    Amirmadhi, F.; Brau, K.A.; Becker, C. [Vanderbilt Univ., Nashville, TN (United States)] [and others]

    1995-12-31

    We have developed a new technique for the autocorrelation measurement of optical pulse width at the Vanderbilt University FEL center. This method is based on the nonlinear absorption and transmission characteristics of semiconductors such as Ge, Te and InAs, suitable for the wavelength range from 2 to over 6 microns. This approach, aside from being simple and low cost, removes the phase-matching condition that is generally required for the standard frequency-doubling technique and covers a greater wavelength range per nonlinear material. In this paper we describe the apparatus, explain the principal mechanism involved and compare data which have been acquired with both frequency doubling and two-photon absorption.

  14. A comparison of two least-squared random coefficient autoregressive models: with and without autocorrelated errors

    OpenAIRE

    Autcha Araveeporn

    2013-01-01

    This paper compares a Least-Squared Random Coefficient Autoregressive (RCA) model with a Least-Squared RCA model based on Autocorrelated Errors (RCA-AR). We looked at only the first order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the Least-Squared method was checked by applying the models to Brownian motion and Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...

  15. Autocorrelation exponent of conserved spin systems in the scaling regime following a critical quench.

    Science.gov (United States)

    Sire, Clément

    2004-09-24

    We study the autocorrelation function of a conserved spin system following a quench at the critical temperature. Defining the correlation length L(t) ∼ t^(1/z), we find that for times t' and t satisfying L(t') ≪ L(t) …; in the L(t)/L(t') → ∞ limit, we show that λ'_c = d + 2 and φ = z/2. We give a heuristic argument suggesting that this result is, in fact, valid for any dimension d and spin vector dimension n. We present numerical simulations for the conserved Ising model in d = 1 and d = 2, which are fully consistent with the present theory.

  16. Microfluidic volumetric flow determination using optical coherence tomography speckle: An autocorrelation approach

    Energy Technology Data Exchange (ETDEWEB)

    De Pretto, Lucas R., E-mail: lucas.de.pretto@usp.br; Nogueira, Gesse E. C.; Freitas, Anderson Z. [Instituto de Pesquisas Energéticas e Nucleares, IPEN–CNEN/SP, Avenida Lineu Prestes, 2242, 05508-000 São Paulo (Brazil)

    2016-04-28

    Functional modalities of Optical Coherence Tomography (OCT) based on speckle analysis are emerging in the literature. We propose a simple approach to the autocorrelation of OCT signal to enable volumetric flow rate differentiation, based on decorrelation time. Our results show that this technique could distinguish flows separated by 3 μl/min, limited by the acquisition speed of the system. We further perform a B-scan of gradient flow inside a microchannel, enabling the visualization of the drag effect on the walls.

  17. Influence of the nuclear autocorrelation function on the positron production in heavy-ion collisions

    International Nuclear Information System (INIS)

    Tomoda, T.; Weidenmueller, H.A.

    1983-01-01

    The influence of a nuclear reaction on atomic positron production in heavy-ion collisions is investigated. Using statistical concepts, we describe the nuclear S matrix for a heavy-ion induced reaction as a statistically fluctuating function of energy. The positron production rate is then dependent on the autocorrelation function of this S matrix, and on the ratio of the ''direct'' versus the ''fluctuating'' part of the nuclear cross section. Numerical calculations show that in this way, current experimental results on positron production in heavy-ion collisions can be reproduced in a semiquantitative fashion

  18. Transforming the autocorrelation function of a time series to detect land cover change

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available International Geoscience and Remote Sensing Symposium (IEEE IGARSS), 10-15 July 2016, Beijing. Transforming the autocorrelation function of a time series to detect land cover change. Salmon, B.P., Kleynhans, W., Olivier, J.C. and Schwegmann, C.P. ABSTRACT: Regional…

  19. Sign reversals of the output autocorrelation function for the stochastic Bernoulli-Verhulst equation

    Energy Technology Data Exchange (ETDEWEB)

    Lumi, N., E-mail: Neeme.Lumi@tlu.ee; Mankin, R., E-mail: Romi.Mankin@tlu.ee [Institute of Mathematics and Natural Sciences, Tallinn University, 29 Narva Road, 10120 Tallinn (Estonia)

    2015-10-28

    We consider a stochastic Bernoulli-Verhulst equation as a model for population growth processes. The effect of fluctuating environment on the carrying capacity of a population is modeled as colored dichotomous noise. Relying on the composite master equation an explicit expression for the stationary autocorrelation function (ACF) of population sizes is found. On the basis of this expression a nonmonotonic decay of the ACF by increasing lag-time is shown. Moreover, in a certain regime of the noise parameters the ACF demonstrates anticorrelation as well as related sign reversals at some values of the lag-time. The conditions for the appearance of this highly unexpected effect are also discussed.

  20. Simulating land-use changes by incorporating spatial autocorrelation and self-organization in CLUE-S modeling: a case study in Zengcheng District, Guangzhou, China

    Science.gov (United States)

    Mei, Zhixiong; Wu, Hao; Li, Shiyun

    2018-06-01

    The Conversion of Land Use and its Effects at Small regional extent (CLUE-S), which is a widely used model for land-use simulation, utilizes logistic regression to estimate the relationships between land use and its drivers, and thus, predict land-use change probabilities. However, logistic regression disregards possible spatial autocorrelation and self-organization in land-use data. Autologistic regression can depict spatial autocorrelation but cannot address self-organization, while logistic regression by considering only self-organization (NE-logistic regression) fails to capture spatial autocorrelation. Therefore, this study developed a regression (NE-autologistic regression) method, which incorporated both spatial autocorrelation and self-organization, to improve CLUE-S. The Zengcheng District of Guangzhou, China was selected as the study area. The land-use data of 2001, 2005, and 2009, as well as 10 typical driving factors, were used to validate the proposed regression method and the improved CLUE-S model. Then, three future land-use scenarios in 2020: the natural growth scenario, ecological protection scenario, and economic development scenario, were simulated using the improved model. Validation results showed that NE-autologistic regression performed better than logistic regression, autologistic regression, and NE-logistic regression in predicting land-use change probabilities. The spatial allocation accuracy and kappa values of NE-autologistic-CLUE-S were higher than those of logistic-CLUE-S, autologistic-CLUE-S, and NE-logistic-CLUE-S for the simulations of two periods, 2001-2009 and 2005-2009, which proved that the improved CLUE-S model achieved the best simulation and was thereby effective to a certain extent. The scenario simulation results indicated that under all three scenarios, traffic land and residential/industrial land would increase, whereas arable land and unused land would decrease during 2009-2020. Apparent differences also existed in the

  1. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    Science.gov (United States)

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.

  2. Genome-Wide Classification and Evolutionary and Expression Analyses of Citrus MYB Transcription Factor Families in Sweet Orange

    Science.gov (United States)

    Hou, Xiao-Jin; Li, Si-Bei; Liu, Sheng-Rui; Hu, Chun-Gen; Zhang, Jin-Zhi

    2014-01-01

    MYB family genes are widely distributed in plants and comprise one of the largest families of transcription factors involved in various developmental processes and defense responses of plants. To date, few MYB genes and little expression profiling have been reported for citrus. Here, we describe and classify 177 members of the sweet orange MYB gene (CsMYB) family in terms of their genomic gene structures and similarity to their putative Arabidopsis orthologs. According to these analyses, these CsMYBs were categorized into four groups (4R-MYB, 3R-MYB, 2R-MYB and 1R-MYB). Gene structure analysis revealed that 1R-MYB genes possess relatively more introns as compared with 2R-MYB genes. Investigation of their chromosomal localizations revealed that these CsMYBs are distributed across nine chromosomes. Sweet orange includes a relatively small number of MYB genes compared with the 198 members in Arabidopsis, presumably due to a paralog reduction related to repetitive sequence insertion into promoter and non-coding transcribed regions of the genes. Comparative studies of CsMYBs and Arabidopsis showed that CsMYBs had fewer gene duplication events. Expression analysis revealed that the MYB gene family has a wide expression profile in sweet orange development and plays important roles in development and stress responses. In addition, 337 new putative microsatellites with flanking sequences sufficient for primer design were also identified from the 177 CsMYBs. These results provide a useful reference for the selection of candidate MYB genes for cloning and further functional analysis for citrus. PMID:25375352

  3. A method of noise reduction in heterodyne interferometric vibration metrology by combining auto-correlation analysis and spectral filtering

    Science.gov (United States)

    Hao, Hongliang; Xiao, Wen; Chen, Zonghui; Ma, Lan; Pan, Feng

    2018-01-01

    Heterodyne interferometric vibration metrology is a useful technique for dynamic displacement and velocity measurement as it can provide a synchronous full-field output signal. With the advent of cost-effective, high-speed real-time signal processing systems and software, processing of the complex signals encountered in interferometry has become more feasible. However, due to the coherent nature of the laser sources, the sequence of heterodyne interferograms is corrupted by a mixture of coherent speckle and incoherent additive noise, which can severely degrade the accuracy of the demodulated signal and the optical display. In this paper, a new heterodyne interferometric demodulation method combining auto-correlation analysis and spectral filtering is described, leading to an expression for the dynamic displacement and velocity of the object under test that is significantly more accurate in both the amplitude and frequency of the vibrating waveform. We present a mathematical model of the signals obtained from interferograms that contain both vibration information of the measured objects and the noise. A simulation of the signal demodulation process is presented and used to investigate the noise from the system and external factors. The experimental results show excellent agreement with measurements from a commercial laser Doppler velocimeter (LDV).

  4. BetaBit: A fast generator of autocorrelated binary processes for geophysical research

    Science.gov (United States)

    Serinaldi, Francesco; Lombardo, Federico

    2017-05-01

    We introduce a fast and efficient non-iterative algorithm, called BetaBit, to simulate autocorrelated binary processes describing the occurrence of natural hazards, system failures, and other physical and geophysical phenomena characterized by persistence, temporal clustering, and low rate of occurrence. BetaBit overcomes the simulation constraints posed by the discrete nature of the marginal distributions of binary processes by using the link existing between the correlation coefficients of this process and those of the standard Gaussian processes. The performance of BetaBit is tested on binary signals with power-law and exponentially decaying autocorrelation functions (ACFs) corresponding to Hurst-Kolmogorov and Markov processes, respectively. An application to real-world sequences describing rainfall intermittency and the occurrence of strong positive phases of the North Atlantic Oscillation (NAO) index shows that BetaBit can also simulate surrogate data preserving the empirical ACF as well as signals with autoregressive moving average (ARMA) dependence structures. Extensions to cyclo-stationary processes accounting for seasonal fluctuations are also discussed.
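    A minimal sketch of the thresholded-Gaussian idea that BetaBit builds on: an autocorrelated Gaussian driver is clipped at the quantile corresponding to the target occurrence probability, giving a persistent binary signal. BetaBit itself maps the target binary ACF onto the Gaussian ACF exactly; here an AR(1) driver and all parameter values are simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
p, rho, n = 0.1, 0.9, 100_000            # occurrence probability, Gaussian lag-1 correlation

g = np.zeros(n)
for t in range(1, n):                    # AR(1) Gaussian driver
    g[t] = rho * g[t - 1] + rng.normal(0, np.sqrt(1 - rho ** 2))

binary = (g > norm.ppf(1 - p)).astype(int)        # exceedances form the binary process

lag1 = np.corrcoef(binary[:-1], binary[1:])[0, 1]
print(f"occurrence rate ~ {binary.mean():.3f}, lag-1 autocorrelation ~ {lag1:.2f}")
```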

  5. Broadband short pulse measurement by autocorrelation with a sum-frequency generation set-up

    International Nuclear Information System (INIS)

    Glotin, F.; Jaroszynski, D.; Marcouille, O.

    1995-01-01

    Previous spectral and laser pulse length measurements carried out on the CLIO FEL at a wavelength of λ = 8.5 μm suggested that very short light pulses could be generated, about 500 fs wide (FWHM). For these measurements a Michelson interferometer with a Te crystal as a non-linear detector was used as a second-order autocorrelation device. More recent measurements under similar conditions have confirmed that the laser pulses observed are indeed single: they are not followed by other pulses separated by the slippage length Nλ. As the single micropulse length is likely to depend on the slippage, more measurements at different wavelengths would be useful. This is not directly possible with our current interferometer set-up, based on a phase-matched non-linear crystal. However, we can use the broadband non-linear medium provided by one of our users' experiments: sum-frequency generation on surfaces. With such an autocorrelation set-up, interference fringes are no longer visible, but this is largely compensated by the frequency range provided. First tests at 8 μm have already been performed to validate the technique, leading to results similar to those obtained with our previous Michelson set-up.

  6. A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation

    Science.gov (United States)

    Suryowati, K.; Bekti, R. D.; Faradila, A.

    2018-04-01

    Spatial autocorrelation is a spatial analysis method used to identify patterns of relationship or correlation between locations. This method is very important for obtaining information on the dispersal patterns characteristic of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 subdistricts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects the spatial influence. Depending on the spatial data, weight matrices can be divided into two types: point-based (distance) and area-based (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses first-order queen contiguity weights, second-order queen contiguity weights, and inverse distance weights. First-order queen contiguity and inverse distance weights show significant spatial autocorrelation in DHF, whereas second-order queen contiguity does not. The first- and second-order queen contiguity matrices produce neighbour lists of 68 and 86 entries, respectively.
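    To make the comparison of weight matrices concrete, the sketch below computes global Moran's I for a small synthetic set of areal units using a first-order queen-style adjacency matrix and an inverse-distance matrix. The grid, incidence values, and both matrices are illustrative assumptions, not the Sleman data.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and spatial weight matrix W."""
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

rng = np.random.default_rng(9)
coords = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)  # 16 units
incidence = coords.sum(axis=1) + rng.normal(0, 0.5, len(coords))              # spatial trend + noise

d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W_queen = ((d > 0) & (d < 1.5)).astype(float)                      # first-order queen contiguity
W_invd = np.where(d > 0, 1.0 / np.where(d == 0, np.inf, d), 0.0)   # inverse distance

print(f"Moran's I, queen contiguity : {morans_i(incidence, W_queen):.3f}")
print(f"Moran's I, inverse distance : {morans_i(incidence, W_invd):.3f}")
```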

  7. Hotspot detection using image pattern recognition based on higher-order local auto-correlation

    Science.gov (United States)

    Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki

    2011-04-01

    Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stages of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without any error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot detection support method based on flexible and intelligent vision-system image pattern recognition using Higher-Order Local Autocorrelation. Our method learns the geometrical properties of the given defect-free design data as normal patterns, and automatically detects design patterns with hotspots in the test data as abnormal patterns. The Higher-Order Local Autocorrelation method can extract features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach can reduce turnaround time (TAT) dramatically even on a single CPU compared with the conventional simulation-based approach, and with distributed processing it has proven to deliver linear scalability with each additional CPU.
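    As a hedged illustration of the feature-extraction step only (the paper's classifier and full mask set are not reproduced), the sketch below computes a few low-order higher-order local autocorrelation (HLAC) features of a binary image as sums of products of a reference pixel with shifted copies; the full HLAC set uses 25 or 35 displacement masks. The random test image is an assumption, and border wrap-around is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(10)
img = (rng.random((64, 64)) > 0.7).astype(float)     # stand-in for a binary layout clip

def hlac_low_order(img):
    feats = [img.sum()]                              # 0th-order mask: single pixel
    # 1st-order masks: reference pixel times one displaced pixel (non-redundant shifts).
    for dy, dx in [(0, 1), (1, -1), (1, 0), (1, 1)]:
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        feats.append((img * shifted).sum())
    return np.array(feats)

print("HLAC features (orders 0-1):", hlac_low_order(img))
```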

  8. Sustainability Efficiency Factor: Measuring Sustainability in Advanced Energy Systems through Exergy, Exergoeconomic, Life Cycle, and Economic Analyses

    Science.gov (United States)

    Boldon, Lauren

    (NHES) reference case studies to (1) introduce sustainability metrics, such as life cycle assessment, (2) demonstrate the methods behind exergy and exergoeconomic analyses, (3) provide an economic analysis of the potential for SMR development from first-of-a-kind (FOAK) to nth-of-a-kind (NOAK), thereby illustrating possible cost reductions and deployment flexibility for SMRs over large conventional nuclear reactors, (4) assess the competitive potential for incorporation of storage and hydrogen production in NHES and in regulated and deregulated electricity markets, (5) compare an SMR-hydrogen production plant to a natural gas steam methane reforming plant using the SEF, and (6) identify and review the social considerations which would support future nuclear development domestically and abroad, such as public and political/regulatory needs and challenges. The Global Warming Potential (GWP) for the SMR (300 MWth)-wind (60 MWe)-high temperature steam electrolysis (200 tons Hydrogen per day) system was calculated as approximately 874 g CO2-equivalent as part of the life cycle assessment. This is 92.6% less than the GWP estimated for steam methane reforming production of hydrogen by Spath and Mann. The unit exergetic and exergoeconomic costs were determined for each flow within the NHES system as part of the exergy/exergoeconomic cost analyses. The unit exergetic cost is lower for components yielding more meaningful work like the one exiting the SMR with a unit exergetic cost of 1.075 MW/MW. In comparison, the flow exiting the turbine has a very high unit exergetic cost of 15.31, as most of the useful work was already removed through the turning of the generator/compressor shaft. In a similar manner, the high unit exergoeconomic cost of 12.45/MW*sec is observed for the return flow to the reactors, because there is very little exergy present. The first and second law efficiencies and the exergoeconomic factors were also determined over several cases. For the first or base SMR

  9. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked

  10. ON THE EFFECTS OF THE PRESENCE AND METHODS OF THE ELIMINATION HETEROSCEDASTICITY AND AUTOCORRELATION IN THE REGRESSION MODEL

    Directory of Open Access Journals (Sweden)

    Nina L. Timofeeva

    2014-01-01

    Full Text Available The article presents the methodological and technical bases for the creation of regression models that adequately reflect reality. The focus is on methods for removing residual autocorrelation in models. Algorithms for eliminating heteroscedasticity and autocorrelation of the regression model residuals are given: the reweighted least squares method and the Cochrane-Orcutt method. A model of "pure" regression is built, as well as a standardized form of the regression equation, in order to compare the effect on the dependent variable of the different explanatory variables when the latter are expressed in different units. A scheme of techniques for eliminating heteroskedasticity and autocorrelation, for the creation of regression models specific to the social and cultural sphere, is developed.
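    The sketch below shows one Cochrane-Orcutt iteration of the kind referred to above: estimate the AR(1) coefficient of the OLS residuals, quasi-difference the data, and re-fit. The simulated regression with AR(1) errors is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):                            # AR(1) disturbances
    u[t] = 0.8 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
X = np.column_stack([np.ones(n), x])
beta_ols = ols(X, y)
resid = y - X @ beta_ols
rho = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])   # AR(1) coefficient of residuals

y_star = y[1:] - rho * y[:-1]                    # quasi-differenced response
X_star = X[1:] - rho * X[:-1]                    # intercept column becomes (1 - rho)
beta_co = ols(X_star, y_star)                    # coefficients stay on the original scale
print(f"rho ~ {rho:.2f}, OLS: {beta_ols.round(2)}, Cochrane-Orcutt: {beta_co.round(2)}")
```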

  11. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study.

    Science.gov (United States)

    Rönnberg, Jerker; Lunner, Thomas; Ng, Elaine Hoi Ning; Lidestam, Björn; Zekveld, Adriana Agatha; Sörqvist, Patrik; Lyxell, Björn; Träff, Ulf; Yumba, Wycliffe; Classon, Elisabet; Hällgren, Mathias; Larsby, Birgitta; Signoret, Carine; Pichora-Fuller, M Kathleen; Rudner, Mary; Danielsson, Henrik; Stenfelt, Stefan

    2016-11-01

    The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. LEVEL1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only, and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION and all three contributed significantly and independently to especially the NO CONTEXT outcome scores (R(2) = 0.40). All LEVEL 2 factors are important theoretically as well as for clinical assessment.

  12. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study

    Science.gov (United States)

    Rönnberg, Jerker; Lunner, Thomas; Ng, Elaine Hoi Ning; Lidestam, Björn; Zekveld, Adriana Agatha; Sörqvist, Patrik; Lyxell, Björn; Träff, Ulf; Yumba, Wycliffe; Classon, Elisabet; Hällgren, Mathias; Larsby, Birgitta; Signoret, Carine; Pichora-Fuller, M. Kathleen; Rudner, Mary; Danielsson, Henrik; Stenfelt, Stefan

    2016-01-01

    Abstract Objective: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Study sample: Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. Design: LEVEL1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. Results: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only, and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION and all three contributed significantly and independently to especially the NO CONTEXT outcome scores (R2 = 0.40). Conclusions: All LEVEL 2 factors are important theoretically as well as for clinical assessment. PMID:27589015

  13. Human factors evaluation of remote afterloading brachytherapy. Supporting analyses of human-system interfaces, procedures and practices, training and organizational practices and policies. Volume 3

    International Nuclear Information System (INIS)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-07-01

    A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the second, third, fourth, and fifth phases of the project, which involved detailed analyses of four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training practices and policies; and organizational practices and policies, respectively. Findings based on these analyses provided factual and conceptual support for the final phase of this project, which identified factors leading to human error in RAB. The impact of those factors on RAB performance was then evaluated and prioritized in terms of safety significance, and alternative approaches for resolving safety significant problems were identified and evaluated

  14. Human factors evaluation of remote afterloading brachytherapy. Supporting analyses of human-system interfaces, procedures and practices, training and organizational practices and policies. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L. [Pacific Science & Engineering Group, San Diego, CA (United States)] [and others]

    1995-07-01

    A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the second, third, fourth, and fifth phases of the project, which involved detailed analyses of four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training practices and policies; and organizational practices and policies, respectively. Findings based on these analyses provided factual and conceptual support for the final phase of this project, which identified factors leading to human error in RAB. The impact of those factors on RAB performance was then evaluated and prioritized in terms of safety significance, and alternative approaches for resolving safety significant problems were identified and evaluated.

  15. Noise-tolerant instantaneous heart rate and R-peak detection using short-term autocorrelation for wearable healthcare systems.

    Science.gov (United States)

    Fujii, Takahide; Nakano, Masanao; Yamashita, Ken; Konishi, Toshihiro; Izumi, Shintaro; Kawaguchi, Hiroshi; Yoshimoto, Masahiko

    2013-01-01

    This paper describes a robust method of Instantaneous Heart Rate (IHR) and R-peak detection from noisy electrocardiogram (ECG) signals. Generally, the IHR is calculated from the R-wave interval. Then, the R-waves are extracted from the ECG using a threshold. However, in wearable bio-signal monitoring systems, noise increases the incidence of misdetection and false detection of R-peaks. To prevent incorrect detection, we introduce a short-term autocorrelation (STAC) technique and a small-window autocorrelation (SWAC) technique, which leverages the similarity of QRS complex waveforms. Simulation results show that the proposed method improves the noise tolerance of R-peak detection.

  16. Cut contribution to momentum autocorrelation function of an impurity in a classical diatomic chain

    Science.gov (United States)

    Yu, Ming B.

    2018-02-01

    A classical diatomic chain with a mass impurity is studied using the recurrence relations method. The momentum autocorrelation function of the impurity is a sum of contributions from two pairs of resonant poles and three branch cuts. The former results in a cosine function and the latter in acoustic and optical branches. By use of the convolution theorem, analytical expressions for the acoustic and optical branches are derived as even-order Bessel function expansions. The expansion coefficients are integrals of elliptic functions along the real axis for the acoustic branch and along a contour parallel to the imaginary axis for the optical branch, respectively. An integral carried out for the calculation of the optical branch is ∫₀^ϕ dθ/√((1 − r₁² sin²θ)(1 − r₂² sin²θ)) = i g sn⁻¹(sin ϕ), where r₂² > r₁² > 1 and g is a constant.

  17. Artificial fingerprint recognition by using optical coherence tomography with autocorrelation analysis

    Science.gov (United States)

    Cheng, Yezeng; Larin, Kirill V.

    2006-12-01

    Fingerprint recognition is one of the most widely used methods of biometrics. This method relies on the surface topography of a finger and, thus, is potentially vulnerable for spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from the real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed the presence of them at all times. We also demonstrated that an autocorrelation analysis of the OCT images could be potentially used in automatic recognition systems.

  18. Search for neutrino point sources with an all-sky autocorrelation analysis in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Turcati, Andrea; Bernhard, Anna; Coenders, Stefan [TU, Munich (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube Neutrino Observatory is a cubic kilometre scale neutrino telescope located in the Antarctic ice. Its full-sky field of view gives unique opportunities to study the neutrino emission from the Galactic and extragalactic sky. Recently, IceCube found the first signal of astrophysical neutrinos with energies up to the PeV scale, but the origin of these particles still remains unresolved. Given the observed flux, the absence of observations of bright point-sources is explainable with the presence of numerous weak sources. This scenario can be tested using autocorrelation methods. We present here the sensitivities and discovery potentials of a two-point angular correlation analysis performed on seven years of IceCube data, taken between 2008 and 2015. The test is applied on the northern and southern skies separately, using the neutrino energy information to improve the effectiveness of the method.

  19. How cosmic microwave background correlations at large angles relate to mass autocorrelations in space

    Science.gov (United States)

    Blumenthal, George R.; Johnston, Kathryn V.

    1994-01-01

    The Sachs-Wolfe effect is known to produce large angular scale fluctuations in the cosmic microwave background radiation (CMBR) due to gravitational potential fluctuations. We show how the angular correlation function of the CMBR can be expressed explicitly in terms of the mass autocorrelation function xi(r) in the universe. We derive analytic expressions for the angular correlation function and its multipole moments in terms of integrals over xi(r) or its second moment, J₃(r), which does not need to satisfy the sort of integral constraint that xi(r) must. We derive similar expressions for bulk flow velocity in terms of xi and J₃. One interesting result that emerges directly from this analysis is that, for all angles theta, there is a substantial contribution to the correlation function from a wide range of distances r and that the radial shape of this contribution does not vary greatly with angle.

  20. Nodule detection methods using autocorrelation features on 3D chest CT scans

    International Nuclear Information System (INIS)

    Hara, T.; Zhou, X.; Okura, S.; Fujita, H.; Kiryu, T.; Hoshi, H.

    2007-01-01

    Lung cancer screening using low dose X-ray CT scan has been an acceptable examination to detect cancers at an early stage. We have been developing an automated detection scheme for lung nodules on CT scan using second-order autocorrelation features, and the initial performance for small nodules (< 10 mm) shows a high true-positive rate with less than four false-positive marks per case. In this study, an open database of lung images, LIDC (Lung Image Database Consortium), was employed to evaluate our detection scheme as a consistency test. The detection performance for solid and solitary nodules in LIDC, included in the first data set opened by the consortium, was an 83% (10/12) true-positive rate with 3.3 false-positive marks per case. (orig.)

  1. Positron-electron autocorrelation function study of E-center in phosphorus-doped silicon

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Biasini, M.; Ferro, G.; Gong, M.

    2004-01-01

    Two-dimensional Fourier-transformed angular correlation of annihilation radiation (2D-FT-ACAR) spectra have been taken for 10¹⁹ cm⁻³ phosphorus-doped Si in the as-grown state and after being subjected to 1.8 MeV e⁻ fluences of 2 × 10¹⁸ cm⁻². In the spectra of the irradiated samples, the zero-crossing points are observed to displace outwards from the Bravais lattice positions. It is suggested that this results from positrons annihilating with electrons in localized orbitals at the defect site. An attempt is made to extract just the component of the defect's positron-electron autocorrelation function that relates to the localized defect orbitals. It is argued that such an extracted real-space function may provide a suitable means for obtaining a mapping of localized defect orbitals. (orig.)

  2. Autocorrelation spectra of an air-fluidized granular system measured by NMR

    Science.gov (United States)

    Lasic, S.; Stepisnik, J.; Mohoric, A.; Sersa, I.; Planinsic, G.

    2006-09-01

    A novel insight into the dynamics of a fluidized granular system is given by a nuclear magnetic resonance method that yields the spin-echo attenuation proportional to the spectrum of the grain positional fluctuation. Measurements of air-fluidized oil-filled spheres and mustard seeds at different degrees of fluidization and grain volume fractions provide a velocity autocorrelation that differs from the commonly anticipated exponential Enskog decay. An empirical formula, which corresponds to a model of grain caging at collisions with adjacent beads, fits the experimental data well. Its parameters are the characteristic collision time, the free path between collisions, and the cage-breaking rate or diffusion-like constant, which decreases with increasing grain volume fraction. Mean-squared displacements calculated from the correlation spectrum clearly show transitions from ballistic, through sub-diffusive, and into diffusive regimes of grain motion.

  3. Processing of pulse oximeter signals using adaptive filtering and autocorrelation to isolate perfusion and oxygenation components

    Science.gov (United States)

    Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.

    2005-03-01

    A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. In processing in situ data, motion artifacts due to increased perfusion can create invalid oxygen saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point in the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygen saturation values can be obtained without the need for large sampling data sets, allowing for near real-time processing. This technique has been shown to be more reliable than traditional techniques and proven to adequately improve the measurement of oxygenation values in varying perfusion states.
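
    A compact sketch of least-mean-squares (LMS) adaptive noise cancellation of the kind described above, using the isosbestic-wavelength channel as the artifact reference; the filter length and step size are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """LMS adaptive filter: estimate the motion artifact in the primary
    (e.g. 660/940 nm) channel from the 810 nm reference and subtract it.
    n_taps and mu are illustrative choices; mu must be small enough for stability."""
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        artifact_est = w @ x
        e = primary[n] - artifact_est       # error = artifact-reduced pulsatile signal
        w += 2.0 * mu * e * x               # LMS weight update
        cleaned[n] = e
    return cleaned
```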

  4. Autocorrelation Study of Solar Wind Plasma and IMF Properties as Measured by the MAVEN Spacecraft

    Science.gov (United States)

    Marquette, Melissa L.; Lillis, Robert J.; Halekas, J. S.; Luhmann, J. G.; Gruesbeck, J. R.; Espley, J. R.

    2018-04-01

    It has long been a goal of the heliophysics community to understand solar wind variability at heliocentric distances other than 1 AU, especially at ~1.5 AU due to not only the steepening of solar wind stream interactions outside 1 AU but also the number of missions available there to measure it. In this study, we use 35 months of solar wind and interplanetary magnetic field (IMF) data taken at Mars by the Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft to conduct an autocorrelation analysis of the solar wind speed, density, and dynamic pressure, which is derived from the speed and density, as well as the IMF strength and orientation. We found that the solar wind speed is coherent, that is, has an autocorrelation coefficient above 1/e, over roughly 56 hr, while the density and pressure are coherent over smaller intervals of roughly 25 and 20 hr, respectively, and that the IMF strength is coherent over time intervals of approximately 20 hr, while the cone and clock angles are considerably less steady but still somewhat coherent up to time lags of roughly 16 hr. We also found that when the speed, density, pressure, or IMF strength is higher than average, the solar wind or IMF becomes uncorrelated more quickly, while when they are below average, it tends to be steadier. This analysis allows us to make estimates of the values of solar wind plasma and IMF parameters when they are not directly measured and provide an approximation of the error associated with that estimate.
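
    A minimal sketch of the coherence-time calculation described above (the first lag at which the autocorrelation coefficient drops below 1/e); the sampling cadence is an illustrative assumption, and the series is assumed evenly sampled with gaps already filled.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Autocorrelation coefficients at lags 0..max_lag for an evenly sampled series
    (mean removed, normalized by the lag-0 value)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def coherence_time(x, dt_hours=1.0, max_lag=200):
    """First lag (in hours) at which the autocorrelation falls below 1/e,
    the coherence criterion quoted in the record above."""
    ac = autocorrelation(x, max_lag)
    below = np.where(ac < np.exp(-1.0))[0]
    return below[0] * dt_hours if below.size else np.nan
```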

  5. Assessing thermal comfort and energy efficiency in buildings by statistical quality control for autocorrelated data

    International Nuclear Information System (INIS)

    Barbeito, Inés; Zaragoza, Sonia; Tarrío-Saavedra, Javier; Naya, Salvador

    2017-01-01

    Highlights: • Intelligent web platform development for energy efficiency management in buildings. • Controlling and supervising thermal comfort and energy consumption in buildings. • Statistical quality control procedure to deal with autocorrelated data. • Open source alternative using R software. - Abstract: In this paper, a case study of performing a reliable statistical procedure to evaluate the quality of HVAC systems in buildings using data retrieved from an ad hoc big data web energy platform is presented. The proposed methodology based on statistical quality control (SQC) is used to analyze the real state of thermal comfort and energy efficiency of the offices of the company FRIDAMA (Spain) in a reliable way. Non-conformities or alarms, and the actual assignable causes of these out-of-control states, are detected. The capability to meet specification requirements is also analyzed. Tools and packages implemented in the open-source R software are employed to apply the different procedures. First, this study proposes to fit ARIMA time series models to CTQ variables. Then, the application of Shewhart and EWMA control charts to the time series residuals is proposed to control and monitor thermal comfort and energy consumption in buildings. Once thermal comfort and consumption variability are estimated, the implementation of capability indexes for autocorrelated variables is proposed to calculate the degree to which standard specifications are met. According to the case study results, the proposed methodology detected real anomalies in the HVAC installation, helping to detect assignable causes and to make appropriate decisions. One of the goals is to perform and describe this statistical procedure step by step so that practitioners can replicate it.
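
    A minimal sketch of the general procedure described above (fit a time-series model to the autocorrelated variable, then monitor the residuals with a Shewhart-style chart). The paper uses R; this stand-in uses Python's statsmodels, and the ARIMA order and control-limit constant are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def residual_control_chart(series, order=(1, 0, 1), k_sigma=3.0):
    """Fit an ARIMA model to an autocorrelated CTQ series and flag observations
    whose residuals fall outside +/- k_sigma limits (an individuals chart on
    the residuals, in the spirit of the SQC procedure summarized above)."""
    fit = ARIMA(series, order=order).fit()
    resid = np.asarray(fit.resid)
    center, sigma = resid.mean(), resid.std(ddof=1)
    lcl, ucl = center - k_sigma * sigma, center + k_sigma * sigma
    alarms = np.where((resid < lcl) | (resid > ucl))[0]  # indices of out-of-control points
    return fit, (lcl, center, ucl), alarms
```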

  6. A systematic review and meta-analyses show that carbapenem use and medical devices are the leading risk factors for carbapenem-resistant Pseudomonas aeruginosa

    NARCIS (Netherlands)

    A.F. Voor (Anne); J.A. Severin (Juliëtte); E.M.E.H. Lesaffre (Emmanuel); M.C. Vos (Margreet)

    2014-01-01

    A systematic review and meta-analyses were performed to identify the risk factors associated with carbapenem-resistant Pseudomonas aeruginosa and to identify sources and reservoirs for the pathogen. A systematic search of PubMed and Embase databases from 1 January 1987 until 27 January

  7. Factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition: Exploratory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2016-08-01

    The factor structure of the 16 Primary and Secondary subtests of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample was examined with exploratory factor analytic methods (EFA) not included in the WISC-V Technical and Interpretive Manual (Wechsler, 2014b). Factor extraction criteria suggested 1 to 4 factors and results favored 4 first-order factors. When this structure was transformed with the Schmid and Leiman (1957) orthogonalization procedure, the hierarchical g-factor accounted for large portions of total and common variance while the 4 first-order factors accounted for small portions of total and common variance; rendering interpretation at the factor index level less appropriate. Although the publisher favored a 5-factor model where the Perceptual Reasoning factor was split into separate Visual Spatial and Fluid Reasoning dimensions, no evidence for 5 factors was found. It was concluded that the WISC-V provides strong measurement of general intelligence and clinical interpretation should be primarily, if not exclusively, at that level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
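
    A small illustrative sketch of an exploratory factor analysis broadly in the spirit of the record above, using scikit-learn's FactorAnalysis with varimax rotation as a stand-in for the EFA software used in the paper; the data matrix here is random placeholder data, not WISC-V scores, and the Schmid-Leiman orthogonalization step is omitted.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder data: rows = examinees, columns = the 16 primary/secondary subtests.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))

# Extract four first-order factors, the solution favored by the EFA described above.
fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(X)
loadings = fa.components_.T                 # subtests x factors pattern matrix
variance_by_factor = (loadings ** 2).sum(axis=0)  # common variance attributable to each factor
print(np.round(loadings, 2))
print(np.round(variance_by_factor, 2))
```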

  8. Measurement of factors that negatively influence the outcome of quitting smoking among patients with COPD: psychometric analyses of the Try To Quit Smoking instrument.

    Science.gov (United States)

    Lundh, Lena; Alinaghizadeh, Hassan; Törnkvist, Lena; Gilljam, Hans; Galanti, Maria Rosaria

    2014-12-01

    To test internal consistency and factor structure of a brief instrument called Trying to Quit smoking. The most effective treatment for patients with chronic obstructive pulmonary disease is to quit smoking. Constant thoughts about quitting and repeated quit attempts can generate destructive feelings and make it more difficult to quit. Development and psychometric testing of the Trying to Quit smoking scale. The Trying to Quit smoking, an instrument designed to assess pressure-filled states of mind and corresponding pressure-relief strategies, was tested among 63 Swedish patients with chronic obstructive pulmonary disease. Among these, the psychometric properties of the instrument were analysed by Exploratory Factor Analyses. Fourteen items were included in the factor analyses, loading on three factors labelled: (1) development of pressure-filled mental states; (2) use of destructive pressure-relief strategies; and (3) ambivalent thoughts when trying to quit smoking. These three factors accounted for more than 80% of the variance, performed well on the Kaiser-Meyer-Olkin (KMO) test and had high internal consistency.

  9. Measuring Women's Empowerment in Sub-Saharan Africa: Exploratory and Confirmatory Factor Analyses of the Demographic and Health Surveys

    Directory of Open Access Journals (Sweden)

    Ibitola O. Asaolu

    2018-06-01

    Full Text Available Background: Women's status and empowerment influence health, nutrition, and socioeconomic status of women and their children. Despite its benefits, however, research on women's empowerment in Sub-Saharan Africa (SSA) is limited in scope and geography. Empowerment is variably defined and data for comparison across regions is often limited. The objective of the current study was to identify domains of empowerment from a widely available data source, Demographic and Health Surveys, across multiple regions in SSA. Methods: Demographic and Health Surveys from nineteen countries representing four African regions were used for the analysis. A total of 26 indicators across different dimensions (economic, socio-cultural, education, and health) were used to characterize women's empowerment. Pooled data from all countries were randomly divided into two datasets—one for exploratory factor analysis (EFA) and the other for Confirmatory Factor Analysis (CFA)—to verify the factor structure hypothesized during EFA. Results: Four factors including attitudes toward violence, labor force participation, education, and access to healthcare were found to define women's empowerment in Central, Southern, and West Africa. However, in East Africa, only three factors were relevant: attitudes toward violence, access to healthcare ranking, and labor force participation. There was limited evidence to support household decision-making, life course, or legal status domains as components of women's empowerment. Conclusion: This foremost study advances scholarship on women's empowerment by providing a validated measure of women's empowerment for researchers and other stakeholders in health and development.

  10. Structural validity of the Wechsler Intelligence Scale for Children-Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2017-04-01

    The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Pedestrian road traffic injuries in urban Peruvian children and adolescents: case control analyses of personal and environmental risk factors.

    Directory of Open Access Journals (Sweden)

    Joseph Donroe

    2008-09-01

    Full Text Available Child pedestrian road traffic injuries (RTIs) are an important cause of death and disability in poorer nations; however, RTI prevention strategies in those countries largely draw upon studies conducted in wealthier countries. This research investigated personal and environmental risk factors for child pedestrian RTIs relevant to an urban, developing world setting. This is a case control study of personal and environmental risk factors for child pedestrian RTIs in San Juan de Miraflores, Lima, Perú. The analysis of personal risk factors included 100 cases of serious pedestrian RTIs and 200 age and gender matched controls. Demographic, socioeconomic, and injury data were collected. The environmental risk factor study evaluated vehicle and pedestrian movement and infrastructure at the sites in which 40 of the above case RTIs occurred and 80 control sites. After adjustment, factors associated with increased risk of child pedestrian RTIs included high vehicle volume (OR 7.88, 95% CI 1.97-31.52), absent lane demarcations (OR 6.59, 95% CI 1.65-26.26), high vehicle speed (OR 5.35, 95% CI 1.55-18.54), high street vendor density (OR 1.25, 95% CI 1.01-1.55), and more children living in the home (OR 1.25, 95% CI 1.00-1.56). Protective factors included more hours/day spent in school (OR 0.52, 95% CI 0.33-0.82) and years of family residence in the same home (OR 0.97, 95% CI 0.95-0.99). Reducing traffic volumes and speeds, limiting the number of street vendors on a given stretch of road, and improving lane demarcation should be evaluated as components of child pedestrian RTI interventions in poorer countries.

  12. Temporal trend and climate factors of hemorrhagic fever with renal syndrome epidemic in Shenyang City, China

    Directory of Open Access Journals (Sweden)

    Liu Xiaodong

    2011-12-01

    Full Text Available Abstract Background Hemorrhagic fever with renal syndrome (HFRS) is an important infectious disease caused by different species of hantaviruses. As a rodent-borne disease with a seasonal distribution, external environmental factors including climate factors may play a significant role in its transmission. The city of Shenyang is one of the most seriously endemic areas for HFRS. Here, we characterized the dynamic temporal trend of HFRS, and identified climate-related risk factors and their roles in HFRS transmission in Shenyang, China. Methods The annual and monthly cumulative numbers of HFRS cases from 2004 to 2009 were calculated and plotted to show the annual and seasonal fluctuation in Shenyang. Cross-correlation and autocorrelation analyses were performed to detect the lagged effect of climate factors on HFRS transmission and the autocorrelation of monthly HFRS cases. Principal component analysis was constructed by using climate data from 2004 to 2009 to extract principal components of climate factors to reduce co-linearity. The extracted principal components and autocorrelation terms of monthly HFRS cases were added into a multiple regression model called the principal components regression model (PCR) to quantify the relationship between climate factors, autocorrelation terms and transmission of HFRS. The PCR model was compared to a general multiple regression model conducted only with climate factors as independent variables. Results A distinctly declining temporal trend of annual HFRS incidence was identified. HFRS cases were reported every month, and the two peak periods occurred in spring (March to May) and winter (November to January), during which nearly 75% of the HFRS cases were reported. Three principal components were extracted with a cumulative contribution rate of 86.06%. Component 1 represented MinRH0, MT1, RH1, and MWV1; component 2 represented RH2, MaxT3, and MAP3; and component 3 represented MaxT2, MAP2, and MWV2. The PCR model
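
    A minimal sketch of a principal components regression of the kind described above: standardize the (lagged) climate variables, reduce them to a few orthogonal components, then regress monthly case counts on the components plus a lagged-case autocorrelation term. The array layout, number of components, and autoregressive lag are illustrative assumptions, not the study's exact specification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pcr_fit(climate, cases, n_components=3, ar_lag=1):
    """Principal components regression sketch.
    climate: (months x variables) array of lagged climate factors;
    cases:   length-months array of monthly HFRS counts."""
    climate = np.asarray(climate, dtype=float)
    z = (climate - climate.mean(axis=0)) / climate.std(axis=0)   # standardize to remove scale effects
    pcs = PCA(n_components=n_components).fit_transform(z)        # orthogonal components remove co-linearity
    y = np.asarray(cases, dtype=float)
    X = np.column_stack([pcs[ar_lag:], y[:-ar_lag]])             # components + autocorrelation term
    return LinearRegression().fit(X, y[ar_lag:])
```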

  13. Comparative genomic and functional analyses: unearthing the diversity and specificity of nematicidal factors in Pseudomonas putida strain 1A00316

    Science.gov (United States)

    Guo, Jing; Jing, Xueping; Peng, Wen-Lei; Nie, Qiyu; Zhai, Yile; Shao, Zongze; Zheng, Longyu; Cai, Minmin; Li, Guangyu; Zuo, Huaiyu; Zhang, Zhitao; Wang, Rui-Ru; Huang, Dian; Cheng, Wanli; Yu, Ziniu; Chen, Ling-Ling; Zhang, Jibin

    2016-01-01

    We isolated Pseudomonas putida (P. putida) strain 1A00316 from Antarctica. This bacterium is highly effective against Meloidogyne incognita (M. incognita) in vitro and under greenhouse conditions. The complete genome of P. putida 1A00316 was sequenced using PacBio single molecule real-time (SMRT) technology. A comparative genomic analysis of 16 Pseudomonas strains revealed that although P. putida 1A00316 belonged to P. putida, it was phenotypically more similar to nematicidal Pseudomonas fluorescens (P. fluorescens) strains. We characterized the diversity and specificity of nematicidal factors in P. putida 1A00316 with comparative genomics and functional analysis, and found that P. putida 1A00316 has diverse nematicidal factors including the protein alkaline metalloproteinase AprA and two secondary metabolites, hydrogen cyanide and cyclo-(l-isoleucyl-l-proline). We show for the first time that cyclo-(l-isoleucyl-l-proline) exhibits nematicidal activity in P. putida. Interestingly, our study did not detect common nematicidal factors such as 2,4-diacetylphloroglucinol (2,4-DAPG) and pyrrolnitrin in P. putida 1A00316. The results of the present study reveal the diversity and specificity of nematicidal factors in P. putida strain 1A00316. PMID:27384076

  14. SNP analyses of growth factor genes EGF, TGFβ-1, and HGF reveal haplotypic association of EGF with autism

    Energy Technology Data Exchange (ETDEWEB)

    Toyoda, Takao; Thanseem, Ismail; Kawai, Masayoshi; Sekine, Yoshimoto [Department of Psychiatry and Neurology, Hamamatsu University School of Medicine, Hamamatsu 431-3192 (Japan); Nakamura, Kazuhiko; Anitha, Ayyappan; Suda, Shiro [Department of Psychiatry and Neurology, Hamamatsu University School of Medicine, Hamamatsu 431-3192 (Japan); Yamada, Kazuo [Laboratory of Molecular Psychiatry, RIKEN Brain Science Institute, Saitama (Japan); Tsujii, Masatsugu [Faculty of Sociology, Chukyo University, Toyota, Aichi (Japan); [The Osaka-Hamamatsu Joint Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu (Japan); Iwayama, Yoshimi; Hattori, Eiji; Toyota, Tomoko; Yoshikawa, Takeo [Laboratory of Molecular Psychiatry, RIKEN Brain Science Institute, Saitama (Japan); Miyachi, Taishi; Tsuchiya, Kenji; Sugihara, Gen-ichi; Matsuzaki, Hideo [The Osaka-Hamamatsu Joint Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu (Japan); Iwata, Yasuhide; Suzuki, Katsuaki [Department of Psychiatry and Neurology, Hamamatsu University School of Medicine, Hamamatsu 431-3192 (Japan); Mori, Norio [Department of Psychiatry and Neurology, Hamamatsu University School of Medicine, Hamamatsu 431-3192 (Japan); [The Osaka-Hamamatsu Joint Research Center for Child Mental Development, Graduate School of Medicine, Osaka University (Japan); Ouchi, Yasuomi [The Osaka-Hamamatsu Joint Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu (Japan); [The Positron Medical Center, Hamamatsu Medical Center, Hamamatsu (Japan); Sugiyama, Toshiro [Aichi Children' s Health and Medical Center, Obu, Aichi (Japan); Takei, Nori [The Osaka-Hamamatsu Joint Research Center for Child Mental Development, Hamamatsu University School of Medicine, Hamamatsu (Japan)

    2007-09-07

    Autism is a pervasive neurodevelopmental disorder diagnosed in early childhood. Growth factors have been found to play a key role in the cellular differentiation and proliferation of the central and peripheral nervous systems. Epidermal growth factor (EGF) is detected in several regions of the developing and adult brain, where it enhances the differentiation, maturation, and survival of a variety of neurons. Transforming growth factor-β (TGFβ) isoforms play an important role in neuronal survival, and the hepatocyte growth factor (HGF) has been shown to exhibit neurotrophic activity. We examined the association of EGF, TGFβ1, and HGF genes with autism, in a trio association study, using DNA samples from families recruited to the Autism Genetic Resource Exchange; 252 trios with a male offspring scored for autism were selected for the study. Transmission disequilibrium test revealed significant haplotypic association of EGF with autism. No significant SNP or haplotypic associations were observed for TGFβ1 or HGF. Given the role of EGF in brain and neuronal development, we suggest a possible role of EGF in the pathogenesis of autism.

  15. [Prevalence and factors associated with intimate partner abuse in female users of public health services in Mexico: a comparative analysis].

    Science.gov (United States)

    Ávila-Burgos, Leticia; Valdez-Santiagob, Rosario; Barroso-Quiab, Abigail; Híjar, Martha; Rojas, Rosalba; Del Río-Zolezzi, Aurora

    2014-01-01

    To analyze the evolution of the prevalence of intimate partner violence between 2003 and 2006 in Mexico, identifying factors associated with its severity and comparing our results with findings from 2003. Data from the Encuesta Nacional de Violencia contra las Mujeres (ENVIM 2006) were used; it has urban-rural national representation of female users of Mexican public health services. A total of 22,318 women above 14 years of age were interviewed. A multinomial logistic regression model was fitted. The dependent variable was the Index of Intimate Partner Abuse. Intimate partner abuse increased 17% in comparison to 2003. Women's personal history of childhood abuse (ORA = 5.12, 95% CI 4.15-6.30) and rape (ORA = 3.5, 95% CI 2.66-4.62) were the women's factors most strongly associated with severe violence. A male partner's daily alcohol consumption increased the likelihood of severe violence elevenfold; greater disagreement with traditional female gender roles and higher education of both partners were protective factors. Factors associated with violence and its severity were consistent with findings reported in 2003. Intimate partner violence is a highly prevalent social problem that requires comprehensive strategies supporting the empowerment of women through higher education, early detection and care of those battered, as well as structured interventions to prevent violence in future generations.

  16. SNP analyses of growth factor genes EGF, TGFβ-1, and HGF reveal haplotypic association of EGF with autism

    International Nuclear Information System (INIS)

    Toyoda, Takao; Nakamura, Kazuhiko; Yamada, Kazuo; Thanseem, Ismail; Anitha, Ayyappan; Suda, Shiro; Tsujii, Masatsugu; Iwayama, Yoshimi; Hattori, Eiji; Toyota, Tomoko; Miyachi, Taishi; Iwata, Yasuhide; Suzuki, Katsuaki; Matsuzaki, Hideo; Kawai, Masayoshi; Sekine, Yoshimoto; Tsuchiya, Kenji; Sugihara, Gen-ichi; Ouchi, Yasuomi; Sugiyama, Toshiro; Takei, Nori; Yoshikawa, Takeo; Mori, Norio

    2007-01-01

    Autism is a pervasive neurodevelopmental disorder diagnosed in early childhood. Growth factors have been found to play a key role in the cellular differentiation and proliferation of the central and peripheral nervous systems. Epidermal growth factor (EGF) is detected in several regions of the developing and adult brain, where it enhances the differentiation, maturation, and survival of a variety of neurons. Transforming growth factor-β (TGFβ) isoforms play an important role in neuronal survival, and the hepatocyte growth factor (HGF) has been shown to exhibit neurotrophic activity. We examined the association of EGF, TGFβ1, and HGF genes with autism, in a trio association study, using DNA samples from families recruited to the Autism Genetic Resource Exchange; 252 trios with a male offspring scored for autism were selected for the study. Transmission disequilibrium test revealed significant haplotypic association of EGF with autism. No significant SNP or haplotypic associations were observed for TGFβ1 or HGF. Given the role of EGF in brain and neuronal development, we suggest a possible role of EGF in the pathogenesis of autism

  17. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 2: robustness of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples
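
    A minimal scipy sketch of the increasingly complex pattern tests named above for a single scatterplot (linear, monotonic, central-tendency, and randomness checks); the quantile binning and bin count are illustrative assumptions, and the chi-square step assumes every row and column of the coarse grid is occupied.

```python
import numpy as np
from scipy import stats

def scatterplot_tests(x, y, n_bins=5):
    """Apply a sequence of pattern tests to one (x, y) scatterplot
    from a Monte Carlo sensitivity analysis."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    results = {
        "linear (Pearson r)": stats.pearsonr(x, y),
        "monotonic (Spearman rho)": stats.spearmanr(x, y),
    }
    # Trend in central tendency: Kruskal-Wallis across quantile bins of x
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(edges[:-1], edges[1:])]
    results["central tendency (Kruskal-Wallis)"] = stats.kruskal(*groups)
    # Deviation from randomness: chi-square test on a coarse 2-D occupancy grid
    counts, _, _ = np.histogram2d(x, y, bins=n_bins)
    chi2, p, dof, _ = stats.chi2_contingency(counts)
    results["non-randomness (chi-square)"] = (chi2, p, dof)
    return results
```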

  18. Evaluating the coefficients of autocorrelation in a series of annual run-off of the Far East rivers

    Energy Technology Data Exchange (ETDEWEB)

    Sakharyuk, A V

    1981-01-01

    An evaluation is made of the autocorrelation coefficients in series of annual river run-off, based on group analysis using data on the distribution law of sampling correlation coefficients for temporal series that follow a Pearson type III distribution.

  19. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors.

    Science.gov (United States)

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.

  20. Building-related symptoms among U.S. office workers and risk factors for moisture and contamination: Preliminary analyses of U.S. EPA BASE Data

    Energy Technology Data Exchange (ETDEWEB)

    Mendell, Mark J.; Cozen, Myrna

    2002-09-01

    The authors assessed relationships between health symptoms in office workers and risk factors related to moisture and contamination, using data collected from a representative sample of U.S. office buildings in the U.S. EPA BASE study. Methods: Analyses assessed associations between three types of weekly, work-related symptoms (lower respiratory, mucous membrane, and neurologic) and risk factors for moisture or contamination in these office buildings. Multivariate logistic regression models were used to estimate the strength of associations for these risk factors as odds ratios (ORs) adjusted for personal-level potential confounding variables related to demographics, health, job, and workspace. A number of risk factors were significantly associated (95% confidence limits excluded 1.0) with small to moderate increases in one or more symptom outcomes. Significantly elevated ORs for mucous membrane symptoms were associated with the following risk factors: presence of a humidification system in good condition versus none (OR = 1.4); air handler inspection annually versus daily (OR = 1.6); current water damage in the building (OR = 1.2); and less than daily vacuuming in the study space (OR = 1.2). Significantly elevated ORs for lower respiratory symptoms were associated with: air handler inspection annually versus daily (OR = 2.0); air handler inspection less than daily but at least semi-annually (OR = 1.6); less than daily cleaning of offices (OR = 1.7); and less than daily vacuuming of the study space (OR = 1.4). Only two statistically significant risk factors for neurologic symptoms were identified: presence of any humidification system versus none (OR = 1.3); and less than daily vacuuming of the study space (OR = 1.3). Dirty cooling coils, dirty or poorly draining drain pans, and standing water near outdoor air intakes, evaluated by inspection, were not identified as risk factors in these analyses, despite predictions based on previous findings elsewhere, except that very

  1. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    Science.gov (United States)

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r² = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r² ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r² ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than
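
    A simplified sketch of the general idea behind the image-autocorrelation approach cited above (Rubin, 2004): compute the image autocorrelation as a function of pixel offset and match the resulting correlogram against calibration correlograms of samples with known grain size. The nearest-neighbour matching below is a simplified stand-in for the published method, which interpolates between calibration curves.

```python
import numpy as np

def correlogram(img, max_offset=30):
    """Spatial autocorrelation of a grayscale sediment image versus horizontal
    pixel offset, normalized by the zero-offset value."""
    img = np.asarray(img, dtype=float)
    img = img - img.mean()
    var = (img * img).mean()
    # ':-k or None' keeps the full width at k = 0
    return np.array([(img[:, :-k or None] * img[:, k:]).mean() / var
                     for k in range(max_offset + 1)])

def estimate_size(img, calib_curves, calib_sizes, max_offset=30):
    """Return the calibration grain size whose correlogram is closest (in a
    least-squares sense) to the correlogram of the test image."""
    c = correlogram(img, max_offset)
    dists = [np.sum((c - cc) ** 2) for cc in calib_curves]
    return calib_sizes[int(np.argmin(dists))]
```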

  2. Bioinformatics Analyses of the Role of Vascular Endothelial Growth Factor in Patients with Non-Small Cell Lung Cancer.

    Directory of Open Access Journals (Sweden)

    Ying Wang

    Full Text Available This study aimed to identify the expression pattern of vascular endothelial growth factor (VEGF) in non-small cell lung cancer (NSCLC) and to explore its potential correlation with the progression of NSCLC. Gene expression profile GSE39345 was downloaded from the Gene Expression Omnibus database. Twenty healthy controls and 32 NSCLC samples before chemotherapy were analyzed to identify the differentially expressed genes (DEGs). Then pathway enrichment analysis of the DEGs was performed and protein-protein interaction networks were constructed. In particular, VEGF genes and the VEGF signaling pathway were analyzed. The sub-network was constructed, followed by functional enrichment analysis. In total, 1666 up-regulated and 1542 down-regulated DEGs were identified. The down-regulated DEGs were mainly enriched in pathways associated with cancer. VEGFA and VEGFB were found to be the initiating factors of the VEGF signaling pathway. In addition, in the epidermal growth factor receptor (EGFR)-, VEGFA-, and VEGFB-associated sub-network, kinase insert domain receptor (KDR), fibronectin 1 (FN1), transforming growth factor beta induced (TGFBI), and proliferating cell nuclear antigen (PCNA) were found to interact with at least two of the three hub genes. The DEGs in this sub-network were mainly enriched in Gene Ontology terms related to cell proliferation. EGFR, KDR, FN1, TGFBI and PCNA may interact with VEGFA to play important roles in NSCLC tumorigenesis. These genes and corresponding proteins may have the potential to be used as targets for either diagnosis or treatment of patients with NSCLC.

  3. Food intake patterns and cardiovascular risk factors in Japanese adults: analyses from the 2012 National Health and Nutrition Survey, Japan

    OpenAIRE

    Htun, Nay Chi; Suga, Hitomi; Imai, Shino; Shimizu, Wakana; Takimoto, Hidemi

    2017-01-01

    Background There is an increasing global interest in the role of Japanese diet as a possible explanation for the nation's healthy diet, which contributes to the world's highest life-expectancy enjoyed in Japan. However, nationwide studies on current food intake status among general Japanese population have not been established yet. This study examined the association between food intake patterns and cardiovascular risk factors (CVRF) such as waist circumference (WC), body mass index (BMI), bl...

  4. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

    Directory of Open Access Journals (Sweden)

    Abdelfattah M. Selim

    2018-03-01

    Full Text Available Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), and univariable and multivariable LR models. Results: Results revealed a non-significant association between being seropositive with BVDV and all risk factors, except for species of animal. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR for cattle relative to buffaloes, which was 2.237. Likelihood ratio tests showed a significant drop of the -2LL from univariable LR to multivariable LR models. Conclusion: There was evidence of high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was shown to provide more information for association and prediction purposes relative to univariable LR models and Chi-square tests if we have more than one predictor.
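
    A minimal statsmodels sketch of the univariable versus multivariable logistic-regression comparison described above; the DataFrame and column names (seropositive, species, age, sex, location) are hypothetical placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_logit(df, outcome, predictors):
    """Logistic regression of seropositivity on one (univariable) or several
    (multivariable) risk factors; returns the fit and odds ratios."""
    X = sm.add_constant(pd.get_dummies(df[predictors], drop_first=True).astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    return fit, np.exp(fit.params)   # exponentiated coefficients = odds ratios

# Hypothetical usage: one univariable model per factor, then one multivariable model.
# uni = {p: fit_logit(df, "seropositive", [p]) for p in ["species", "age", "sex", "location"]}
# multi = fit_logit(df, "seropositive", ["species", "age", "sex", "location"])
# Drop in -2 log-likelihood between nested models: 2 * (multi[0].llf - uni["species"][0].llf)
```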

  5. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2015-07-01

    Full Text Available Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands.

  6. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors.

    Science.gov (United States)

    Pham, Tuyen Danh; Park, Young Ho; Nguyen, Dat Tien; Kwon, Seung Yong; Park, Kang Ryoung

    2015-07-13

    Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands.

  7. Factors associated with patients' choice of physician in the Korean population: Database analyses of a tertiary hospital.

    Directory of Open Access Journals (Sweden)

    Kidong Kim

    Full Text Available This study aimed to determine the factors influencing patients' choice of physician at the first visit through database analysis of a tertiary hospital in South Korea. We collected data on the first treatments performed by physicians who had treated patients for at least 3 consecutive years over 10 years (from 2003 to 2012) from the database of Seoul National University's affiliated tertiary hospital. Ultimately, we obtained data on 524,012 first treatments of 319,004 patients performed by 115 physicians. Variables including physicians' age and medical school and patients' age were evaluated as influencing factors for the number of first treatments performed by each physician in each year using a Poisson regression through generalized estimating equations with a log link. The number of first treatments decreased over the study period. Notably, the relative risk for first treatments was lower among older physicians than among younger physicians (relative risk 0.96; 95% confidence interval 0.95 to 0.98). Physicians graduating from Seoul National University (SNU) also had a higher risk for performing first treatments than did those not from SNU (relative risk 1.58; 95% confidence interval 1.18 to 2.10). Finally, relative risk was also higher among older patients than among younger patients (relative risk 1.03; 95% confidence interval 1.01 to 1.04). This study systematically demonstrated that physicians' age, whether the physician graduated from the highest-quality university, and patients' age all related to patients' choice of physician at the first visit in a tertiary university hospital. These findings might be due to Korean cultural factors.
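
    A minimal sketch of a Poisson regression fitted through generalized estimating equations with a log link, as described above, using statsmodels; the DataFrame and column names (n_first_treatments, physician_age, from_snu, mean_patient_age, year, physician_id) are hypothetical placeholders.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_poisson_gee(df):
    """Poisson GEE with a log link: counts of first treatments per physician-year,
    clustered on physician, with physician- and patient-level covariates.
    Column names are illustrative placeholders, not the study's variables."""
    model = smf.gee(
        "n_first_treatments ~ physician_age + from_snu + mean_patient_age + year",
        groups="physician_id",
        data=df,
        family=sm.families.Poisson(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    fit = model.fit()
    # Exponentiated coefficients give relative risks of the kind quoted above.
    return fit
```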

  8. Analyses of Helsinki 2012 European Athletics Championships injury and illness surveillance to discuss elite athletes' risk factors.

    Science.gov (United States)

    Edouard, Pascal; Depiesse, Frédéric; Branco, Pedro; Alonso, Juan-Manuel

    2014-09-01

    To further analyze newly incurred injuries and illnesses (I&Is) during Athletics International Championships to discuss risk factors. Prospective recording of newly occurred injuries and illnesses. The 2012 European Athletics (EA) Championships in Helsinki, Finland. National team and local organizing committee physicians and physiotherapists and 1342 registered athletes. Incidence and characteristics of new injuries and illnesses. Ninety-three percent of athletes were covered by medical teams, with a response rate of 91%. One hundred thirty-three injuries were reported (incidence of 98.4 injuries per 1000 registered athletes). Sixty-two injuries (47%) resulted in time loss from sport. The most common diagnosis was hamstring strain (11.4% of injuries and 21% of time-loss injuries). Injury risk was higher in males and increased with age. The highest incidences of injuries were found in combined events and middle- and long-distance events. Twenty-seven illnesses were reported (4.0 illnesses per 1000 athlete days). The most common diagnoses were upper respiratory tract infection (33.3%) and gastroenteritis/diarrhea (25.9%). During outdoor EA Championships, injury and illness incidences were slightly lower and injury characteristics were comparable with those during outdoor World Athletics Championships. During elite athletics Championships, gender (male), age (older than 30 years), finals, and some events (combined events and middle- and long-distance races) seem to be injury risk factors. Illness risk factors remain unclear. As in previous recommendations, preventive interventions should focus on overuse injuries, hamstring strains, and adequate rehabilitation of previous injuries, decreasing risk of infectious diseases transmission, appropriate event scheduling, sports clothes, and heat acclimatization.

  9. Epidemiologic Analyses of Risk Factors for Bone Loss and Recovery Related to Long-Duration Space Flight

    Science.gov (United States)

    Sibonga, Jean; Amin, Shreyasee

    2010-01-01

    AIM 1: To investigate the risk of microgravity exposure on long-term changes in bone health and fracture risk. compare data from crew members ("observed") with what would be "expected" from Rochester Bone Health Study. AIM 2: To provide a summary of current evidence available on potential risk factors for bone loss, recovery & fracture following long-duration space flight. integrative review of all data pre, in-, and post-flight across disciplines (cardiovascular, nutrition, muscle, etc.) and their relation to bone loss and recovery

  10. INCORPORATION OF HUMAN FACTORS ENGINEERING ANALYSES AND TOOLS INTO THE DESIGN PROCESS FOR DIGITAL CONTROL ROOM UPGRADES

    International Nuclear Information System (INIS)

    O'HARA, J.M.; BROWN, W.

    2004-01-01

    Many nuclear power plants are modernizing with digital instrumentation and control systems and computer-based human-system interfaces (HSIs). The purpose of this paper is to summarize the human factors engineering (HFE) activities that can help to ensure that the design meets personnel needs. HFE activities should be integrated into the design process as a regular part of the engineering effort of a plant modification. The HFE activities will help ensure that human performance issues are addressed, that new technology supports task performance, and that the HSIs are designed in a manner that is compatible with human physiological, cognitive and social characteristics

  11. Identification of novel risk factors for community-acquired Clostridium difficile infection using spatial statistics and geographic information system analyses.

    Directory of Open Access Journals (Sweden)

    Deverick J Anderson

    Full Text Available The rate of community-acquired Clostridium difficile infection (CA-CDI) is increasing. While receipt of antibiotics remains an important risk factor for CDI, studies related to acquisition of C. difficile outside of hospitals are lacking. As a result, risk factors for exposure to C. difficile in community settings have been inadequately studied. To identify novel environmental risk factors for CA-CDI. We performed a population-based retrospective cohort study of patients with CA-CDI from 1/1/2007 through 12/31/2014 in a 10-county area in central North Carolina. 360 Census Tracts in these 10 counties were used as the demographic Geographic Information System (GIS) base-map. Longitude and latitude (X, Y) coordinates were generated from patient home addresses and overlaid to Census Tract polygons using ArcGIS; ArcView was used to assess "hot-spots" or clusters of CA-CDI. We then constructed a mixed hierarchical model to identify environmental variables independently associated with increased rates of CA-CDI. A total of 1,895 unique patients met our criteria for CA-CDI. The mean patient age was 54.5 years; 62% were female and 70% were Caucasian. 402 (21%) patient addresses were located in "hot spots" or clusters of CA-CDI (p < 0.001). "Hot spot" census tracts were scattered throughout the 10 counties. After adjusting for clustering and population density, age ≥ 60 years (p = 0.03), race (p < 0.001), proximity to a livestock farm (p = 0.01), proximity to farming raw materials services (p = 0.02), and proximity to a nursing home (p = 0.04) were independently associated with increased rates of CA-CDI. Our study is the first to use spatial statistics and mixed models to identify important environmental risk factors for acquisition of C. difficile and adds to the growing evidence that farm practices may put patients at risk for important drug-resistant infections.

  12. On the 2nd order autocorrelation of an XUV attosecond pulse train

    International Nuclear Information System (INIS)

    Tzallas, P.; Benis, E.; Nikolopoulos, L.A.A.; Tsakiris, G.D.; Witte, K.; Charalambidis, P

    2005-01-01

    Full text: We present the first direct measurement of sub-fs light bunching that has been achieved, extending well established fs optical metrology to XUV attosecond (as) pulses. A mean train pulse duration of 780 as has been extracted through a 2nd-order autocorrelation approach, utilizing a nonlinear effect that is induced solely by the XUV radiation to be characterized. The approach is based on (i) a bisected spherical mirror XUV wavefront divider used as an autocorrelator and (ii) the two-photon ionization of atomic He by a superposition of the 7th to the 15th harmonic of a Ti:sapph laser. The measured temporal mean width is more than twice its Fourier transform limited (FTL) value, in contrast to the as train pulse durations measured through other approaches, which were found much closer to the FTL values. We have investigated, and discuss here, the origin of this discrepancy. An assessment of the validity of the 2nd-order AC approach for the broadband XUV radiation of as pulses is implemented through ab initio calculations (solution of the 3D TDSE of He in the presence of the harmonic superposition) modeling the spectral and temporal response of the two-XUV-photon He ionization detector employed. It is found that both the spectral and temporal response are not affecting the measured duration. The mean width of the as train bursts is estimated from the spectral phases of the individual harmonics as they result from the rescattering model, taking into account the spatially modulated temporal width of the radiation due to the spatiotemporal intensity distribution of the driving field during the harmonic generation process. The measured value is found in reasonable agreement with the estimated duration. The method used for the 2nd-order AC in itself initiates further XUV-pump-XUV-probe studies of sub-fs-scale dynamics and at the same time becomes highly pertinent in connection with nonlinear experiments using XUV free-electron laser sources.

  13. Risk-based transfer responses to climate change, simulated through autocorrelated stochastic methods

    Science.gov (United States)

    Kirsch, B.; Characklis, G. W.

    2009-12-01

    Maintaining municipal water supply reliability despite growing demands can be achieved through a variety of mechanisms, including supply strategies such as temporary transfers. However, much of the attention on transfers has been focused on market-based transfers in the western United States largely ignoring the potential for transfers in the eastern U.S. The different legal framework of the eastern and western U.S. leads to characteristic differences between their respective transfers. Western transfers tend to be agricultural-to-urban and involve raw, untreated water, with the transfer often involving a simple change in the location and/or timing of withdrawals. Eastern transfers tend to be contractually established urban-to-urban transfers of treated water, thereby requiring the infrastructure to transfer water between utilities. Utilities require the tools to be able to evaluate transfer decision rules and the resulting expected future transfer behavior. Given the long-term planning horizons of utilities, potential changes in hydrologic patterns due to climate change must be considered. In response, this research develops a method for generating a stochastic time series that reproduces the historic autocorrelation and can be adapted to accommodate future climate scenarios. While analogous in operation to an autoregressive model, this method reproduces the seasonal autocorrelation structure, as opposed to assuming the strict stationarity produced by an autoregressive model. Such urban-to-urban transfers are designed to be rare, transient events used primarily during times of severe drought, and incorporating Monte Carlo techniques allows for the development of probability distributions of likely outcomes. This research evaluates a system risk-based, urban-to-urban transfer agreement between three utilities in the Triangle region of North Carolina. Two utilities maintain their own surface water supplies in adjoining watersheds and look to obtain transfers via
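
    A minimal sketch of a seasonally varying lag-1 generator (Thomas-Fiering style) that reproduces month-to-month autocorrelation, offered as a simplified stand-in for the autocorrelated stochastic method described above; the monthly-mean multiplier used to impose a climate scenario and the standardized-residual formulation are illustrative assumptions.

```python
import numpy as np

def seasonal_lag1_series(monthly_flows, n_years, mean_shift=None, seed=None):
    """Generate synthetic monthly flows reproducing each calendar month's mean,
    standard deviation, and lag-1 correlation with the previous month.
    monthly_flows: (years x 12) array of historical flows.
    mean_shift: optional length-12 multiplier representing a climate scenario."""
    rng = np.random.default_rng(seed)
    q = np.asarray(monthly_flows, dtype=float)
    mu, sd = q.mean(axis=0), q.std(axis=0, ddof=1)
    rho = np.empty(12)
    rho[0] = np.corrcoef(q[:-1, 11], q[1:, 0])[0, 1]   # December-to-January, across years
    for m in range(1, 12):
        rho[m] = np.corrcoef(q[:, m - 1], q[:, m])[0, 1]
    if mean_shift is not None:
        mu = mu * np.asarray(mean_shift, dtype=float)
    out = np.empty(n_years * 12)
    prev_z = rng.standard_normal()
    for i in range(out.size):
        m = i % 12
        z = rho[m] * prev_z + np.sqrt(1.0 - rho[m] ** 2) * rng.standard_normal()
        out[i] = mu[m] + sd[m] * z      # clip at zero if negative flows are not physical
        prev_z = z
    return out
```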

  14. Transcriptome analyses identify five transcription factors differentially expressed in the hypothalamus of post- versus prepubertal Brahman heifers.

    Science.gov (United States)

    Fortes, M R S; Nguyen, L T; Weller, M M D C A; Cánovas, A; Islas-Trejo, A; Porto-Neto, L R; Reverter, A; Lehnert, S A; Boe-Hansen, G B; Thomas, M G; Medrano, J F; Moore, S S

    2016-09-01

    Puberty onset is a developmental process influenced by genetic determinants, environment, and nutrition. Mutations and regulatory gene networks constitute the molecular basis for the genetic determinants of puberty onset. The emerging knowledge of these genetic determinants presents opportunities for innovation in the breeding of early pubertal cattle. This paper presents new data on hypothalamic gene expression related to puberty in Brahman cattle, using age- and weight-matched heifers. Six postpubertal heifers were compared with 6 prepubertal heifers using whole-genome RNA sequencing methodology for quantification of global gene expression in the hypothalamus. Five transcription factors (TF) with potential regulatory roles in the hypothalamus were identified in this experiment. These TF genes were significantly differentially expressed in the hypothalamus of postpubertal versus prepubertal heifers and were also identified as significant by the applied regulatory impact factor metric; they are known to be involved in cancer and developmental processes. Mutations in one of these TF have been associated with puberty in humans. Mutations in these TF, together with other genetic determinants previously discovered, could be used in genomic selection to predict the genetic merit of cattle (i.e., the likelihood of the offspring presenting earlier than average puberty for Brahman). Knowledge of key mutations involved in genetic traits is an advantage for genomic prediction because it can increase its accuracy.

  15. Integration of finite element analysis and design of experiments to analyse the geometrical factors in bi-layered tube hydroforming

    International Nuclear Information System (INIS)

    Alaswad, A.; Olabi, A.G.; Benyounis, K.Y.

    2011-01-01

    Tube hydroforming (THF) is an unconventional metal forming process in which high fluid pressure and axial feed are used to deform a tube blank into the desired shape. Bi-layered tube hydroforming is suitable for producing bi-layered joints used in special applications such as aerospace, oil production, and nuclear power plants. In this work, a finite element study along with response surface methodology (RSM) for design of experiments (DOE) has been used to construct models for three responses, namely bulge height, thickness reduction, and wrinkle height, as functions of geometrical factors for X-shape bi-layered tube hydroforming. A finite element model was built and experimentally validated. The models developed using finite element analysis (FEA) and RSM were found to be adequate. The effects of the factors and their interactions on the three responses were determined and discussed. Such integration proved to be a successful technique that can be used to predict the geometry of the hydroformed part.

  16. Indoor Environmental Risk Factors for Occupant Symptoms in 100 U.S. Office Buildings: Summary of Three Analyses from the EPA BASE Study

    Energy Technology Data Exchange (ETDEWEB)

    Mendell, M.J.; Lei-Gomez, Q.; Cozen, M.; Brightman, H.S.; Apte,M.; Erdmann, C.A.; Brunner, G.; Girman, J.R.

    2006-02-01

    This paper summarizes three analyses of data on building-related environmental factors and occupant symptoms collected from 100 representative large U.S. office buildings. Using multivariate logistic regression models, we found increased occupant symptoms associated with a number of building-related factors, including lower ventilation rates even at the current guideline levels, lack of scheduled cleaning for air-conditioning drain pans and cooling coils, poor condition of cooling coils, poorly maintained humidification systems, and lower outdoor air intake height. Some expected relationships were not found, and several findings were the opposite of what was expected. Although requiring replication, these findings suggest preventive actions to reduce occupant symptoms in office buildings.

  17. Auto-correlation based intelligent technique for complex waveform presentation and measurement

    International Nuclear Information System (INIS)

    Rana, K P S; Singh, R; Sayann, K S

    2009-01-01

    Waveform acquisition and presentation form the heart of many measurement systems. In particular, data acquisition and presentation of repeating complex signals such as sine sweeps and frequency-modulated signals introduce the challenge of waveform time-period estimation and live waveform presentation. This paper presents an intelligent technique, based on the normalized auto-correlation method, for waveform period estimation of both complex and simple waveforms. The proposed technique is demonstrated using intensive LabVIEW-based simulations on several simple and complex waveforms. Implementation of the technique is successfully demonstrated using LabVIEW-based virtual instrumentation. Sine sweep vibration waveforms generated by an electrodynamic shaker system are successfully presented and measured. The proposed method is also suitable for digital storage oscilloscope (DSO) triggering and for the acquisition and presentation of complex signals. This intelligence can be embedded in a DSO, making it an intelligent measurement system catering to a wide variety of waveforms. The proposed technique, simulation results, robustness study, and implementation results are presented in this paper.
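
    The period-estimation idea described above can be illustrated in a few lines: compute the normalized autocorrelation of the sampled waveform and take the lag of its first strong secondary maximum as the waveform period. The sketch below is an assumption-laden illustration (test signal, sampling rate, and peak-picking threshold are made up), not the paper's LabVIEW implementation.

    import numpy as np

    fs = 10_000.0                                        # assumed sampling rate, Hz
    t = np.arange(0, 0.5, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 150 * t)   # complex test waveform, 20 ms period

    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided autocorrelation
    acf /= acf[0]                                        # normalize so that acf[0] == 1

    # Skip the zero-lag lobe, then take the first local maximum above a threshold
    lag = 1
    while lag < acf.size - 1 and acf[lag] > 0:
        lag += 1
    peaks = [k for k in range(lag + 1, acf.size - 1)
             if acf[k] > 0.5 and acf[k] >= acf[k - 1] and acf[k] >= acf[k + 1]]
    period = peaks[0] / fs
    print(f"estimated period: {period * 1e3:.2f} ms (expected 20.00 ms)")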

  18. Simultaneous measurement of particle velocity and size based on gray difference and autocorrelation

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Two images of the same particle taken by a digital camera with different exposure times also differ in gray level. Based on the gray difference of particle images in a double-exposed photo and autocorrelation processing of the digital images, this paper proposes a method for measuring particle velocities and sizes simultaneously. This paper also introduces the theoretical foundation of this method, the process of particle imaging and image processing, and the simultaneous measurement of velocity and size in a low-speed flow field with 35 μm and 75 μm standard particles. The graphical measurement results realistically reflect the characteristics of the flow field. In addition, although the measured velocity and size histograms of these two kinds of standard particles are slightly wider than the theoretical ones, they are still close to normal distributions, and the peak velocities and diameters of the histograms are consistent with the nominal values. Therefore, this measurement method is capable of providing moderate measurement accuracy, and it can be further developed for high-speed flow field measurements.
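
    The displacement part of the method above can be sketched with a synthetic example: the 2D autocorrelation of a double-exposed image of one particle contains, besides the central peak, two symmetric side peaks whose offset from the centre equals the particle displacement between exposures. The image, particle size, and shift below are assumptions for illustration, not the authors' optical setup or code.

    import numpy as np
    from scipy.signal import fftconvolve

    # Synthetic double-exposed frame: the same Gaussian particle imaged twice, the second
    # exposure shifted by (dy, dx) pixels and dimmer (shorter exposure -> lower gray level).
    ny, nx = 128, 128
    yy, xx = np.mgrid[0:ny, 0:nx]

    def particle(cy, cx, amp, sigma=2.0):
        return amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

    true_shift = (6, 9)                                  # assumed displacement in pixels
    img = particle(60, 50, 1.0) + particle(60 + true_shift[0], 50 + true_shift[1], 0.6)
    img -= img.mean()

    # 2D autocorrelation via FFT-based convolution with the flipped image
    acorr = fftconvolve(img, img[::-1, ::-1], mode="full")

    # Mask the central peak (the mask must stay smaller than the displacement), then the
    # strongest remaining peak gives the displacement up to a sign ambiguity.
    cy, cx = ny - 1, nx - 1
    acorr[cy - 4:cy + 5, cx - 4:cx + 5] = -np.inf
    peak = np.unravel_index(np.argmax(acorr), acorr.shape)
    print("estimated displacement (pixels):", (peak[0] - cy, peak[1] - cx))   # +/-(6, 9)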

  19. High-Responsivity Graphene-Boron Nitride Photodetector and Autocorrelator in a Silicon Photonic Integrated Circuit.

    Science.gov (United States)

    Shiue, Ren-Jye; Gao, Yuanda; Wang, Yifei; Peng, Cheng; Robertson, Alexander D; Efetov, Dmitri K; Assefa, Solomon; Koppens, Frank H L; Hone, James; Englund, Dirk

    2015-11-11

    Graphene and other two-dimensional (2D) materials have emerged as promising materials for broadband and ultrafast photodetection and optical modulation. These optoelectronic capabilities can augment complementary metal-oxide-semiconductor (CMOS) devices for high-speed and low-power optical interconnects. Here, we demonstrate an on-chip ultrafast photodetector based on a two-dimensional heterostructure consisting of high-quality graphene encapsulated in hexagonal boron nitride. Coupled to the optical mode of a silicon waveguide, this 2D heterostructure-based photodetector exhibits a maximum responsivity of 0.36 A/W and high-speed operation with a 3 dB cutoff at 42 GHz. From photocurrent measurements as a function of the top-gate and source-drain voltages, we conclude that the photoresponse is consistent with hot electron mediated effects. At moderate peak powers above 50 mW, we observe a saturating photocurrent consistent with the mechanisms of electron-phonon supercollision cooling. This nonlinear photoresponse enables optical on-chip autocorrelation measurements with picosecond-scale timing resolution and exceptionally low peak powers.

  20. RNAseq Analyses Identify Tumor Necrosis Factor-Mediated Inflammation as a Major Abnormality in ALS Spinal Cord.

    Directory of Open Access Journals (Sweden)

    David G Brohawn

    Full Text Available ALS is a rapidly progressive, devastating neurodegenerative illness of adults that produces disabling weakness and spasticity arising from death of lower and upper motor neurons. No meaningful therapies exist to slow ALS progression, and molecular insights into pathogenesis and progression are sorely needed. In that context, we used high-depth, next generation RNA sequencing (RNAseq, Illumina) to define gene network abnormalities in RNA samples depleted of rRNA and isolated from cervical spinal cord sections of 7 ALS and 8 CTL samples. We aligned >50 million 2X150 bp paired-end sequences/sample to the hg19 human genome and applied three different algorithms (Cuffdiff2, DEseq2, EdgeR) for identification of differentially expressed genes (DEGs). Ingenuity Pathways Analysis (IPA) and Weighted Gene Co-expression Network Analysis (WGCNA) identified inflammatory processes as significantly elevated in our ALS samples, with tumor necrosis factor (TNF) found to be a major pathway regulator (IPA) and TNFα-induced protein 2 (TNFAIP2) as a major network "hub" gene (WGCNA). Using the oPOSSUM algorithm, we analyzed transcription factors (TF) controlling expression of the nine DEG/hub genes in the ALS samples and identified TFs involved in inflammation (NFkB, REL, NFkB1) and macrophage function (NR1H2::RXRA heterodimer). Transient expression in human iPSC-derived motor neurons of TNFAIP2 (also a DEG identified by all three algorithms) reduced cell viability and induced caspase 3/7 activation. Using high-density RNAseq, multiple algorithms for DEG identification, and an unsupervised gene co-expression network approach, we identified significant elevation of inflammatory processes in ALS spinal cord with TNF as a major regulatory molecule. Overexpression of the DEG TNFAIP2 in human motor neurons, the population most vulnerable to die in ALS, increased cell death and caspase 3/7 activation. We propose that therapies targeted to reduce inflammatory TNFα signaling may be

  1. Quantitative DNA methylation analyses reveal stage dependent DNA methylation and association to clinico-pathological factors in breast tumors

    International Nuclear Information System (INIS)

    Klajic, Jovana; Tost, Jörg; Kristensen, Vessela N; Fleischer, Thomas; Dejeux, Emelyne; Edvardsen, Hege; Warnberg, Fredrik; Bukholm, Ida; Lønning, Per Eystein; Solvang, Hiroko; Børresen-Dale, Anne-Lise

    2013-01-01

    Aberrant DNA methylation of regulatory genes has frequently been found in human breast cancers and correlated to clinical outcome. In the present study we investigate stage-specific changes in DNA methylation patterns in order to identify valuable markers to understand how these changes affect breast cancer progression. Quantitative DNA methylation analyses of 12 candidate genes (ABCB1, BRCA1, CDKN2A, ESR1, GSTP1, IGF2, MGMT, hMLH1, PPP2R2B, PTEN, RASSF1A and FOXC1) were performed by pyrosequencing in a series of 238 breast cancer tissue samples from DCIS to invasive tumors stage I to IV. Significant differences in methylation levels between the DCIS and invasive stage II tumors were observed for six genes: RASSF1A, CDKN2A, MGMT, ABCB1, GSTP1 and FOXC1. RASSF1A, ABCB1 and GSTP1 showed significantly higher methylation levels in late stage compared to early stage breast carcinoma. Z-score analysis revealed significantly lower methylation levels in DCIS and stage I tumors compared with stage II, III and IV tumors. Methylation levels of PTEN, PPP2R2B, FOXC1, ABCB1 and BRCA1 were lower in tumors harboring TP53 mutations than in tumors with wild-type TP53. Z-score analysis showed that TP53-mutated tumors had significantly lower overall methylation levels compared to tumors with wild-type TP53. Methylation levels of RASSF1A, PPP2R2B, GSTP1 and FOXC1 were higher in ER-positive vs. ER-negative tumors, and methylation levels of PTEN and CDKN2A were higher in HER2-positive vs. HER2-negative tumors. Z-score analysis also showed that HER2-positive tumors had significantly higher z-scores of methylation compared to HER2-negative tumors. Univariate survival analysis identified methylation status of PPP2R2B as a significant predictor of overall survival and breast cancer specific survival. In the present study we report that the level of aberrant DNA methylation is higher in late stage compared with early stage invasive breast cancers and DCIS for the genes mentioned above

  2. Analyses of patterns-of-failure and prognostic factors according to radiation fields in early-stage Hodgkin lymphoma

    International Nuclear Information System (INIS)

    Krebs, Lorraine; Guillerm, Sophie; Menard, Jean; Hennequin, Christophe; Quero, Laurent; Amorin, Sandy; Brice, Pauline

    2017-01-01

    Doses and volumes of radiation therapy (RT) for early stages of Hodgkin lymphoma (HL) have been reduced over the last 30 years. Combined modality therapy (CMT) is currently the standard treatment for most patients with early-stage HL. The aim of this study was to analyze the site of relapse after RT according to the extent of radiation fields. Between 1987 and 2011, 427 patients were treated at our institution with RT ± chemotherapy for stage-I/II HL. Among these, 65 patients who experienced a relapse were retrospectively analyzed. Most patients had nodular sclerosis histology (86 %) and stage-II disease (75.9 %). Bulky disease was present in 21 % and 56 % of patients belonged to the unfavorable risk group according to European Organization for Research and Treatment of Cancer (EORTC)/The Lymphoma Study Association (LYSA) definitions. CMT was delivered to 91 % of patients. All patients received RT with doses ranging from 20 to 45 Gy (mean = 34 ± 5.3 Gy). The involved-field RT technique was used in 59 % of patients. The mean time between diagnosis and relapse was 4.2 years (range 0.3-24.5). Out-of-field relapses were suffered by 53 % of patients. Relapses occurred more frequently at out-of-field sites in patients with a favorable disease status, whereas in-field relapses were associated with bulky mediastinal disease. Relapses occurred later for the favorable compared with the unfavorable risk group (3.5 vs. 2.9 years, p = 0.5). In multivariate analyses, neither RT dose nor RT field size was predictive for an in-field relapse (p = 0.25 and p = 0.8, respectively); only bulky disease was predictive (p = 0.018). In patients with bulky disease, RT dose and RT field size were not predictive for an in-field relapse. In this subgroup of patients, chemotherapy should be intensified. We confirmed the poor prognosis of early relapses. (orig.)

  3. Possible Factors Promoting Car Evacuation in the 2011 Tohoku Tsunami Revealed by Analysing a Large-Scale Questionnaire Survey in Kesennuma City

    Directory of Open Access Journals (Sweden)

    Fumiyasu Makinoshima

    2017-11-01

    Full Text Available Excessive car evacuation can cause severe traffic jams that can lead to large numbers of casualties during tsunami disasters. Investigating the possible factors that lead to unnecessary car evacuation can help ensure smoother tsunami evacuations and mitigate casualties in future tsunami events. In this study, we quantitatively investigated the possible factors that promote car evacuation, including both necessary and unnecessary usages, by statistically analysing a large amount of data on actual tsunami evacuation behaviours surveyed in Kesennuma, where devastating damage occurred during the 2011 Tohoku Tsunami. A straightforward statistical analysis revealed a high percentage of car evacuations (approx. 50%); however, this fraction includes a high number of unnecessary usage events, which were distinguished based on mode choice reasons. In addition, a binary logistic regression was conducted to quantitatively evaluate the effects of several factors and to identify the dominant factor that affected evacuation mode choice. The regression results suggested that evacuation distance was the dominant factor for choosing car evacuation relative to other factors, such as age and sex. The cross-validation test of the regression model demonstrated that the considered factors were useful for decision making and the prediction of evacuation mode choice in the target area.

  4. Psychometric evaluation of the Oswestry Disability Index in patients with chronic low back pain: factor and Mokken analyses.

    Science.gov (United States)

    Lee, Chin-Pang; Fu, Tsai-Sheng; Liu, Chia-Yih; Hung, Ching-I

    2017-10-03

    Disputes exist regarding the psychometric properties of the Oswestry Disability Index (ODI). The present study examined the reliability, validity, and dimensionality of a Chinese version of the ODI version 2.1 in a sample of 225 adult orthopedic outpatients with chronic low back pain [mean age (SD): 40.7 (11.4) years]. We conducted reliability analysis, exploratory bifactor analysis, confirmatory factor analysis, and Mokken scale analysis of the ODI. To validate the ODI, we used the Short-Form 36 questionnaire (SF-36) and visual analog scale (VAS). The reliability and the discriminant and construct validity of the ODI were good. The fit statistics of the unidimensional model of the ODI were inadequate. The ODI was a weak Mokken scale (Hs = 0.31). The ODI was a reliable and valid scale suitable for the measurement of disability in patients with low back pain. However, the ODI appeared to be multidimensional, which argues against using the raw ODI score as a measure of disability.

  5. Analyses of patterns-of-failure and prognostic factors according to radiation fields in early-stage Hodgkin lymphoma

    Energy Technology Data Exchange (ETDEWEB)

    Krebs, Lorraine; Guillerm, Sophie; Menard, Jean; Hennequin, Christophe; Quero, Laurent [Saint Louis Hospital, Radiation Oncology Department, Paris (France); Amorin, Sandy; Brice, Pauline [Saint Louis Hospital, AP-HP, Hematooncology Department, Paris (France)

    2017-02-15

    Doses and volumes of radiation therapy (RT) for early stages of Hodgkin lymphoma (HL) have been reduced over the last 30 years. Combined modality therapy (CMT) is currently the standard treatment for most patients with early-stage HL. The aim of this study was to analyze the site of relapse after RT according to the extent of radiation fields. Between 1987 and 2011, 427 patients were treated at our institution with RT ± chemotherapy for stage-I/II HL. Among these, 65 patients who experienced a relapse were retrospectively analyzed. Most patients had nodular sclerosis histology (86 %) and stage-II disease (75.9 %). Bulky disease was present in 21 % and 56 % of patients belonged to the unfavorable risk group according to European Organization for Research and Treatment of Cancer (EORTC)/The Lymphoma Study Association (LYSA) definitions. CMT was delivered to 91 % of patients. All patients received RT with doses ranging from 20 to 45 Gy (mean = 34 ± 5.3 Gy). The involved-field RT technique was used in 59 % of patients. The mean time between diagnosis and relapse was 4.2 years (range 0.3-24.5). Out-of-field relapses were suffered by 53 % of patients. Relapses occurred more frequently at out-of-field sites in patients with a favorable disease status, whereas in-field relapses were associated with bulky mediastinal disease. Relapses occurred later for the favorable compared with the unfavorable risk group (3.5 vs. 2.9 years, p = 0.5). In multivariate analyses, neither RT dose nor RT field size was predictive for an in-field relapse (p = 0.25 and p = 0.8, respectively); only bulky disease was predictive (p = 0.018). In patients with bulky disease, RT dose and RT field size were not predictive for an in-field relapse. In this subgroup of patients, chemotherapy should be intensified. We confirmed the poor prognosis of early relapses. (orig.) [German] During the last 30 years, the radiation therapy (RT) dose and the RT volumes for the treatment of the early stages

  6. Applying of Factor Analyses for Determination of Trace Elements Distribution in Water from River Vardar and Its Tributaries, Macedonia/Greece

    Science.gov (United States)

    Popov, Stanko Ilić; Stafilov, Trajče; Šajn, Robert; Tănăselia, Claudiu; Bačeva, Katerina

    2014-01-01

    A systematic study was carried out to investigate the distribution of fifty-six elements in water samples from the river Vardar (Republic of Macedonia and Greece) and its major tributaries. The samples were collected from 27 sampling sites. Analyses were performed by mass spectrometry with inductively coupled plasma (ICP-MS) and atomic emission spectrometry with inductively coupled plasma (ICP-AES). Cluster and R-mode factor analysis (FA) were used to identify and characterise element associations, and four associations of elements were determined by the method of multivariate statistics. Three factors represent associations of elements that occur in the river water naturally, while Factor 3 represents an anthropogenic association of elements (Cd, Ga, In, Pb, Re, Tl, Cu, and Zn) introduced into the river waters by waste water from the mining and metallurgical activities in the country. PMID:24587756

  7. Applying of Factor Analyses for Determination of Trace Elements Distribution in Water from River Vardar and Its Tributaries, Macedonia/Greece

    Directory of Open Access Journals (Sweden)

    Stanko Ilić Popov

    2014-01-01

    Full Text Available A systematic study was carried out to investigate the distribution of fifty-six elements in water samples from the river Vardar (Republic of Macedonia and Greece) and its major tributaries. The samples were collected from 27 sampling sites. Analyses were performed by mass spectrometry with inductively coupled plasma (ICP-MS) and atomic emission spectrometry with inductively coupled plasma (ICP-AES). Cluster and R-mode factor analysis (FA) were used to identify and characterise element associations, and four associations of elements were determined by the method of multivariate statistics. Three factors represent associations of elements that occur in the river water naturally, while Factor 3 represents an anthropogenic association of elements (Cd, Ga, In, Pb, Re, Tl, Cu, and Zn) introduced into the river waters by waste water from the mining and metallurgical activities in the country.

  8. Building a three-dimensional model of CYP2C9 inhibition using the Autocorrelator: an autonomous model generator.

    Science.gov (United States)

    Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto

    2012-05-25

    In modern-day drug discovery campaigns, computational chemists have to be concerned not only with improving the potency of molecules but also with reducing any off-target ADMET activity. There is a plethora of antitargets that computational chemists may have to consider. Fortunately, many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high-quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimentally determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function, deprotonated charge state, and predicted binding orientation for our collection of molecules.

  9. Autocorrelation descriptor improvements for QSAR: 2DA_Sign and 3DA_Sign

    Science.gov (United States)

    Sliwoski, Gregory; Mendenhall, Jeffrey; Meiler, Jens

    2016-03-01

    Quantitative structure-activity relationship (QSAR) is a branch of computer aided drug discovery that relates chemical structures to biological activity. Two well established and related QSAR descriptors are two- and three-dimensional autocorrelation (2DA and 3DA). These descriptors encode the relative position of atoms or atom properties by calculating the separation between atom pairs in terms of number of bonds (2DA) or Euclidean distance (3DA). The sums of all values computed for a given small molecule are collected in a histogram. Atom properties can be added with a coefficient that is the product of atom properties for each pair. This procedure can lead to information loss when signed atom properties are considered such as partial charge. For example, the product of two positive charges is indistinguishable from the product of two equivalent negative charges. In this paper, we present variations of 2DA and 3DA called 2DA_Sign and 3DA_Sign that avoid information loss by splitting unique sign pairs into individual histograms. We evaluate these variations with models trained on nine datasets spanning a range of drug target classes. Both 2DA_Sign and 3DA_Sign significantly increase model performance across all datasets when compared with traditional 2DA and 3DA. Lastly, we find that limiting 3DA_Sign to maximum atom pair distances of 6 Å instead of 12 Å further increases model performance, suggesting that conformational flexibility may hinder performance with longer 3DA descriptors. Consistent with this finding, limiting the number of bonds in 2DA_Sign from 11 to 5 fails to improve performance.
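
    The sign-split variant described above can be sketched directly: for every atom pair, the bond-distance bin is incremented in one of three histograms according to whether the two signed properties (for example partial charges) are (+,+), (-,-), or mixed, instead of accumulating a single signed product. The toy molecule graph, charges, and bin count below are made-up inputs; this is a conceptual illustration, not the implementation evaluated in the paper.

    import numpy as np
    from collections import deque

    # Toy molecule: bond graph (adjacency list) and a signed atom property (e.g., partial charge)
    bonds = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a 5-atom chain (assumption)
    charge = np.array([+0.30, -0.15, +0.05, -0.25, +0.05])

    def bond_distances(start, graph):
        """Breadth-first search: number of bonds from `start` to every other atom."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    max_bonds = 5
    hist = {key: np.zeros(max_bonds + 1) for key in ("pp", "nn", "pn")}

    n_atoms = len(charge)
    for i in range(n_atoms):
        d = bond_distances(i, bonds)
        for j in range(i, n_atoms):
            if d[j] > max_bonds:
                continue
            key = ("pp" if charge[i] > 0 and charge[j] > 0 else
                   "nn" if charge[i] < 0 and charge[j] < 0 else "pn")
            hist[key][d[j]] += abs(charge[i] * charge[j])   # sign information kept by the split

    descriptor = np.concatenate([hist["pp"], hist["nn"], hist["pn"]])   # 2DA_Sign-style vector
    print(descriptor.round(4))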

  10. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process.

    Science.gov (United States)

    Wilson, Lorna R M; Hopcraft, Keith I

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
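
    A quick numerical illustration of the phenomenon studied above is to simulate a stationary Gaussian process whose autocorrelation mixes a periodic and a nonperiodic component and then inspect the intervals between its zero crossings. The particular ACF used here, r(τ) = exp(-τ²/(2ℓ²))·cos(ωτ), and its parameters are assumptions chosen only for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(1)

    dt, n = 0.01, 200_000
    ell, omega = 2.0, 2 * np.pi * 0.5        # nonperiodic correlation scale and periodic component

    # Gaussian-filtered white noise has a Gaussian ACF; pick the filter width so its scale is ell
    sigma_kernel = (ell / np.sqrt(2)) / dt
    A = gaussian_filter1d(rng.standard_normal(n), sigma_kernel)
    B = gaussian_filter1d(rng.standard_normal(n), sigma_kernel)

    # Stationary process with ACF proportional to exp(-tau^2/(2 ell^2)) * cos(omega tau)
    t = np.arange(n) * dt
    x = A * np.cos(omega * t) + B * np.sin(omega * t)

    # Intervals between successive zero crossings
    sign = np.signbit(x)
    crossings = np.where(sign[:-1] != sign[1:])[0] * dt
    intervals = np.diff(crossings)
    print(f"mean interval {intervals.mean():.3f}, std {intervals.std():.3f}, "
          f"half period {np.pi / omega:.3f}")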

  11. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    Science.gov (United States)

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools to study the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent. However, brain time series data can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive moving-average model and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We have verified the validity of our method and shown that it exhibits sufficient statistical power for detecting true correlations in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments, where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
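
    The practical problem the record above addresses, the inflated variance of a sample correlation when both series are autocorrelated, is classically handled with a Bartlett-type correction of the effective sample size, n_eff = n / (1 + 2 Σ_k ρx(k) ρy(k)). The sketch below implements that standard adjustment for two AR(1)-like series; it is not the asymptotic theory derived in the paper, only the simpler correction it generalizes.

    import numpy as np
    from scipy import stats

    def acf(x, max_lag):
        x = x - x.mean()
        denom = np.dot(x, x)
        return np.array([np.dot(x[:-k or None], x[k:]) / denom for k in range(max_lag + 1)])

    def corrected_corr_test(x, y, max_lag=50):
        """Pearson r with a Bartlett-corrected effective sample size for autocorrelated series."""
        n = len(x)
        r = np.corrcoef(x, y)[0, 1]
        rho_x, rho_y = acf(x, max_lag), acf(y, max_lag)
        n_eff = n / (1 + 2 * np.sum(rho_x[1:] * rho_y[1:]))
        n_eff = min(max(n_eff, 3), n)                      # keep the degrees of freedom sane
        tstat = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
        p = 2 * stats.t.sf(abs(tstat), df=n_eff - 2)
        return r, n_eff, p

    # Two independent but strongly autocorrelated AR(1) series: the naive test is anti-conservative
    rng = np.random.default_rng(0)
    n, phi = 500, 0.9
    x, y = np.zeros(n), np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
        y[i] = phi * y[i - 1] + rng.standard_normal()

    r, n_eff, p = corrected_corr_test(x, y)
    print(f"r = {r:.3f}, effective N = {n_eff:.1f} (raw N = {n}), corrected p = {p:.3f}")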

  12. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process

    Science.gov (United States)

    Wilson, Lorna R. M.; Hopcraft, Keith I.

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  13. Geostatistical prediction of microbial water quality throughout a stream network using meteorology, land cover, and spatiotemporal autocorrelation.

    Science.gov (United States)

    Holcomb, David Andrew; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R

    2018-06-11

    Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modelled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was >90%, <10%, or between 10% and 90%, respectively. We could assign impairment status to more of the stream network on days when any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
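
    The impairment designation used above follows directly from the predictive distribution at each location: the probability that the fecal coliform concentration exceeds the water-quality standard is evaluated and then thresholded at 90% and 10%. A minimal sketch of that classification step is given below; the lognormal predictive means/standard deviations and the 200 CFU/100 mL criterion are illustrative assumptions, not the study's model output or the exact North Carolina standard.

    import numpy as np
    from scipy.stats import norm

    STANDARD = 200.0        # illustrative fecal coliform criterion, CFU/100 mL (assumption)

    def classify(pred_mean_log, pred_sd_log):
        """Designate a site from a lognormal predictive distribution of FC concentration."""
        p_exceed = norm.sf(np.log(STANDARD), loc=pred_mean_log, scale=pred_sd_log)
        if p_exceed > 0.90:
            return p_exceed, "impaired"
        if p_exceed < 0.10:
            return p_exceed, "unimpaired"
        return p_exceed, "unassessed"

    # Made-up predictions (natural-log CFU/100 mL) at three stream locations
    for mean_log, sd_log in [(6.2, 0.5), (3.8, 0.7), (5.1, 0.9)]:
        p, label = classify(mean_log, sd_log)
        print(f"P(exceed standard) = {p:.2f} -> {label}")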

  14. Factors predicting the development of pressure ulcers in an at-risk population who receive standardized preventive care: secondary analyses of a multicentre randomised controlled trial.

    Science.gov (United States)

    Demarre, Liesbet; Verhaeghe, Sofie; Van Hecke, Ann; Clays, Els; Grypdonck, Maria; Beeckman, Dimitri

    2015-02-01

    To identify predictive factors associated with the development of pressure ulcers in patients at risk who receive standardized preventive care. Numerous studies have examined factors that predict risk for pressure ulcer development. Only a few studies identified risk factors associated with pressure ulcer development in hospitalized patients receiving standardized preventive care. Secondary analyses of data collected in a multicentre randomized controlled trial. The sample consisted of 610 consecutive patients at risk for pressure ulcer development (at-risk Braden score). Pressure ulcers in category II-IV were significantly associated with non-blanchable erythema, urogenital disorders and higher body temperature. Predictive factors significantly associated with superficial pressure ulcers were admission to an internal medicine ward, incontinence-associated dermatitis, non-blanchable erythema and a lower Braden score. Superficial sacral pressure ulcers were significantly associated with incontinence-associated dermatitis. Despite the standardized preventive measures they received, hospitalized patients with non-blanchable erythema, urogenital disorders and a higher body temperature were at increased risk for developing pressure ulcers. Improved identification of at-risk patients can be achieved by taking into account specific predictive factors. Even if preventive measures are in place, continuous assessment and tailoring of interventions is necessary in all patients at risk. Daily skin observation can be used to continuously monitor the effectiveness of the intervention. © 2014 John Wiley & Sons Ltd.

  15. Analyses of the influencing factors of soil microbial functional gene diversity in tropical rainforest based on GeoChip 5.0.

    Science.gov (United States)

    Cong, Jing; Liu, Xueduan; Lu, Hui; Xu, Han; Li, Yide; Deng, Ye; Li, Diqiang; Zhang, Yuguang

    2015-09-01

    To examine soil microbial functional gene diversity and its causative factors in tropical rainforests, we used a microarray-based metagenomic tool named GeoChip 5.0 to profile it. We found high microbial functional gene diversity and distinct soil microbial metabolic potential for biogeochemical processes in the tropical rainforest. Soil available nitrogen was the factor most strongly associated with soil microbial functional gene structure. Here, we mainly describe in detail the experimental design, data processing, and soil biogeochemical analyses attached to the study, which was published in BMC Microbiology in 2015; the raw data have been deposited in NCBI's Gene Expression Omnibus (accession number GSE69171).

  16. Accounting for and predicting the influence of spatial autocorrelation in water quality modeling

    Science.gov (United States)

    Miralha, L.; Kim, D.

    2017-12-01

    Although many studies have attempted to investigate the spatial trends of water quality, more attention is yet to be paid to the consequences of considering and ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have mentioned the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for and ignore SAC. However, the capacity to predict the magnitude of such differences is still ambiguous. In this study, we hypothesized that SAC inherently possessed by a response variable (i.e., water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R-squared (R²), Akaike Information Criterion (AIC), and residual SAC (rSAC) after accounting for SAC during the modeling procedure. The main objective was to analyze whether water quality parameters with higher Moran's I values (an inherent SAC measure) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) to two spatial regression approaches (spatial lag and error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables. We acquired these data from federal online sources (e.g. USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed a substantial increase in R² and a decrease in rSAC after performing spatial regressions. However, AIC values did not show significant changes. Overall, the higher the level of inherent SAC in water quality variables, the greater the improvement in model performance. This indicates a linear and direct relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable. Therefore, our study suggests that the inherent level of
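
    Moran's I, the measure of inherent SAC referred to above, is I = (n / S0) · (zᵀWz) / (zᵀz), where z are the mean-centred observations, W the spatial weight matrix, and S0 the sum of the weights. The short sketch below computes it on a toy lattice; the grid, rook-contiguity weights, and synthetic fields are assumptions for illustration only.

    import numpy as np

    def morans_i(values, weights):
        """Moran's I for a 1-D array of observations and an (n x n) spatial weight matrix."""
        z = values - values.mean()
        return values.size * (z @ weights @ z) / (weights.sum() * (z @ z))

    # Toy example: 5x5 grid with row-standardized rook-contiguity weights
    n = 5
    coords = [(i, j) for i in range(n) for j in range(n)]
    W = np.zeros((n * n, n * n))
    for a, (i, j) in enumerate(coords):
        for b, (k, l) in enumerate(coords):
            if abs(i - k) + abs(j - l) == 1:      # rook neighbours share an edge
                W[a, b] = 1.0
    W /= W.sum(axis=1, keepdims=True)

    field = np.array([i + j for i, j in coords], dtype=float)    # smooth field: strong positive SAC
    noise = np.random.default_rng(3).standard_normal(n * n)      # white noise: ~ no SAC

    print("Moran's I, smooth field:", round(morans_i(field, W), 3))
    print("Moran's I, white noise :", round(morans_i(noise, W), 3))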

  17. The differential impact of scientific quality, bibliometric factors, and social media activity on the influence of systematic reviews and meta-analyses about psoriasis.

    Science.gov (United States)

    Ruano, Juan; Aguilar-Luque, Macarena; Gómez-Garcia, Francisco; Alcalde Mellado, Patricia; Gay-Mimbrera, Jesus; Carmona-Fernandez, Pedro J; Maestre-López, Beatriz; Sanz-Cabanillas, Juan Luís; Hernández Romero, José Luís; González-Padilla, Marcelino; Vélez García-Nieto, Antonio; Isla-Tejera, Beatriz

    2018-01-01

    Researchers are increasingly using online social networks to promote their work. Some authors have suggested that measuring social media activity can predict the impact of a primary study (i.e., whether or not an article will be highly cited). However, the influence of variables such as scientific quality, research disclosures, and journal characteristics on systematic reviews and meta-analyses has not yet been assessed. The present study aims to describe the effect of complex interactions between bibliometric factors and social media activity on the impact of systematic reviews and meta-analyses about psoriasis (PROSPERO 2016: CRD42016053181). Methodological quality was assessed using the Assessing the Methodological Quality of Systematic Reviews (AMSTAR) tool. Altmetrics, which consider Twitter, Facebook, and Google+ mention counts as well as Mendeley and SCOPUS readers, and corresponding article citation counts from Google Scholar were obtained for each article. Metadata and journal-related bibliometric indices were also obtained. One hundred and sixty-four reviews with available altmetrics information were included in the final multifactorial analysis, which showed that social media activity and impact factor have less effect than Mendeley and SCOPUS readers on the number of citations in Google Scholar. Although a journal's impact factor predicted the number of tweets (OR, 1.202; 95% CI, 1.087-1.049), the years of publication and the number of Mendeley readers predicted the number of citations in Google Scholar (OR, 1.033; 95% CI, 1.018-1.329). Finally, methodological quality was related neither to bibliometric influence nor to social media activity for systematic reviews. In conclusion, there seems to be a lack of connectivity between scientific quality, social media activity, and article usage; thus, predicting scientific success based on these variables may be inappropriate in the particular case of systematic reviews.

  18. The differential impact of scientific quality, bibliometric factors, and social media activity on the influence of systematic reviews and meta-analyses about psoriasis

    Science.gov (United States)

    Gómez-Garcia, Francisco; Alcalde Mellado, Patricia; Gay-Mimbrera, Jesus; Carmona-Fernandez, Pedro J.; Maestre-López, Beatriz; Sanz-Cabanillas, Juan Luís; Hernández Romero, José Luís; González-Padilla, Marcelino; Vélez García-Nieto, Antonio; Isla-Tejera, Beatriz

    2018-01-01

    Researchers are increasingly using online social networks to promote their work. Some authors have suggested that measuring social media activity can predict the impact of a primary study (i.e., whether or not an article will be highly cited). However, the influence of variables such as scientific quality, research disclosures, and journal characteristics on systematic reviews and meta-analyses has not yet been assessed. The present study aims to describe the effect of complex interactions between bibliometric factors and social media activity on the impact of systematic reviews and meta-analyses about psoriasis (PROSPERO 2016: CRD42016053181). Methodological quality was assessed using the Assessing the Methodological Quality of Systematic Reviews (AMSTAR) tool. Altmetrics, which consider Twitter, Facebook, and Google+ mention counts as well as Mendeley and SCOPUS readers, and corresponding article citation counts from Google Scholar were obtained for each article. Metadata and journal-related bibliometric indices were also obtained. One hundred and sixty-four reviews with available altmetrics information were included in the final multifactorial analysis, which showed that social media activity and impact factor have less effect than Mendeley and SCOPUS readers on the number of citations in Google Scholar. Although a journal’s impact factor predicted the number of tweets (OR, 1.202; 95% CI, 1.087–1.049), the years of publication and the number of Mendeley readers predicted the number of citations in Google Scholar (OR, 1.033; 95% CI, 1.018–1.329). Finally, methodological quality was related neither to bibliometric influence nor to social media activity for systematic reviews. In conclusion, there seems to be a lack of connectivity between scientific quality, social media activity, and article usage; thus, predicting scientific success based on these variables may be inappropriate in the particular case of systematic reviews. PMID:29377889

  19. A novel auto-correlation function method and FORTRAN codes for the determination of the decay ratio in BWR stability analysis

    International Nuclear Information System (INIS)

    Behringer, K.

    2001-08-01

    A novel auto-correlation function (ACF) method has been investigated for determining the oscillation frequency and the decay ratio in BWR stability analyses. The report not only describes the method but also comprehensively documents the FORTRAN codes used and developed. The neutron signals are band-pass filtered to separate the oscillation peak in the power spectral density (PSD) from the background. Two linear second-order oscillation models are considered. The ACF of each model, corrected for signal filtering and with the inclusion of a background term under the peak in the PSD, is then least-squares fitted to the ACF estimated from the previously filtered neutron signals, in order to determine the oscillation frequency and the decay ratio. The procedures of filtering and ACF estimation use fast Fourier transform techniques with signal segmentation. Gliding 'short-time' ACF estimates along a signal record allow the evaluation of uncertainties. Some numerical results are given which have been obtained from neutron signal data offered by the recent Forsmark I and Forsmark II NEA benchmark project. They are compared with those from other benchmark participants using various other analysis methods. (author)
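
    The quantity estimated by the ACF method above, the decay ratio (DR), is the ratio of two successive maxima of the autocorrelation function of the dominant damped oscillation; for a second-order model with ACF envelope exp(-ζω₀τ) and damped frequency ω_d, DR = exp(-2πζω₀/ω_d) = exp(-2πζ/√(1-ζ²)). The sketch below illustrates the fitting idea on a synthetic signal; it omits the band-pass filtering, signal segmentation, and background-term corrections the report documents, and all parameters are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)

    # Synthetic "neutron signal": damped oscillator driven by white noise (assumed parameters)
    fs, n = 25.0, 60_000                  # sampling rate (Hz) and record length
    f0, zeta = 0.5, 0.05                  # resonance frequency (Hz) and damping ratio
    dt, w0 = 1.0 / fs, 2 * np.pi * 0.5
    x, v = np.zeros(n), 0.0
    for i in range(1, n):                 # semi-implicit Euler for x'' + 2*zeta*w0*x' + w0^2*x = noise
        a = -2 * zeta * w0 * v - w0 ** 2 * x[i - 1] + rng.standard_normal() / np.sqrt(dt)
        v += a * dt
        x[i] = x[i - 1] + v * dt

    # Sample autocorrelation up to ~3 oscillation periods
    max_lag = int(3 / f0 * fs)
    xc = x - x.mean()
    acf = np.array([np.dot(xc[:n - k], xc[k:]) for k in range(max_lag + 1)])
    acf /= acf[0]
    tau = np.arange(max_lag + 1) * dt

    # Fit the second-order model ACF and convert the fitted damping to a decay ratio
    def model(t, zw, wd):
        return np.exp(-zw * t) * np.cos(wd * t)

    (zw, wd), _ = curve_fit(model, tau, acf, p0=[0.1, w0])
    decay_ratio = np.exp(-2 * np.pi * zw / wd)
    print(f"fitted frequency {wd / (2 * np.pi):.3f} Hz, decay ratio {decay_ratio:.2f} "
          f"(true ~ {np.exp(-2 * np.pi * zeta / np.sqrt(1 - zeta ** 2)):.2f})")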

  20. A novel auto-correlation function method and FORTRAN codes for the determination of the decay ratio in BWR stability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Behringer, K

    2001-08-01

    A novel auto-correlation function (ACF) method has been investigated for determining the oscillation frequency and the decay ratio in BWR stability analyses. The report not only describes the method but also comprehensively documents the FORTRAN codes used and developed. The neutron signals are band-pass filtered to separate the oscillation peak in the power spectral density (PSD) from the background. Two linear second-order oscillation models are considered. The ACF of each model, corrected for signal filtering and with the inclusion of a background term under the peak in the PSD, is then least-squares fitted to the ACF estimated from the previously filtered neutron signals, in order to determine the oscillation frequency and the decay ratio. The procedures of filtering and ACF estimation use fast Fourier transform techniques with signal segmentation. Gliding 'short-time' ACF estimates along a signal record allow the evaluation of uncertainties. Some numerical results are given which have been obtained from neutron signal data offered by the recent Forsmark I and Forsmark II NEA benchmark project. They are compared with those from other benchmark participants using various other analysis methods. (author)

  1. Analyses of the influencing factors of soil microbial functional gene diversity in tropical rainforest based on GeoChip 5.0

    Directory of Open Access Journals (Sweden)

    Jing Cong

    2015-09-01

    Full Text Available To examine soil microbial functional gene diversity and its causative factors in tropical rainforests, we used a microarray-based metagenomic tool named GeoChip 5.0 to profile it. We found high microbial functional gene diversity and distinct soil microbial metabolic potential for biogeochemical processes in the tropical rainforest. Soil available nitrogen was the factor most strongly associated with soil microbial functional gene structure. Here, we mainly describe in detail the experimental design, data processing, and soil biogeochemical analyses attached to the study, which was published in BMC Microbiology in 2015; the raw data have been deposited in NCBI's Gene Expression Omnibus (accession number GSE69171).

  2. Alcohol Consumption as a Risk Factor for Acute and Chronic Pancreatitis: A Systematic Review and a Series of Meta-analyses.

    Science.gov (United States)

    Samokhvalov, Andriy V; Rehm, Jürgen; Roerecke, Michael

    2015-12-01

    Pancreatitis is a highly prevalent medical condition associated with a spectrum of endocrine and exocrine pancreatic insufficiencies. While high alcohol consumption is an established risk factor for pancreatitis, its relationship with specific types of pancreatitis and a potential threshold have not been systematically examined. We conducted a systematic literature search for studies on the association between alcohol consumption and pancreatitis based on PRISMA guidelines. Non-linear and linear random-effect dose-response meta-analyses using restricted cubic spline meta-regressions and categorical meta-analyses in relation to abstainers were conducted. Seven studies with 157,026 participants and 3618 cases of pancreatitis were included into analyses. The dose-response relationship between average volume of alcohol consumption and risk of pancreatitis was monotonic with no evidence of non-linearity for chronic pancreatitis (CP) for both sexes (p = 0.091) and acute pancreatitis (AP) in men (p = 0.396); it was non-linear for AP in women (p = 0.008). Compared to abstention, there was a significant decrease in risk (RR = 0.76, 95%CI: 0.60-0.97) of AP in women below the threshold of 40 g/day. No such association was found in men (RR = 1.1, 95%CI: 0.69-1.74). The RR for CP at 100 g/day was 6.29 (95%CI: 3.04-13.02). The dose-response relationships between alcohol consumption and risk of pancreatitis were monotonic for CP and AP in men, and non-linear for AP in women. Alcohol consumption below 40 g/day was associated with reduced risk of AP in women. Alcohol consumption beyond this level was increasingly detrimental for any type of pancreatitis. The work was financially supported by a grant from the National Institute on Alcohol Abuse and Alcoholism (R21AA023521) to the last author.

  3. Analysing temporal variability of particulate matter and possible contributing factors over Mahabaleshwar, a high-altitude station in Western Ghats, India

    Science.gov (United States)

    Leena, P. P.; Vijayakumar, K.; Anilkumar, V.; Pandithurai, G.

    2017-11-01

    Airborne particulate matter (PM) plays a vital role in climate change as well as human health. In the present study, the temporal variability associated with mass concentrations of PM10, PM2.5, and PM1.0 was analysed using ground observations from Mahabaleshwar (1348 m AMSL, 17.56°N, 73.4°E), a high-altitude station in the Western Ghats, India, from June 2012 to May 2013. Concentrations of PM10, PM2.5, and PM1.0 showed strong diurnal, monthly, seasonal and weekday-weekend trends. Seasonally, PM1.0 and PM2.5 showed the highest concentrations during the winter season compared to the monsoon and pre-monsoon, whereas PM10 showed the highest concentrations in the pre-monsoon season. Similarly, slightly higher PM concentrations were observed during weekends compared to weekdays. In addition, possible contributing factors to this temporal variability have been analysed based on the variation of secondary pollutants such as NO2, SO2, CO and O3 and long-range transport of dust.

  4. Socioeconomic position, lifestyle factors and age at natural menopause: a systematic review and meta-analyses of studies across six continents

    Science.gov (United States)

    Schoenaker, Danielle AJM; Jackson, Caroline A; Rowlands, Jemma V; Mishra, Gita D

    2014-01-01

    Background: Age at natural menopause (ANM) is considered a marker of biological ageing and is increasingly recognized as a sentinel for chronic disease risk in later life. Socioeconomic position (SEP) and lifestyle factors are thought to be associated with ANM. Methods: We performed a systematic review and meta-analyses to determine the overall mean ANM, and the effect of SEP and lifestyle factors on ANM by calculating the weighted mean difference (WMD) and pooling adjusted hazard ratios. We explored heterogeneity using meta-regression and also included unpublished findings from the Australian Longitudinal Study on Women’s Health. Results: We identified 46 studies across 24 countries. Mean ANM was 48.8 years [95% confidence interval (CI): 48.3, 49.2], with between-study heterogeneity partly explained by geographical region. ANM was lowest among African, Latin American, Asian and Middle Eastern countries and highest in Europe and Australia, followed by the USA. Education was associated with later ANM (WMD middle vs low education 0.30, 95% CI: 0.10, 0.51; high vs low education 0.64, 95% CI 0.26, 1.02). A similar dose-response relationship was also observed for occupation. Smoking was associated with a 1-year reduction of ANM (WMD: -0.91, 95% CI: –1.34, –0.48). Being overweight and moderate/high physical activity were modestly associated with later ANM, but findings were less conclusive. Conclusions: ANM varies across populations, partly due to differences across geographical regions. SEP and some lifestyle factors are associated with ANM, but further research is needed to examine the impact of the associations between risk factors and ANM on future health outcomes. PMID:24771324

  5. A single-shot nonlinear autocorrelation approach for time-resolved physics in the vacuum ultraviolet spectral range

    International Nuclear Information System (INIS)

    Rompotis, Dimitrios

    2016-02-01

    In this work, a single-shot temporal metrology scheme operating in the vacuum-extreme ultraviolet spectral range has been designed and experimentally implemented. Utilizing an anti-collinear geometry, a second-order intensity autocorrelation measurement of a vacuum ultraviolet pulse can be performed by encoding temporal delay information on the beam propagation coordinate. An ion-imaging time-of-flight spectrometer offering micrometer resolution has been set up for this purpose. This instrument enables the detection of a magnified image of the spatial distribution of ions generated exclusively by direct two-photon absorption in the combined counter-propagating pulse focus, and thus the second-order intensity autocorrelation measurement can be obtained on a single-shot basis. Additionally, an intense VUV light source based on high-harmonic generation has been experimentally realized. It delivers intense sub-20 fs Ti:Sa fifth-harmonic pulses utilizing a loose-focusing geometry in a long Ar gas cell. The VUV pulses centered at 161.8 nm reach pulse energies of 1.1 μJ per pulse, while the corresponding pulse duration is measured with a second-order, fringe-resolved autocorrelation scheme to be 18 ± 1 fs on average. Non-resonant, two-photon ionization of Kr and Xe and three-photon ionization of Ne verify the fifth-harmonic pulse intensity and indicate the feasibility of multi-photon VUV-pump/VUV-probe studies of ultrafast atomic and molecular dynamics. Finally, the extended functionality of the counter-propagating pulse metrology approach is demonstrated by a single-shot VUV-pump/VUV-probe experiment aiming at the investigation of ultrafast dissociation dynamics of O2 excited in the Schumann-Runge continuum at 162 nm.

  6. Risk and Protective Factors for Intimate Partner Violence Against Women: Systematic Review and Meta-analyses of Prospective-Longitudinal Studies.

    Science.gov (United States)

    Yakubovich, Alexa R; Stöckl, Heidi; Murray, Joseph; Melendez-Torres, G J; Steinert, Janina I; Glavin, Calla E Y; Humphreys, David K

    2018-07-01

    The estimated lifetime prevalence of physical or sexual intimate partner violence (IPV) is 30% among women worldwide. Understanding risk and protective factors is essential for designing effective prevention strategies. To quantify the associations between prospective-longitudinal risk and protective factors and IPV and identify evidence gaps. We conducted systematic searches in 16 databases including MEDLINE and PsycINFO from inception to June 2016. The study protocol is registered with PROSPERO (CRD42016039213). We included published and unpublished studies available in English that prospectively analyzed any risk or protective factor(s) for self-reported IPV victimization among women and controlled for at least 1 other variable. Three reviewers were involved in study screening. One reviewer extracted estimates of association and study characteristics from each study and 2 reviewers independently checked a random subset of extractions. We assessed study quality with the Cambridge Quality Checklists. When studies investigated the same risk or protective factor using similar measures, we computed pooled odds ratios (ORs) by using random-effects meta-analyses. We summarized heterogeneity with I² and τ². We synthesized all estimates of association, including those not meta-analyzed, by using harvest plots to illustrate evidence gaps and trends toward negative or positive associations. Of 18 608 studies identified, 60 were included and 35 meta-analyzed. Most studies were based in the United States. The strongest evidence for modifiable risk factors for IPV against women were unplanned pregnancy (OR = 1.66; 95% confidence interval [CI] = 1.20, 1.31) and having parents with less than a high-school education (OR = 1.55; 95% CI = 1.10, 2.17). Being older (OR = 0.96; 95% CI = 0.93, 0.98) or married (OR = 0.93; 95% CI = 0.87, 0.99) were protective. To our knowledge, this is the first systematic, meta-analytic review of all risk and

  7. The analyses of risk factors for COPD in the Li ethnic group in Hainan, People’s Republic of China

    Directory of Open Access Journals (Sweden)

    Ding YP

    2015-11-01

    Full Text Available Yipeng Ding,1,* Junxu Xu,2,* Jinjian Yao,1 Yu Chen,2 Ping He,1 Yanhong Ouyang,1 Huan Niu,1 Zhongjie Tian,1 Pei Sun1 1Department of Emergency, People’s Hospital of Hainan Province, 2Department of Respiratory, The Third People’s Hospital of Haikou, Haikou, Hainan, People’s Republic of China *These authors contributed equally to this work Objective: To study the risk factors for chronic obstructive pulmonary disease (COPD) in the Li population in Hainan Province, People’s Republic of China. Methods: Li people above 40 years of age from Hainan were chosen by stratified random cluster sampling between 2012 and 2014. All participants were interviewed with a home-visiting questionnaire, and spirometry was performed on all eligible participants. Patients with airflow limitation (forced expiratory volume in 1 second [FEV1]/forced vital capacity [FVC] <0.70) were further examined by postbronchodilator spirometry, and those with a postbronchodilator FEV1/FVC <0.70 were diagnosed with COPD. Information on physical condition and history, smoking intensity, smoking duration, second-hand smoking, education, job category, monthly household income, working years, residential environment, primary fuel for cooking and heating (biomass fuel including wood, crop residues, dung, and charcoal, or modern fuel such as natural gas, liquefied petroleum gas, electricity, and solar energy), ventilated kitchen, heating methods, air pollution, recurrent respiratory infections, family history of respiratory diseases, cough incentives, and allergies of COPD and non-COPD subjects was analyzed by univariate and multivariate logistic regression models to identify correlated risk factors for COPD. Results: Out of the 5,463 Li participants, a total of 277 COPD cases were identified by spirometry, and 307 healthy subjects were randomly selected as controls. Univariate logistic regression analyses showed that older people (65 years and above), low body mass index (BMI), biomass smoke

  8. Analyses of potential factors affecting survival of juvenile salmonids volitionally passing through turbines at McNary and John Day Dams, Columbia River

    Science.gov (United States)

    Beeman, John; Hansel, Hal; Perry, Russell; Hockersmith, Eric; Sandford, Ben

    2011-01-01

    This report describes analyses of data from radio- or acoustic-tagged juvenile salmonids passing through hydro-dam turbines to determine factors affecting fish survival. The data were collected during a series of studies designed to estimate passage and survival probabilities at McNary (2002-09) and John Day (2002-03) Dams on the Columbia River during controlled experiments of structures or operations at spillways. Relatively few tagged fish passed turbines in any single study, but sample sizes generally were adequate for our analyses when data were combined from studies using common methods over a series of years. We used information-theoretic methods to evaluate biological, operational, and group covariates by creating models fitting linear (all covariates) or curvilinear (operational covariates only) functions to the data. Biological covariates included tag burden, weight, and water temperature; operational covariates included spill percentage, total discharge, hydraulic head, and turbine unit discharge; and group covariates included year, treatment, and photoperiod. Several interactions between the variables also were considered. Support of covariates by the data was assessed by comparing the Akaike Information Criterion of competing models. The analyses were conducted because there was a lack of information about factors affecting survival of fish passing turbines volitionally and the data were available from past studies. The depth of acclimation, tag size relative to fish size (tag burden), turbine unit discharge, and area of entry into the turbine intake have been shown to affect turbine passage survival of juvenile salmonids in other studies. This study indicates that turbine passage survival of the study fish was primarily affected by biological covariates rather than operational covariates. A negative effect of tag burden was strongly supported in data from yearling Chinook salmon at John Day and McNary dams, but not for subyearling Chinook salmon or
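
The model-comparison step described above, ranking candidate covariate sets by Akaike Information Criterion, can be sketched as follows. This is an illustrative stand-in, not the study's mark-recapture survival models; the covariate names and simulated data are made up:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
tag_burden = rng.uniform(0.01, 0.08, n)      # tag mass / fish mass (hypothetical)
discharge = rng.uniform(5, 20, n)            # turbine unit discharge (hypothetical)
temperature = rng.uniform(8, 20, n)
# Simulated truth: survival depends on tag burden only
p = 1 / (1 + np.exp(-(2.0 - 30.0 * tag_burden)))
survived = rng.binomial(1, p)

data = {"tag_burden": tag_burden, "temperature": temperature, "discharge": discharge}
candidates = {
    "biological": ["tag_burden", "temperature"],
    "operational": ["discharge"],
    "full": ["tag_burden", "temperature", "discharge"],
}

aics = {}
for name, cols in candidates.items():
    X = sm.add_constant(np.column_stack([data[c] for c in cols]))
    aics[name] = sm.Logit(survived, X).fit(disp=0).aic

best = min(aics.values())
for name, aic in sorted(aics.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} AIC = {aic:8.1f}   dAIC = {aic - best:6.1f}")
```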

  9. Assessing public speaking fear with the short form of the Personal Report of Confidence as a Speaker scale: confirmatory factor analyses among a French-speaking community sample.

    Science.gov (United States)

    Heeren, Alexandre; Ceschi, Grazia; Valentiner, David P; Dethier, Vincent; Philippot, Pierre

    2013-01-01

    The main aim of this study was to assess the reliability and structural validity of the French version of the 12-item version of the Personal Report of Confidence as Speaker (PRCS), one of the most promising measurements of public speaking fear. A total of 611 French-speaking volunteers were administered the French versions of the short PRCS, the Liebowitz Social Anxiety Scale, the Fear of Negative Evaluation scale, as well as the Trait version of the Spielberger State-Trait Anxiety Inventory and the Beck Depression Inventory-II, which assess the level of anxious and depressive symptoms, respectively. Regarding its structural validity, confirmatory factor analyses indicated a single-factor solution, as implied by the original version. Good scale reliability (Cronbach's alpha = 0.86) was observed. The item discrimination analysis suggested that all the items contribute to the overall scale score reliability. The French version of the short PRCS showed significant correlations with the Liebowitz Social Anxiety Scale (r = 0.522), the Fear of Negative Evaluation scale (r = 0.414), the Spielberger State-Trait Anxiety Inventory (r = 0.516), and the Beck Depression Inventory-II (r = 0.361). The French version of the short PRCS is a reliable and valid measure for the evaluation of the fear of public speaking among a French-speaking sample. These findings have critical consequences for the measurement of psychological and pharmacological treatment effectiveness in public speaking fear among a French-speaking sample.
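
The scale reliability reported above (Cronbach's alpha = 0.86) can be computed directly from an item-score matrix. A minimal sketch with made-up responses (the real sample had 611 respondents on the 12-item scale):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical dichotomously scored responses from 5 respondents to 12 items
responses = np.array([
    [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0],
])
print(round(cronbach_alpha(responses), 3))
```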

  10. Fibroblast Growth Factor Receptor 3 (FGFR3) – Analyses of the S249C Mutation and Protein Expression in Primary Cervical Carcinomas

    Directory of Open Access Journals (Sweden)

    Haiyan Dai

    2001-01-01

    Full Text Available Fibroblast growth factor receptor 3 (FGFR3) seems to play an inhibitory role in bone development, as activating mutations in the gene underlie disorders such as achondroplasia and thanatophoric dysplasia. Findings from multiple myeloma (MM) indicate that FGFR3 also can act as an oncogene, and mutation of codon 249 in the fibroblast growth factor receptor 3 (FGFR3) gene was recently detected in 3/12 primary cervical carcinomas. We have analysed 91 cervical carcinomas for this specific S249C mutation using amplification created restriction site methodology (ACRS), and detected no mutations. Immunohistochemistry was performed on 73 of the tumours. Reduced protein staining was seen in 43 (58.8%) samples. Six of the tumours (8.2%) revealed increased protein staining compared with normal cervical tissue. These patients had a better prognosis than those with reduced or normal levels, although not statistically significant. This report weakens the hypothesis of FGFR3 as an oncogene of importance in cervical carcinomas.

  11. Logit Model of Analysing the Factors Affecting the Adoption of Goat Raising Activity by Farmers in the Non-pastoral Centre Region of Cameroon

    Directory of Open Access Journals (Sweden)

    Folefack, AJZ.

    2018-01-01

    Full Text Available Three years after the beginning of a goat project in the Centre region of Cameroon, the engagement of farmers in this activity has been timid. As this region is not a traditional pastoral zone, farmers have not yet incorporated crop-livestock integration into their habits. Hence, this paper uses a logistic regression approach in order to analyse the factors affecting the adoption of goat raising activity by farmers of this locality. The computed odds ratios indicate that the practice of goat raising activity is significantly influenced by the farmer's age, gender, farming experience, practice of other livestock activities, frequency of contact with extension agents, access to credit and farm income. However, being a goat raiser does not depend on the farmer's marital status, education, farm size, household size, or membership in a common initiative group. The study therefore recommends that the government authorities should give more attention to the significant factors so as to popularize the goat raising activity in this region.
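
The kind of logit model described above, with adoption as a binary outcome and an odds ratio for each factor, can be sketched as follows; the variable names and simulated data are hypothetical, not the survey data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
age = rng.uniform(20, 70, n)
credit = rng.binomial(1, 0.4, n)            # access to credit (0/1)
extension = rng.poisson(2, n)               # contacts with extension agents
logit_p = -4 + 0.05 * age + 1.0 * credit + 0.4 * extension
adopt = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # goat raising adopted?

X = sm.add_constant(np.column_stack([age, credit, extension]))
fit = sm.Logit(adopt, X).fit(disp=0)

odds_ratios = np.exp(fit.params)            # exponentiated coefficients
or_ci = np.exp(fit.conf_int())              # 95% CIs on the odds-ratio scale
for name, oratio, (lo, hi) in zip(["const", "age", "credit", "extension"],
                                  odds_ratios, or_ci):
    print(f"{name:10s} OR = {oratio:5.2f}   95% CI [{lo:5.2f}, {hi:5.2f}]")
```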

  12. Three Dimensional Parametric Analyses on Effect of Fibre Orientation for Stress Concentration Factor in Fibrous Composite Cantilever Plate with Central Circular Hole under Transverse Loading

    Directory of Open Access Journals (Sweden)

    Nitin Jain

    2012-10-01

    Full Text Available ABSTRACT: A number of analytical and numerical techniques are available for the two-dimensional study of stress concentration around the hole(s) in isotropic and composite plates subjected to in-plane or transverse loading conditions. The information on the techniques for three-dimensional analyses of the stress concentration factor (SCF) around the hole in isotropic and composite plates subjected to transverse loading conditions is, however, limited. The present work emphasizes the effect of fibre orientation (q) on the stress concentration factor in fibrous composite plates with a central circular hole under transverse static loading conditions. The work is carried out for cantilever fibrous composite plates. The effects of thickness-to-width (T/A) and diameter-to-width (D/A) ratios upon SCF at different fibre orientations are studied. Plates of four different composite materials were considered for hole analysis in order to determine the sensitivity of SCF to the elastic constants. Deflections in the transverse direction were calculated and analysed. All results are presented in graphical form and discussed. The finite element formulation and its analysis were carried out using the ANSYS package.

  13. Assessing public speaking fear with the short form of the Personal Report of Confidence as a Speaker scale: confirmatory factor analyses among a French-speaking community sample

    Directory of Open Access Journals (Sweden)

    Heeren A

    2013-05-01

    Full Text Available Alexandre Heeren,1,2 Grazia Ceschi,3 David P Valentiner,4 Vincent Dethier,1 Pierre Philippot1 1Université Catholique de Louvain, Louvain-la-Neuve, Belgium; 2National Fund for Scientific Research, Brussels, Belgium; 3Department of Psychology, University of Geneva, Geneva, Switzerland; 4Department of Psychology, Northern Illinois University, DeKalb, IL, USA Background: The main aim of this study was to assess the reliability and structural validity of the French version of the 12-item version of the Personal Report of Confidence as Speaker (PRCS), one of the most promising measurements of public speaking fear. Methods: A total of 611 French-speaking volunteers were administered the French versions of the short PRCS, the Liebowitz Social Anxiety Scale, the Fear of Negative Evaluation scale, as well as the Trait version of the Spielberger State-Trait Anxiety Inventory and the Beck Depression Inventory-II, which assess the level of anxious and depressive symptoms, respectively. Results: Regarding its structural validity, confirmatory factor analyses indicated a single-factor solution, as implied by the original version. Good scale reliability (Cronbach’s alpha = 0.86) was observed. The item discrimination analysis suggested that all the items contribute to the overall scale score reliability. The French version of the short PRCS showed significant correlations with the Liebowitz Social Anxiety Scale (r = 0.522), the Fear of Negative Evaluation scale (r = 0.414), the Spielberger State-Trait Anxiety Inventory (r = 0.516), and the Beck Depression Inventory-II (r = 0.361). Conclusion: The French version of the short PRCS is a reliable and valid measure for the evaluation of the fear of public speaking among a French-speaking sample. These findings have critical consequences for the measurement of psychological and pharmacological treatment effectiveness in public speaking fear among a French-speaking sample. Keywords: social phobia, public speaking, confirmatory

  14. Time dependent auto-correlation, autospectrum and decay ratio estimation of transient signals in JET soft X-ray records

    International Nuclear Information System (INIS)

    Por, G.

    1999-08-01

    A program package was developed to estimate the time dependent auto-correlation function (ACF) from the time signals of soft X-ray records taken along the various lines of sight in JET-SHOTS, and also to estimate the time dependent Decay Ratio (DR) from that. On the basis of the ACF the time dependent auto-power spectral density (APSD) was also calculated. The steps and objectives of this work were: eliminating the white detection noise, trends and slow variation from the time signals, since ordinary methods can give a good estimate of the time dependent ACF and DR only for 'nearly' stationary signals, developing an automatic algorithm for finding the maxima and minima of the ACF, since they are the basis for DR estimation, evaluating and testing different DR estimators for JET-SHOT, with the aim of finding parts of the signals where the oscillating character is strong, estimating time dependent ACF and APSD that can follow the relatively fast variation in the time signal. The methods that we have developed for data processing of transient signals are: White detection noise removal and preparation for trend removal - weak components, white detection noise and high frequency components are filtered from the signal using the so-called soft-threshold wavelet filter. Removal of trends and slow variation - Three-point differentiation of the pre-filtered signal is used to remove trends and slow variation. Here we made use of the DERIV function of the IDL programming language. This leads to a filtered signal that has zero mean value in each time step. Calculation of the time dependent ACF - The signal treated by the two previous steps is used as the input. The calculated ACF value is added in each new time step, but the previously accumulated ACF value is multiplied by a weighting factor. Thus the new sample has 100% contribution, while the contributions from the previous samples are forgotten quickly. DR calculation - DR is a measure of the decay of an oscillating ACF. This parameter was shown
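
The time-dependent ACF update described above (each new sample contributes at full weight, while the previously accumulated values are multiplied by a weighting factor) can be sketched as below. The sketch assumes the signal has already been noise-filtered and detrended to zero mean, as in the steps listed; the lag range and forgetting value are illustrative choices, not those of the JET analysis:

```python
import numpy as np

def running_acf(signal, max_lag=64, forget=0.99):
    """Time-dependent ACF with an exponential forgetting factor.

    New lag products enter at full weight while the accumulated values are
    multiplied by `forget`, so the estimate can follow relatively fast
    changes in a (pre-filtered, zero-mean) transient signal.
    """
    acc = np.zeros(max_lag + 1)          # accumulated lag products
    recent = np.zeros(max_lag + 1)       # recent[k] holds x[t - k]
    history = []
    for x in signal:
        recent[1:] = recent[:-1]         # shift the delay line
        recent[0] = x
        acc = forget * acc + x * recent  # acc[k] ~ weighted sum of x[t] x[t-k]
        history.append(acc / acc[0] if acc[0] > 0 else acc.copy())
    return np.array(history)             # one normalised ACF per time step

# Example: an oscillation that switches on halfway through a noisy record
rng = np.random.default_rng(0)
t = np.arange(5000)
x = np.sin(2 * np.pi * t / 25) * (t > 2500) + 0.3 * rng.normal(size=t.size)
acfs = running_acf(x)
print(acfs[2000, :5])   # before the oscillation: ACF decays quickly
print(acfs[4000, :5])   # after: strongly oscillating ACF
```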

  15. Real-time autocorrelator for fluorescence correlation spectroscopy based on graphical-processor-unit architecture: method, implementation, and comparative studies

    Science.gov (United States)

    Laracuente, Nicholas; Grossman, Carl

    2013-03-01

    We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphical processor units (GPUs). Recent developments in hardware and software have allowed for general purpose computing with inexpensive GPU hardware. These devices are more suited for emulating hardware autocorrelators than traditional CPU-based software applications by emphasizing parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a 3.2 GHz Intel i5 CPU-based computer running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College
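
A much-simplified, CPU-only sketch of the multi-tau binning idea referred to above (a fixed number of lag points per level, with the signal coarsened by a factor of 2 between levels) is shown below; the GPU memory mapping and the normalization details of the actual correlator are not reproduced:

```python
import numpy as np

def multi_tau_autocorr(counts, m=16, levels=8):
    """Simplified multi-tau autocorrelation with quasi-logarithmic lag spacing."""
    x = np.asarray(counts, dtype=float)
    dt = 1.0
    lags, g = [], []
    for level in range(levels):
        n = len(x)
        if n < 2 * m:
            break
        mean = x.mean()
        start = 1 if level == 0 else m // 2
        for k in range(start, m):
            prod = np.mean(x[: n - k] * x[k:])
            lags.append(k * dt)
            g.append(prod / mean**2)       # normalised correlation g(tau)
        # coarsen: average adjacent samples, double the effective time step
        x = 0.5 * (x[: (n // 2) * 2 : 2] + x[1 : (n // 2) * 2 : 2])
        dt *= 2.0
    return np.array(lags), np.array(g)

# Example on uncorrelated Poisson photon counts: g(tau) stays close to 1
rng = np.random.default_rng(1)
tau, g = multi_tau_autocorr(rng.poisson(5, size=2**18))
print(tau[:5], g[:5])
```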

  16. The Effect of Nonzero Autocorrelation Coefficients on the Distributions of Durbin-Watson Test Estimator: Three Autoregressive Models

    Directory of Open Access Journals (Sweden)

    Mei-Yu LEE

    2014-11-01

    Full Text Available This paper investigates the effect of nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models that have different variance-covariance matrix assumptions. We show that the expected values and variances of the Durbin-Watson test estimator are slightly different, but the skewness and kurtosis coefficients are considerably different among the three models. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but not the same as those of the autoregressive model cut by one-lagged period. Second, in the large-sample case the three models have the same expected values; however, the autoregressive model cut by one-lagged period shows different shapes of the variance, skewness and kurtosis coefficients from the other two models. This implies that large samples lead to the same expected values, 2(1 – ρ0), whatever variance-covariance matrix of the errors is assumed. Finally, comparing the two sample-size cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related to the expected values, have an inverted-U relation with the variances, a cubic relation with the skewness coefficients, and a U-shaped relation with the kurtosis coefficients.
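
The statistic under study is the Durbin-Watson test estimator, whose large-sample expected value is 2(1 – ρ0). A small simulation sketch, with an AR(1) error process standing in for the paper's three model variants:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic d = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Simulate AR(1) errors with autocorrelation coefficient rho and look at the
# sampling distribution of d; for large samples d is close to 2 * (1 - rho).
rng = np.random.default_rng(0)
rho, n, reps = 0.5, 200, 2000
stats = []
for _ in range(reps):
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    stats.append(durbin_watson(e))
print(np.mean(stats), np.var(stats))   # mean close to 2 * (1 - rho) = 1.0
```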

  17. Velocity auto-correlation and hot-electron diffusion constant in GaAs and InP

    International Nuclear Information System (INIS)

    Deb Roy, M.

    1982-01-01

    Auto-correlation functions of the fluctuations in the electron velocities transverse and parallel to the applied electric field are calculated by the Monte Carlo method for GaAs and InP at three different values of field strength which are around three times the threshold field for negative differential mobility in each case. From these the frequency-dependent diffusion coefficients transverse and parallel to the applied field and the figure of merit for noise performance when used in a microwave amplifying device are determined. The results indicate that the transverse auto-correlation function C_t(s) falls nearly exponentially to zero with increasing interval s while the parallel function C_p(s) falls sharply, attains a minimum and then rises towards zero. In each case a higher field gives a higher rate of fall and makes the correlation functions zero within a shorter interval. The transverse diffusion coefficient falls monotonically with the frequency but the parallel diffusion coefficient generally starts with a low value at low frequencies, rises to a maximum and then falls. InP, with a larger separation between the central and the satellite valleys, has a higher value of the low frequency transverse diffusion coefficient and a lower value of its parallel counterpart. The noise performance of microwave semiconductor amplifying devices depends mainly on the low frequency parallel diffusion constant and consequently devices made out of materials like InP with a large separation between valleys are likely to have better noise characteristics. (orig.)
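
The two quantities discussed above can be estimated from any sampled velocity record: the autocorrelation C(s) = <v(t) v(t+s)> and the frequency-dependent diffusion coefficient D(w) = Int_0^inf C(s) cos(w s) ds. A generic numerical sketch is given below; the paper obtains v(t) from Monte Carlo transport simulation, which is not reproduced here, so an exponentially correlated synthetic velocity is used instead:

```python
import numpy as np

def vacf(v, max_lag):
    """Velocity autocorrelation C(s) = <v(t) v(t+s)> from a sampled trace."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    return np.array([np.mean(v[: n - k] * v[k:]) for k in range(max_lag)])

def diffusion_spectrum(C, dt, omegas):
    """Frequency-dependent diffusion coefficient D(w) = Int C(s) cos(w s) ds."""
    t = np.arange(len(C)) * dt
    return np.array([np.trapz(C * np.cos(w * t), t) for w in omegas])

# Exponentially correlated velocity fluctuations (qualitatively like the
# transverse case described above)
rng = np.random.default_rng(3)
n, a = 200_000, 0.95
v = np.zeros(n)
for t in range(1, n):
    v[t] = a * v[t - 1] + rng.normal()

C = vacf(v, 400)
D = diffusion_spectrum(C, dt=1.0, omegas=np.array([0.0, 0.1, 0.5, 1.0]))
print(D)   # decreases with frequency for a monotonically decaying VACF
```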

  18. Description of OPRA: A Danish database designed for the analyses of risk factors associated with 30-day hospital readmission of people aged 65+ years.

    Science.gov (United States)

    Pedersen, Mona K; Nielsen, Gunnar L; Uhrenfeldt, Lisbeth; Rasmussen, Ole S; Lundbye-Christensen, Søren

    2017-08-01

    To describe the construction of the Older Person at Risk Assessment (OPRA) database, the ability to link this database with existing data sources obtained from Danish nationwide population-based registries and to discuss its research potential for the analyses of risk factors associated with 30-day hospital readmission. We reviewed Danish nationwide registries to obtain information on demographic and social determinants as well as information on health and health care use in a population of hospitalised older people. The sample included all people aged 65+ years discharged from Danish public hospitals in the period from 1 January 2007 to 30 September 2010. We used personal identifiers to link and integrate the data from all events of interest with the outcome measures in the OPRA database. The database contained records of the patients, admissions and variables of interest. The cohort included 1,267,752 admissions for 479,854 unique people. The rate of 30-day all-cause acute readmission was 18.9% (n = 239,077) and the overall 30-day mortality was 5.0% (n = 63,116). The OPRA database provides the possibility of linking data on health and life events in a population of people moving into retirement and ageing. Construction of the database makes it possible to outline individual life and health trajectories over time, transcending organisational boundaries within health care systems. The OPRA database is multi-component and multi-disciplinary in orientation and has been prepared to be used in a wide range of subgroup analyses, including different outcome measures and statistical methods.

  19. Anti-Vascular Endothelial Growth Factor Comparative Effectiveness Trial for Diabetic Macular Edema: Additional Efficacy Post Hoc Analyses of a Randomized Clinical Trial.

    Science.gov (United States)

    Jampol, Lee M; Glassman, Adam R; Bressler, Neil M; Wells, John A; Ayala, Allison R

    2016-12-01

    Post hoc analyses from the Diabetic Retinopathy Clinical Research Network randomized clinical trial comparing aflibercept, bevacizumab, and ranibizumab for diabetic macular edema (DME) might influence interpretation of study results. To provide additional outcomes comparing 3 anti-vascular endothelial growth factor (VEGF) agents for DME. Post hoc analyses performed from May 3, 2016, to June 21, 2016, of a randomized clinical trial performed from August 22, 2012, to September 23, 2015, of 660 participants comparing 3 anti-VEGF treatments in eyes with center-involved DME causing vision impairment. Randomization to intravitreous aflibercept (2.0 mg), bevacizumab (1.25 mg), or ranibizumab (0.3 mg) administered up to monthly based on a structured retreatment regimen. Focal/grid laser treatment was added after 6 months for the treatment of persistent DME. Change in visual acuity (VA) area under the curve and change in central subfield thickness (CST) within subgroups based on whether an eye received laser treatment for DME during the study. Post hoc analyses were performed for 660 participants (mean [SD] age, 61 [10] years; 47% female, 65% white, 16% black or African American, 16% Hispanic, and 3% other). For eyes with an initial VA of 20/50 or worse, VA improvement was greater with aflibercept than the other agents at 1 year but superior only to bevacizumab at 2 years. Mean (SD) letter change in VA over 2 years (area under curve) was greater with aflibercept (+17.1 [9.7]) than with bevacizumab (+12.1 [9.4]; 95% CI, +1.6 to +7.3; P grid laser treatment was performed for DME, the only participants to have a substantial reduction in mean CST between 1 and 2 years were those with a baseline VA of 20/50 or worse receiving bevacizumab and laser treatment (mean [SD], -55 [108] µm; 95% CI, -82 to -28 µm; P grid laser treatment, ceiling and floor effects, or both may account for mean thickness reductions noted only in bevacizumab-treated eyes between 1 and 2 years

  20. Associations between retinol-binding protein 4 and cardiometabolic risk factors and subclinical atherosclerosis in recently postmenopausal women: cross-sectional analyses from the KEEPS study

    Directory of Open Access Journals (Sweden)

    Huang Gary

    2012-07-01

    Full Text Available Abstract Background The published literature regarding the relationships between retinol-binding protein 4 (RBP4) and cardiometabolic risk factors and subclinical atherosclerosis is conflicting, likely due, in part, to limitations of frequently used RBP4 assays. Prior large studies have not utilized the gold-standard western blot analysis of RBP4 levels. Methods Full-length serum RBP4 levels were measured by western blot in 709 postmenopausal women screened for the Kronos Early Estrogen Prevention Study. Cross-sectional analyses related RBP4 levels to cardiometabolic risk factors, carotid artery intima-media thickness (CIMT), and coronary artery calcification (CAC). Results The mean age of women was 52.9 (± 2.6) years, and the median RBP4 level was 49.0 (interquartile range 36.9-61.5) μg/mL. Higher RBP4 levels were weakly associated with higher triglycerides (age, race, and smoking-adjusted partial Spearman correlation coefficient = 0.10; P = 0.01), but were unrelated to blood pressure, cholesterol, C-reactive protein, glucose, insulin, and CIMT levels (all partial Spearman correlation coefficients ≤0.06, P > 0.05). Results suggested a curvilinear association between RBP4 levels and CAC, with women in the bottom and upper quartiles of RBP4 having higher odds of CAC (odds ratio [95% confidence interval] 2.10 [1.07-4.09], 2.00 [1.02-3.92], 1.64 [0.82-3.27] for the 1st, 3rd, and 4th RBP4 quartiles vs. the 2nd quartile). However, a squared RBP4 term in regression modeling was non-significant (P = 0.10). Conclusions In these healthy, recently postmenopausal women, higher RBP4 levels were weakly associated with elevations in triglycerides and with CAC, but not with other risk factors or CIMT. These data using the gold standard of RBP4 methodology only weakly support the possibility that perturbations in RBP4 homeostasis may be an additional risk factor for subclinical coronary atherosclerosis. Trial registration ClinicalTrials.gov number NCT

  1. A multi-group confirmatory factor analyses of the LupusPRO between southern California and Filipino samples of patients with systemic lupus erythematosus.

    Science.gov (United States)

    Azizoddin, D R; Olmstead, R; Cost, C; Jolly, M; Ayeroff, J; Racaza, G; Sumner, L A; Ormseth, S; Weisman, M; Nicassio, P M

    2017-08-01

    Introduction Systemic lupus erythematosus (SLE) leads to a range of biopsychosocial health outcomes through an unpredictable and complex disease path. The LupusPRO is a comprehensive, self-report measure developed specifically for populations with SLE, which assesses both health-related quality of life and non-health related quality of life. Given its increasingly widespread use, additional research is needed to evaluate the psychometric integrity of the LupusPRO across diverse populations. The objectives of this study were to evaluate the performance of the LupusPRO in two divergent patient samples and the model fit between both samples. Methods Two diverse samples with SLE included 136 patients from an ethnically-diverse, urban region in southern California and 100 from an ethnically-homogenous, rural region in Manila, Philippines. All patients met the ACR classification criteria for SLE. Confirmatory factor analyses (CFAs) were conducted in each sample separately and combined to provide evidence of the factorial integrity of the 12 subscales in the LupusPRO. Results Demographic analyses indicated significant differences in age, disease activity and duration, education, income, insurance, and medication use between groups. Results of the separate CFAs indicated moderate fit to the data for the hypothesized 12-factor model for both the Manila and southern California groups, respectively [χ2(794) = 1283.32, p < 0.001, Comparative Fit Index (CFI) = 0.793; χ2(794) = 1398.44, p < 0.001, CFI = 0.858]. When the factor structures of the LupusPRO in the southern California and Manila groups were constrained to be equal between the two groups, findings revealed that the factor structures of measured variables fit the two groups reasonably well [χ2(1697) = 2950.413, df = 1697, p < 0.000; CFI = 0.811]. After removing seven constraints and eight correlations suggested by the Lagrange multiplier test, the model fit improved

  2. Analyses of PWR spent fuel composition using SCALE and SWAT code systems to find correction factors for criticality safety applications adopting burnup credit

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung; Suyama, Kenya; Mochizuki, Hiroki; Okuno, Hiroshi; Nomura, Yasushi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    The isotopic composition calculations were performed for 26 spent fuel samples from the Obrigheim PWR reactor and 55 spent fuel samples from 7 PWR reactors using the SAS2H module of the SCALE4.4 code system with 27, 44 and 238 group cross-section libraries and the SWAT code system with the 107 group cross-section library. For the analyses of samples from the Obrigheim PWR reactor, geometrical models were constructed for each of SCALE4.4/SAS2H and SWAT. For the analyses of samples from 7 PWR reactors, the geometrical model already adopted in the SCALE/SAS2H was directly converted to the model of SWAT. The four kinds of calculation results were compared with the measured data. For convenience, the ratio of the measured to calculated values was used as a parameter. When the ratio is less than unity, the calculation overestimates the measurement; as the ratio becomes closer to unity, the agreement improves. For many important nuclides for burnup credit criticality safety evaluation, the four methods applied in this study showed good coincidence with measurements in general. More precise observations showed, however: (1) Ratios less than unity were found for Pu-239 and -241 for 16 selected samples out of the 26 samples from the Obrigheim reactor (10 samples were excluded because their burnups were measured with the Cs-137 non-destructive method, which is less reliable than the Nd-148 method used for the remaining 16 samples); (2) Ratios larger than unity were found for Am-241 and Cm-242 for both the 16 and 55 samples; (3) Ratios larger than unity were found for Sm-149 for the 55 samples; (4) SWAT was generally accompanied by larger ratios than those of SAS2H, with some exceptions. Based on the measured-to-calculated ratios for 71 samples of a combined set in which the 16 selected samples and the 55 samples were included, correction factors by which the calculated isotopic compositions should be multiplied were generated for a conservative estimate of the neutron multiplication factor.
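
The correction-factor step described above starts from measured-to-calculated (M/C) ratios for each nuclide. A small sketch with hypothetical numbers is given below; the study combined 71 samples and treated each nuclide separately, and the ±2σ bounds shown are only one illustrative way to pick a bounding factor, not the report's procedure:

```python
import numpy as np

# Hypothetical measured (M) and calculated (C) concentrations of one nuclide
# across a handful of spent-fuel samples (arbitrary units).
measured   = np.array([0.98, 1.04, 0.95, 1.01, 0.97, 1.06])
calculated = np.array([1.00, 1.00, 1.00, 1.00, 1.00, 1.00])

ratio = measured / calculated            # M/C; < 1 means the calculation overestimates
mean, sd = ratio.mean(), ratio.std(ddof=1)
print(f"mean M/C = {mean:.3f} +/- {sd:.3f}")

# A bounding correction factor can be taken from the tail of the M/C
# distribution; which tail is conservative depends on whether the nuclide
# increases or decreases the neutron multiplication factor.
print("candidate bounding factors:", mean - 2 * sd, mean + 2 * sd)
```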

  3. The Development of Protein Microarrays and Their Applications in DNA-Protein and Protein-Protein Interaction Analyses of Arabidopsis Transcription Factors

    Science.gov (United States)

    Gong, Wei; He, Kun; Covington, Mike; Dinesh-Kumar, S. P.; Snyder, Michael; Harmer, Stacey L.; Zhu, Yu-Xian; Deng, Xing Wang

    2009-01-01

    We used our collection of Arabidopsis transcription factor (TF) ORFeome clones to construct protein microarrays containing as many as 802 TF proteins. These protein microarrays were used for both protein-DNA and protein-protein interaction analyses. For protein-DNA interaction studies, we examined AP2/ERF family TFs and their cognate cis-elements. By careful comparison of the DNA-binding specificity of 13 TFs on the protein microarray with previous non-microarray data, we showed that protein microarrays provide an efficient and high throughput tool for genome-wide analysis of TF-DNA interactions. This microarray protein-DNA interaction analysis allowed us to derive a comprehensive view of DNA-binding profiles of AP2/ERF family proteins in Arabidopsis. It also revealed four TFs that bound the EE (evening element) and had the expected phased gene expression under clock-regulation, thus providing a basis for further functional analysis of their roles in clock regulation of gene expression. We also developed procedures for detecting protein interactions using this TF protein microarray and discovered four novel partners that interact with HY5, which can be validated by yeast two-hybrid assays. Thus, plant TF protein microarrays offer an attractive high-throughput alternative to traditional techniques for TF functional characterization on a global scale. PMID:19802365

  4. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  5. Insights into the phylogeny of Northern Hemisphere Armillaria: Neighbor-net and Bayesian analyses of translation elongation factor 1-α gene sequences.

    Science.gov (United States)

    Klopfenstein, Ned B; Stewart, Jane E; Ota, Yuko; Hanna, John W; Richardson, Bryce A; Ross-Davis, Amy L; Elías-Román, Rubén D; Korhonen, Kari; Keča, Nenad; Iturritxa, Eugenia; Alvarado-Rosales, Dionicio; Solheim, Halvor; Brazee, Nicholas J; Łakomy, Piotr; Cleary, Michelle R; Hasegawa, Eri; Kikuchi, Taisei; Garza-Ocañas, Fortunato; Tsopelas, Panaghiotis; Rigling, Daniel; Prospero, Simone; Tsykun, Tetyana; Bérubé, Jean A; Stefani, Franck O P; Jafarpour, Saeideh; Antonín, Vladimír; Tomšovský, Michal; McDonald, Geral I; Woodward, Stephen; Kim, Mee-Sook

    2017-01-01

    Armillaria possesses several intriguing characteristics that have inspired wide interest in understanding phylogenetic relationships within and among species of this genus. Nuclear ribosomal DNA sequence-based analyses of Armillaria provide only limited information for phylogenetic studies among widely divergent taxa. More recent studies have shown that translation elongation factor 1-α (tef1) sequences are highly informative for phylogenetic analysis of Armillaria species within diverse global regions. This study used Neighbor-net and coalescence-based Bayesian analyses to examine phylogenetic relationships of newly determined and existing tef1 sequences derived from diverse Armillaria species from across the Northern Hemisphere, with Southern Hemisphere Armillaria species included for reference. Based on the Bayesian analysis of tef1 sequences, Armillaria species from the Northern Hemisphere are generally contained within the following four superclades, which are named according to the specific epithet of the most frequently cited species within the superclade: (i) Socialis/Tabescens (exannulate) superclade including Eurasian A. ectypa, North American A. socialis (A. tabescens), and Eurasian A. socialis (A. tabescens) clades; (ii) Mellea superclade including undescribed annulate North American Armillaria sp. (Mexico) and four separate clades of A. mellea (Europe and Iran, eastern Asia, and two groups from North America); (iii) Gallica superclade including Armillaria Nag E (Japan), multiple clades of A. gallica (Asia and Europe), A. calvescens (eastern North America), A. cepistipes (North America), A. altimontana (western USA), A. nabsnona (North America and Japan), and at least two A. gallica clades (North America); and (iv) Solidipes/Ostoyae superclade including two A. solidipes/ostoyae clades (North America), A. gemina (eastern USA), A. solidipes/ostoyae (Eurasia), A. cepistipes (Europe and Japan), A. sinapina (North America and Japan), and A. borealis

  6. Determination of modulation transfer function of a printer by measuring the autocorrelation of the transmission function of a printed Ronchi grating

    International Nuclear Information System (INIS)

    Madanipour, Khosro; Tavassoly, Mohammad T.

    2009-01-01

    We show theoretically and verify experimentally that the modulation transfer function (MTF) of a printing system can be determined by measuring the autocorrelation of a printed Ronchi grating. In practice, two similar Ronchi gratings are printed on two transparencies and the transparencies are superimposed with parallel grating lines. Then, the gratings are uniformly illuminated and the transmitted light from a large section is measured versus the displacement of one grating with respect to the other in a grating pitch interval. This measurement provides the required autocorrelation function for determination of the MTF
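
One way to see why the superposed-grating measurement yields the MTF: if the printed transmission is the ideal Ronchi grating convolved with the printer's line spread function, each Fourier coefficient of the measured autocorrelation is attenuated by MTF squared at the corresponding grating harmonic. The following simulation sketch illustrates that relation; it is an assumption-laden toy model (Gaussian blur, circular autocorrelation), not the authors' experimental procedure:

```python
import numpy as np

def circ_autocorr(t):
    """Circular autocorrelation of a periodic transmission profile."""
    return np.array([np.mean(t * np.roll(t, k)) for k in range(len(t))])

n = 4096
x = np.linspace(0.0, 1.0, n, endpoint=False)
pitch = 1 / 16
ideal = (np.sin(2 * np.pi * x / pitch) > 0).astype(float)   # ideal Ronchi grating

# Simulated "printed" grating: ideal profile blurred by a Gaussian spread function
sigma = pitch / 6
kernel = np.exp(-0.5 * ((x - 0.5) / sigma) ** 2)
kernel /= kernel.sum()
printed = np.real(np.fft.ifft(np.fft.fft(ideal) * np.fft.fft(np.fft.ifftshift(kernel))))

P_ideal = np.abs(np.fft.rfft(circ_autocorr(ideal)))
P_print = np.abs(np.fft.rfft(circ_autocorr(printed)))

harmonics = np.arange(1, 6, 2) * int(1 / pitch)   # odd harmonics of the grating
mtf = np.sqrt(P_print[harmonics] / P_ideal[harmonics])
print(np.round(mtf, 3))    # estimated MTF at the first few odd harmonics
```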

  7. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    Science.gov (United States)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
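
The central idea above, treating the absorption function E(m) as a nuisance parameter with a maximum-entropy (Gaussian) prior and reporting a posterior density for the soot volume fraction rather than a point estimate, can be sketched with a toy one-measurement model. The forward model y = fv * E(m) and all numbers below are illustrative assumptions, not the AC-LII physics:

```python
import numpy as np

rng = np.random.default_rng(1)
true_fv, true_Em, noise_sd = 2.0, 0.35, 0.05
y_obs = true_fv * true_Em + rng.normal(0, noise_sd)     # one noisy observation

fv_grid = np.linspace(0.1, 5.0, 400)
Em_grid = np.linspace(0.2, 0.5, 300)
FV, EM = np.meshgrid(fv_grid, Em_grid, indexing="ij")

prior_Em = np.exp(-0.5 * ((EM - 0.35) / 0.05) ** 2)     # max-entropy (Gaussian) prior
likelihood = np.exp(-0.5 * ((y_obs - FV * EM) / noise_sd) ** 2)

joint = likelihood * prior_Em
posterior_fv = joint.sum(axis=1)                        # marginalise over E(m)
posterior_fv /= np.trapz(posterior_fv, fv_grid)

mean_fv = np.trapz(fv_grid * posterior_fv, fv_grid)
sd_fv = np.sqrt(np.trapz((fv_grid - mean_fv) ** 2 * posterior_fv, fv_grid))
print(mean_fv, sd_fv)    # a posterior mean and spread rather than a point estimate
```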

  8. A New Multi-Gaussian Auto-Correlation Function for the Modeling of Realistic Shot Peened Random Rough Surfaces

    International Nuclear Information System (INIS)

    Hassan, W.; Blodgett, M.

    2006-01-01

    Shot peening is the primary surface treatment used to create a uniform, consistent, and reliable sub-surface compressive residual stress layer in aero engine components. A by-product of the shot peening process is random surface roughness that can affect the measurements of the resulting residual stresses and therefore impede their NDE assessment. High frequency eddy current conductivity measurements have the potential to assess these residual stresses in Ni-base super alloys. However, the effect of random surface roughness is expected to become significant in the desired measurement frequency range of 10 to 100 MHz. In this paper, a new Multi-Gaussian (MG) auto-correlation function is proposed for modeling the resulting pseudo-random rough profiles. Its use in the calculation of the Apparent Eddy Current Conductivity (AECC) loss due to surface roughness is demonstrated. The numerical results presented need to be validated with experimental measurements

  9. The Green-Kubo formula, autocorrelation function and fluctuation spectrum for finite Markov chains with continuous time

    Energy Technology Data Exchange (ETDEWEB)

    Chen Yong; Chen Xi; Qian Minping [School of Mathematical Sciences, Peking University, Beijing 100871 (China)

    2006-03-17

    A general form of the Green-Kubo formula, which describes the fluctuations pertaining to all the steady states whether equilibrium or non-equilibrium, for a system driven by a finite Markov chain with continuous time (briefly, MC) {ξ t }, is shown. The equivalence of different forms of the Green-Kubo formula is exploited. We also look at the differences in terms of the autocorrelation function and the fluctuation spectrum between the equilibrium state and the non-equilibrium steady state. Also, if the MC is in the non-equilibrium steady state, we can always find a complex function ψ, such that the fluctuation spectrum of {φ(ξ t )} is non-monotonous in [0, + ∞)
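
For a finite continuous-time Markov chain with generator Q and stationary distribution π, the stationary autocorrelation of an observable φ is C(t) = Σ_i π_i φ(i) (e^{Qt} φ)(i) − (Σ_i π_i φ(i))^2, and a fluctuation spectrum can be obtained by Fourier transforming C(t). A small numerical sketch follows; the example chain, the observable, and the one-sided spectrum convention are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm, null_space

# Generator Q of a 3-state continuous-time Markov chain (rows sum to zero)
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  2.5, -3.0]])

pi = null_space(Q.T)[:, 0]          # stationary distribution: pi Q = 0
pi = pi / pi.sum()

phi = np.array([1.0, -0.5, 2.0])    # observable phi(xi_t)
mean_phi = pi @ phi

def autocorr(t):
    """C(t) = E_pi[phi(xi_0) phi(xi_t)] - (E_pi[phi])^2 via the semigroup e^{Qt}."""
    return pi @ (phi * (expm(Q * t) @ phi)) - mean_phi ** 2

# One-sided fluctuation spectrum S(w) = 2 * Int_0^T C(t) cos(w t) dt (truncated)
ts = np.linspace(0.0, 10.0, 2000)
C = np.array([autocorr(t) for t in ts])

def spectrum(w):
    return 2.0 * np.trapz(C * np.cos(w * ts), ts)

print(autocorr(0.0), spectrum(0.0), spectrum(1.0), spectrum(5.0))
```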

  10. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    Full Text Available The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of binary sequence sets, the aim is to select them based on the minimax criterion, by which a sequence is considered optimal for the intended application. In the course of the research the optimal sequences with order of up to 52 were obtained; the analysis of Run Length Encoding was carried out. The analysis showed regularities in the distribution of the number of runs of different lengths in the codes that are optimal under the chosen criterion, which would make it possible to optimize the searching process for such codes in the future.
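
The two ingredients mentioned above, the minimax (peak-sidelobe) criterion on the aperiodic autocorrelation and the run-length structure of a binary sequence, can be computed as follows; the Barker-13 sequence is used only as a familiar example of a low-sidelobe code, not one of the sequences from the article:

```python
import numpy as np
from itertools import groupby

def aperiodic_autocorr(seq):
    """Aperiodic autocorrelation of a +/-1 sequence for lags 0..N-1."""
    s = np.asarray(seq, dtype=float)
    n = len(s)
    return np.array([np.sum(s[: n - k] * s[k:]) for k in range(n)])

def peak_sidelobe(seq):
    """Minimax criterion: largest |autocorrelation| over nonzero lags."""
    c = aperiodic_autocorr(seq)
    return np.max(np.abs(c[1:]))

def run_lengths(seq):
    """Run-length encoding of the sequence (lengths of constant runs)."""
    return [len(list(g)) for _, g in groupby(seq)]

# Example: the length-13 Barker code has a peak sidelobe of 1
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(peak_sidelobe(barker13), run_lengths(barker13))
```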

  11. The Green-Kubo formula, autocorrelation function and fluctuation spectrum for finite Markov chains with continuous time

    International Nuclear Information System (INIS)

    Chen Yong; Chen Xi; Qian Minping

    2006-01-01

    A general form of the Green-Kubo formula, which describes the fluctuations pertaining to all the steady states whether equilibrium or non-equilibrium, for a system driven by a finite Markov chain with continuous time (briefly, MC) {ξ t }, is shown. The equivalence of different forms of the Green-Kubo formula is exploited. We also look at the differences in terms of the autocorrelation function and the fluctuation spectrum between the equilibrium state and the non-equilibrium steady state. Also, if the MC is in the non-equilibrium steady state, we can always find a complex function ψ, such that the fluctuation spectrum of {φ(ξ t )} is non-monotonous in [0, + ∞)

  12. Conifer R2R3-MYB transcription factors: sequence analyses and gene expression in wood-forming tissues of white spruce (Picea glauca)

    Directory of Open Access Journals (Sweden)

    Grima-Pettenati Jacqueline

    2007-03-01

    Full Text Available Abstract Background Several members of the R2R3-MYB family of transcription factors act as regulators of lignin and phenylpropanoid metabolism during wood formation in angiosperm and gymnosperm plants. The angiosperm Arabidopsis has over one hundred R2R3-MYB genes; however, only a few members of this family have been discovered in gymnosperms. Results We isolated and characterised full-length cDNAs encoding R2R3-MYB genes from the gymnosperms white spruce, Picea glauca (13 sequences), and loblolly pine, Pinus taeda L. (five sequences). Sequence similarities and phylogenetic analyses placed the spruce and pine sequences in diverse subgroups of the large R2R3-MYB family, although several of the sequences clustered closely together. We searched the highly variable C-terminal region of diverse plant MYBs for conserved amino acid sequences and identified 20 motifs in the spruce MYBs, nine of which have not previously been reported and three of which are specific to conifers. The number and length of the introns in spruce MYB genes varied significantly, but their positions were well conserved relative to angiosperm MYB genes. Quantitative RT-PCR of MYB gene transcript abundance in root and stem tissues revealed diverse expression patterns; three MYB genes were preferentially expressed in secondary xylem, whereas others were preferentially expressed in phloem or were ubiquitous. The MYB genes expressed in xylem, and three others, were up-regulated in the compression wood of leaning trees within 76 hours of induction. Conclusion Our survey of 18 conifer R2R3-MYB genes clearly showed a gene family structure similar to that of Arabidopsis. Three of the sequences are likely to play a role in lignin metabolism and/or wood formation in gymnosperm trees, including a close homolog of the loblolly pine PtMYB4, shown to regulate lignin biosynthesis in transgenic tobacco.

  13. Authorship characteristics of orthodontic randomized controlled trials, systematic reviews, and meta-analyses in non-orthodontic journals with impact factor.

    Science.gov (United States)

    Alqaydi, Ahlam R; Kanavakis, Georgios; Naser-Ud-Din, Shazia; Athanasiou, Athanasios E

    2017-12-08

    This study was conducted to explore authorship characteristics and publication trends of all orthodontic randomized controlled trials (RCTs), systematic reviews (SRs), and meta-analyses (MAs) published in non-orthodontic journals with impact factor (IF). Appropriate research strategies were developed to search for all articles published until December 2015, without restrictions regarding language or publication status. The initial search generated 4524 results, but after application of the inclusion criteria, the final number of articles was reduced to 274 (SRs: 152; MAs: 36; and RCTs: 86). Various authorship characteristics were recorded for each article. Frequency distributions for all parameters were explored with Pearson chi-square for independence at the 0.05 level of significance. More than half of the included publications were SRs (55.5 per cent), followed by RCTs (31.4 per cent) and MAs (13.1 per cent); one hundred seventy-eight (65 per cent) appeared in dental journals and 96 (35 per cent) were published in non-dental journals. The last decade was significantly more productive than the period before 2006, with 236 (86.1 per cent) articles published between 2006 and 2015. European countries produced 51.5 per cent of the total number of publications, followed by Asia (18.6 per cent) and North America (USA and Canada; 16.8 per cent). Studies published in journals without IF were not included. Level-1 evidence orthodontic literature published in non-orthodontic journals has significantly increased during 2006-15. This indicates a larger interest of other specialty journals in orthodontic related studies and a trend for orthodontic authors to publish their work in journals with impact in broader fields of dentistry and medicine. © The Author(s) 2017. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com

  14. Metanálisis: Relación entre factores psicosociales en el trabajo y absentismo laboral Meta-analyses: Relation between psychosocial factors in the work and labour absenteeism

    Directory of Open Access Journals (Sweden)

    Josep Mª Molina Aragonés

    2010-09-01

    …was relevant for inclusion. Control: The forest plot (Fig. 2) shows the result of the meta-analysis: the relative risk of suffering an episode of absenteeism is statistically significant, with a value of 1.36 (CI: 1.02-1.82) (Table 2). Demand: The risk of suffering an episode of absenteeism is not appreciable, with a value of 1.01 (CI: 0.91-1.11) (Table 3). Although demand, as a dimension of these psychosocial factors, does not appear to be a variable related to or influencing labour absenteeism, control is indeed associated with it, in a repeated and consistent way. Introduction: In accordance with the demand-control model, high labour demand, low control over one's own work and, in a very special way, the combination of both would represent an important risk for health. The balance between demand and control depends, according to this model, on the organization of the work and not on the individual characteristics of each person, although, of course, the influence of the psychosocial working environment can be, and in fact is, moderated by the characteristics of the individual response. Objectives: The study's objective was to analyse in a systematic way the studies that related psychosocial factors in enterprises to their effects on absenteeism, using the demand-control model of Karasek as the main element of assessment, and to perform a meta-analysis to evaluate the relation between them. Methods: Publications were identified from the electronic databases Medline (2004 to July 2009), Embase (2004 to March 2009), PsycInfo (2004 to July 2009) and the Cochrane Library (2004 to July 2009), without language restrictions. The keywords used were absenteeism, sickness absence, psychosocial, occupational and combinations of them, chosen initially for their inclusion in the meta-analyses. Additionally, the citations in the selected originals were reviewed to detect some other

  15. Treatment factors influencing survival in pancreatic carcinoma; Der Einfluss der Therapie auf das Ueberleben von Patienten mit Pankreaskarzinom. Eine Analyse von Einzelfaktoren

    Energy Technology Data Exchange (ETDEWEB)

    Warszawski, N.; Warszawski, A.; Schneider, B.M.; Roettinger, E.M. [Ulm Univ. (Germany). Abt. Radiologie 2 (Strahlentherapie); Link, K.H.; Gansauge, F. [Ulm Univ. (Germany). Abt. fuer Allgemeinchirurgie; Lutz, M.P. [Ulm Univ. (Germany). Abt. Innere Medizin 1

    1999-07-01

    Purpose: To identify the impact of treatment factors on overall survival in patients with pancreatic carcinoma. Patients and methods: We performed a follow-up study on 38 patients with adenocarcinoma of the pancreas treated from 1984 to 1998. 18/38 patients were resected. The irradiated volume included the primary tumor (or tumor bed) and regional lymph nodes. Thirty-seven patients additionally received chemotherapy consisting of mitoxantrone, 5-fluorouracil and cis-platin, either i.v. (14/38) or i.a. (23/38). The influence of treatment-related factors on the overall survival was tested. The biologically effective dose was calculated by the linear-quadratic model (α/β = 25 Gy), assuming a loss of 0.85 Gy per day once accelerated repopulation starts at day 28. Results: Treatment factors influencing overall survival were resection (p=0.02), overall treatment time (p=0.03) and biologically effective dose (p<0.002). Total dose and kind of chemotherapy had no significant influence. Treatment volume had a negative correlation (r=-0.5, p=0.06) with overall survival, without any correlation between tumor size, tumor stage, and treatment volume. In multivariate analysis only biologically effective dose remained significant (p=0.02). Conclusions: Along with surgery, biologically effective dose strongly influences overall survival in patients treated for pancreatic carcinoma. Treatment volume should be kept as small as possible and all efforts should be made to avoid treatment splits in radiation therapy. (orig.)

  16. C-reactive protein as a risk factor for coronary heart disease: a systematic review and meta-analyses for the U.S. Preventive Services Task Force.

    Science.gov (United States)

    Buckley, David I; Fu, Rongwei; Freeman, Michele; Rogers, Kevin; Helfand, Mark

    2009-10-06

    C-reactive protein (CRP) may help to refine global risk assessment for coronary heart disease (CHD), particularly among persons who are at intermediate risk on the basis of traditional risk factors alone. To assist the U.S. Preventive Services Task Force (USPSTF) in determining whether CRP should be incorporated into guidelines for CHD risk assessment. MEDLINE search of English-language articles (1966 to November 2007), supplemented by reference lists of reviews, pertinent studies, editorials, and Web sites and by expert suggestions. Prospective cohort, case-cohort, and nested case-control studies relevant to the independent predictive ability of CRP when used in intermediate-risk persons. Included studies were reviewed according to predefined criteria, and the quality of each study was rated. The validity of the body of evidence and the net benefit or harm of using CRP for CHD risk assessment were evaluated. The combined magnitude of effect was determined by meta-analysis. The body of evidence is of good quality, consistency, and applicability. For good studies that adjusted for all Framingham risk variables, the summary estimate of relative risk for incident CHD was 1.58 (95% CI, 1.37 to 1.83) for CRP levels greater than 3.0 mg/L compared with levels less than 1.0 mg/L. Analyses from 4 large cohorts were consistent in finding evidence that including CRP improves risk stratification among initially intermediate-risk persons. C-reactive protein has desirable test characteristics, and good data exist on the prevalence of elevated CRP levels in intermediate-risk persons. Limited evidence links changes in CRP level to primary prevention of CHD events. Study methods for measuring Framingham risk variables and other covariates varied. Ethnic and racial minority populations were poorly represented in most studies, limiting generalizability. Few studies directly assessed the effect of CRP on risk reclassification in intermediate-risk persons. Strong evidence indicates

  17. Molecular and functional analyses of novel anti-lipopolysaccharide factors in giant river prawn (Macrobrachium rosenbergii, De Man) and their expression responses under pathogen and temperature exposure.

    Science.gov (United States)

    Srisapoome, Prapansak; Klongklaew, Nawanith; Areechon, Nontawith; Wongpanya, Ratree

    2018-06-15

    Anti-lipopolysaccharide factor (ALF) is an immune-related protein that is crucially involved in immune defense mechanisms against invading pathogens in crustaceans. In the current study, three different ALFs of giant river prawn (Mr-ALF3, Mr-ALF8 and Mr-ALF9) were discovered. Based on sequence analysis, Mr-ALF3 and Mr-ALF9 were identified as new members of ALFs in crustaceans (groups F and G, respectively). Structurally, each newly identified Mr-ALF contained three α-helices packed against a four-stranded β-sheet bearing the LPS-binding motif, which usually binds to the cell wall components of bacteria. Tissue expression analysis using quantitative real-time RT-PCR (qRT-PCR) demonstrated that Mr-ALF3 was expressed in most tissues, and the highest expression was in the heart and hemocytes. The Mr-ALF8 gene was highly expressed in the heart, hemocytes, midgut, hepatopancreas and hindgut, while the Mr-ALF9 gene was modestly expressed in the heart and hemocytes. The transcriptional responses of the Mr-ALFs to Aeromonas hydrophila and hot/cold temperatures were investigated by qRT-PCR in the gills, hepatopancreas and hemocytes. We found that all Mr-ALFs were clearly suppressed in all tested tissues when the experimental prawns were exposed to extreme temperatures (25 and 35 °C). Moreover, the expression levels of these genes were significantly induced in all examined tissues by 2 different concentrations of A. hydrophila (1 × 10^6 and 1 × 10^9 CFU/ml), particularly 12 and 96 h after the injection. Finally, binding activity analysis of LPS-motif peptides of each Mr-ALF revealed that the LPS peptide of Mr-ALF3 exhibited the strongest adhesion to two pathogenic Gram-negative bacteria, A. hydrophila and Vibrio harveyi, and the non-pathogenic Gram-positive Bacillus megaterium. The results also showed that the Mr-ALF8 and Mr-ALF9 peptides had mild antimicrobial effects against similar tested bacteria. Based on information

  18. An attempt at solving the problem of autocorrelation associated with use of mean approach for pooling cross-section and time series in regression modelling

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1990-12-01

    The paradoxical nature of results of the mean approach in pooling cross-section and time series data has been identified to be caused by the presence in the normal equations of phenomena such as autocovariances, multicollinear covariances, drift covariances and drift multicollinear covariances. This paper considers the problem of autocorrelation and suggests ways of solving it. (author). 4 refs

  19. Long-time tails of the velocity autocorrelation function in 2D and 3D lattice gas cellular automata: a test of mode-coupling theory

    NARCIS (Netherlands)

    Hoef, M.A. van der; Frenkel, D.

    1990-01-01

    We report simulations of the velocity autocorrelation function (VACF) of a tagged particle in two- and three-dimensional lattice-gas cellular automata, using a new technique that is about a million times more efficient than the conventional techniques. The simulations clearly show the algebraic

  20. Analyse Factorielle d'une Batterie de Tests de Comprehension Orale et Ecrite (Factor Analysis of a Battery of Tests of Listening and Reading Comprehension). Melanges Pedagogiques, 1971.

    Science.gov (United States)

    Lonchamp, F.

    This is a presentation of the results of a factor analysis of a battery of tests intended to measure listening and reading comprehension in English as a second language. The analysis sought to answer the following questions: (1) whether the factor analysis method yields results when applied to tests which are not specifically designed for this…

  1. [Sociology as a Major Factor for the Psychiatrie-Enquete in the Federal Republic of Germany - Results from Expert Interviews and Document Analyses].

    Science.gov (United States)

    Söhner, Felicitas; Fangerau, Heiner; Becker, Thomas

    2018-05-01

    This paper examines the influence of sociology as a discipline on the Psychiatrie-Enquete by analysing interviews with expert (psychiatrist, psychologist, sociologist etc.) witnesses of the Enquete process and by analysing pertinent documents. 24 interviews were conducted and analysed using qualitative secondary analysis. Sociological texts and research results influenced the professional development of psychiatrists at the time. Cross-talk between psychiatry and sociology developed through seminal sociological analyses of psychiatric institutions and the interest taken in medical institutions in a number of sociological texts. Inter-disciplinary joint studies (of sociologists and psychiatrists) affected the research interest and professional behaviour of psychiatrists involved in the process on the way to the Psychiatrie-Enquete. Tenacity of psychiatrists' systems of opinion was dissolved by impulses from the sociological thought community. The forms of contact between the psychiatric and the sociological thought collective which we could reconstruct are an example of the evolution of knowledge and practice through transdisciplinary communication. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Insights into the phylogeny of Northern Hemisphere Armillaria: Neighbor-net and Bayesian analyses of translation elongation factor 1-α gene sequences

    Science.gov (United States)

    Ned B. Klopfenstein; Jane E. Stewart; Yuko Ota; John W. Hanna; Bryce A. Richardson; Amy L. Ross-Davis; Ruben D. Elias-Roman; Kari Korhonen; Nenad Keca; Eugenia Iturritxa; Dionicio Alvarado-Rosales; Halvor Solheim; Nicholas J. Brazee; Piotr Lakomy; Michelle R. Cleary; Eri Hasegawa; Taisei Kikuchi; Fortunato Garza-Ocanas; Panaghiotis Tsopelas; Daniel Rigling; Simone Prospero; Tetyana Tsykun; Jean A. Berube; Franck O. P. Stefani; Saeideh Jafarpour; Vladimir Antonin; Michal Tomsovsky; Geral I. McDonald; Stephen Woodward; Mee-Sook Kim

    2017-01-01

    Armillaria possesses several intriguing characteristics that have inspired wide interest in understanding phylogenetic relationships within and among species of this genus. Nuclear ribosomal DNA sequence–based analyses of Armillaria provide only limited information for phylogenetic studies among widely divergent taxa. More recent studies have shown that translation...

  3. Environmental Correlation and Spatial Autocorrelation of Soil Properties in Keller Peninsula, Maritime Antarctica

    Directory of Open Access Journals (Sweden)

    André Geraldo de Lima Moraes

    2018-01-01

    Full Text Available ABSTRACT: The pattern of variation in soil and landform properties in relation to environmental covariates is closely related to soil type distribution. The aim of this study was to apply digital soil mapping techniques to analysis of the pattern of soil property variation in relation to environmental covariates under periglacial conditions at Keller Peninsula, Maritime Antarctica. We considered the hypothesis that covariates normally used for environmental correlation elsewhere can be adequately employed in periglacial areas in Maritime Antarctica. For that purpose, 138 soil samples from 47 soil sites were collected for analysis of soil chemical and physical properties. We tested the correlation between soil properties (clay, potassium, sand, organic carbon, and pH) and environmental covariates. The environmental covariates selected were correlated with soil properties according to the terrain attributes of the digital elevation model (DEM). The models evaluated were linear regression, ordinary kriging, and regression kriging. The best performance was obtained using normalized height as a covariate, with an R2 of 0.59 for sand. In contrast, the lowest R2 of 0.15 was obtained for organic carbon, also using the regression kriging method. Overall, results indicate that, despite the predominant periglacial conditions, the environmental covariates normally used for digital terrain mapping of soil properties worldwide can be successfully employed for understanding the main variations in soil properties and soil-forming factors in this region.

  4. Auto-correlation in the motor/imaginary human EEG signals: A vision about the FDFA fluctuations.

    Directory of Open Access Journals (Sweden)

    Gilney Figueira Zebende

    Full Text Available In this paper we analyzed, by the FDFA root mean square fluctuation (rms) function, the motor/imaginary human activity produced by a 64-channel electroencephalography (EEG). We utilized the Physionet on-line databank, a publicly available database of human EEG signals, as a standardized reference database for this study. Herein, we report the use of the detrended fluctuation analysis (DFA) method for EEG analysis. We show that the complex time series of the EEG exhibits characteristic fluctuations depending on the analyzed channel in the scalp-recorded EEG. In order to demonstrate the effectiveness of the proposed technique, we analyzed four distinct channels represented here by F332, F637 (frontal region of the head) and P349, P654 (parietal region of the head). We verified that the amplitude of the FDFA rms function is greater for the frontal channels than for the parietal ones. To tabulate this information in a better way, we define and calculate the difference between FDFA (in log scale) for the channels, thus defining a new path for analysis of EEG signals. Finally, for the studied EEG signals, we obtain the auto-correlation exponent αDFA by the DFA method, which reveals self-affinity at specific time scales. Our results show that this strategy can be applied to study human brain activity in EEG processing.
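
    The abstract applies detrended fluctuation analysis (DFA) to EEG channels without spelling out the procedure. The snippet below is a minimal, generic DFA sketch (integrate the mean-subtracted signal, detrend it in windows of increasing size, and take the log-log slope of the fluctuation function); the window sizes and the white-noise test signal are arbitrary choices, not the study's Physionet settings.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        """Minimal detrended fluctuation analysis: return the scaling exponent alpha."""
        y = np.cumsum(x - np.mean(x))                    # integrated (profile) series
        fluct = []
        for s in scales:
            n_win = len(y) // s
            rms = []
            for i in range(n_win):
                seg = y[i*s:(i+1)*s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            fluct.append(np.mean(rms))
        alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)  # slope of log F(s) vs log s
        return alpha

    rng = np.random.default_rng(0)
    print(dfa_exponent(rng.standard_normal(4096)))       # ~0.5 for uncorrelated noise
    ```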

  5. A hierarchical model of daily stream temperature using air-water temperature synchronization, autocorrelation, and time lags

    Directory of Open Access Journals (Sweden)

    Benjamin H. Letcher

    2016-02-01

    Full Text Available Water temperature is a primary driver of stream ecosystems and commonly forms the basis of stream classifications. Robust models of stream temperature are critical as the climate changes, but estimating daily stream temperature poses several important challenges. We developed a statistical model that accounts for many challenges that can make stream temperature estimation difficult. Our model identifies the yearly period when air and water temperature are synchronized, accommodates hysteresis, incorporates time lags, deals with missing data and autocorrelation and can include external drivers. In a small stream network, the model performed well (RMSE = 0.59 °C), identified a clear warming trend (0.63 °C decade−1) and a widening of the synchronized period (29 d decade−1). We also carefully evaluated how missing data influenced predictions. Missing data within a year had a small effect on performance (∼0.05% average drop in RMSE with 10% fewer days with data). Missing all data for a year decreased performance (∼0.6 °C jump in RMSE), but this decrease was moderated when data were available from other streams in the network.

  6. Genetic evolution, plasticity, and bet-hedging as adaptive responses to temporally autocorrelated fluctuating selection: A quantitative genetic model.

    Science.gov (United States)

    Tufto, Jarle

    2015-08-01

    Adaptive responses to autocorrelated environmental fluctuations through evolution in mean reaction norm elevation and slope and an independent component of the phenotypic variance are analyzed using a quantitative genetic model. Analytic approximations expressing the mutual dependencies between all three response modes are derived and solved for the joint evolutionary outcome. Both genetic evolution in reaction norm elevation and plasticity are favored by slow temporal fluctuations, with plasticity, in the absence of microenvironmental variability, being the dominant evolutionary outcome for reasonable parameter values. For fast fluctuations, tracking of the optimal phenotype through genetic evolution and plasticity is limited. If residual fluctuations in the optimal phenotype are large and stabilizing selection is strong, selection then acts to increase the phenotypic variance (bet-hedging is adaptive). Otherwise, canalizing selection occurs. If the phenotypic variance increases with plasticity through the effect of microenvironmental variability, this shifts the joint evolutionary balance away from plasticity in favor of genetic evolution. If microenvironmental deviations experienced by each individual at the time of development and selection are correlated, however, more plasticity evolves. The adaptive significance of evolutionary fluctuations in plasticity and the phenotypic variance, transient evolution, and the validity of the analytic approximations are investigated using simulations. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
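
    The model above concerns adaptation to temporally autocorrelated fluctuating selection. Purely as a point of reference for what an autocorrelated environment looks like operationally, the sketch below simulates a stationary AR(1) series with a chosen lag-1 autocorrelation; the parameter values are arbitrary and the snippet is not the paper's quantitative genetic model.

    ```python
    import numpy as np

    def autocorrelated_environment(n, rho, sigma=1.0, seed=0):
        """Stationary AR(1) series with lag-1 autocorrelation rho and stationary s.d. sigma."""
        rng = np.random.default_rng(seed)
        theta = np.empty(n)
        theta[0] = rng.normal(0.0, sigma)
        innovation_sd = sigma * np.sqrt(1.0 - rho**2)    # keeps the marginal variance at sigma^2
        for t in range(1, n):
            theta[t] = rho * theta[t-1] + rng.normal(0.0, innovation_sd)
        return theta

    env = autocorrelated_environment(10000, rho=0.9)
    print(np.corrcoef(env[:-1], env[1:])[0, 1])          # close to 0.9
    ```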

  7. Conceptual aspects: analyses law, ethical, human, technical, social factors of development ICT, e-learning and intercultural development in different countries setting out the previous new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

    This paper, prepared by an international team of authors, focuses on the conceptual aspects: it analyses legal, ethical, human, technical and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary findings.

  8. Beck Depression Inventory-II: Factor Analyses with Three Groups of Midlife Women of African Descent in the Midwest, the South, and the U.S. Virgin Islands.

    Science.gov (United States)

    Gary, Faye A; Yarandi, Hossein; Evans, Edris; Still, Carolyn; Mickels, Prince; Hassan, Mona; Campbell, Doris; Conic, Ruzica

    2018-03-01

    This research encompasses a factor analysis of the Beck Depression Inventory-II (BDI-II), which involves three groups of midlife women of African descent who reside in the Midwest, the South, and the U.S. Virgin Islands. The purpose of the study was to determine the factor structure of the BDI-II when administered to a sample of women aged 40-65 of African descent who reside in the three distinct geographical regions of the United States. A correlational, descriptive design was used, and 536 women of African descent were invited to participate in face-to-face interviews that transpired in community settings. Results of the factor analysis revealed a two-factor explanation. Factor one included symptoms such as punishment feelings and pessimism (cognitive), and the second factor included symptoms such as tiredness and loss of energy (somatic-affective). The application of the Beck Depression Inventory-II among the three groups of women generated specific information about each group and common findings across the groups. Knowledge gained from the research could help to guide specific intervention programs for the three groups of women, and explicate the common approaches that could be used for the three groups.

  9. Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume III-A. Human Factors Analyses of User/ Operator Transactions with TACFIRE - The Tactical Fire Direction System

    Science.gov (United States)

    1981-02-01

    Research Product 81-26 - Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume III-A: Human Factors Analyses of User/Operator Transactions with TACFIRE - The Tactical Fire Direction System. Human Factors Technical Area; interim report, Oct 1979 - Feb 1981.

  10. Analysis of stress intensity factors for a new mechanical corrosion specimen; Analyse du facteur d'intensite de contrainte pour une nouvelle eprouvette de mecanique corrosion

    Energy Technology Data Exchange (ETDEWEB)

    Rassineux, B; Crouzet, D; Le Hong, S

    1996-03-01

    Electricite de France is conducting a research program to determine corrosion cracking rates in the steam generator Alloy 600 tubes of the primary system. The objective is to correlate the cracking rates with the specimen stress intensity factor K_I. One of the samples selected for the purpose of this study is the longitudinally notched specimen TEL (TEL: 'Tubulaire a Entailles Longitudinales'). This paper presents the analysis of the stress intensity factor and its experimental validation. The stress intensity factor has been evaluated for different loads using 3D finite element calculations with the Hellen-Parks and G(θ) methods. Both crack initiation and propagation are considered. As an assessment of the method, the numerical simulations are in good agreement with the fatigue crack growth rates measured experimentally for TEL and compact tension (CT) specimens. (authors). 8 refs., 6 figs., 2 tabs.

  11. Quantitative and mixed analyses to identify factors that affect cervical cancer screening uptake among lesbian and bisexual women and transgender men.

    Science.gov (United States)

    Johnson, Michael J; Mueller, Martina; Eliason, Michele J; Stuart, Gail; Nemeth, Lynne S

    2016-12-01

    The purposes of this study were to measure the prevalence of, and identify factors associated with, cervical cancer screening among a sample of lesbian, bisexual and queer women, and transgender men. Past research has found that lesbian, bisexual and queer women underuse cervical screening services. Because deficient screening remains the most significant risk factor for cervical cancer, it is essential to understand the differences between routine and nonroutine screeners. A convergent-parallel mixed methods design was used. A convenience sample of 21- to 65-year-old lesbian and bisexual women and transgender men were recruited in the USA from August-December 2014. Quantitative data were collected via a 48-item Internet questionnaire (N = 226), and qualitative data were collected through in-depth telephone interviews (N = 20) and open-ended questions on the Internet questionnaire. Seventy-three per cent of the sample were routine cervical screeners. The results showed that a constellation of factors influence the use of cervical cancer screening among lesbian, bisexual and queer women. Some of those factors overlap with the general female population, whereas others are specific to the lesbian, bisexual or queer identity. Routine screeners reported feeling more welcome in the health care setting, while nonroutine screeners reported more discrimination related to their sexual orientation and gender expression. Routine screeners were also more likely to be 'out' to their provider. The quantitative and qualitative factors were also compared and contrasted. Many of the factors identified in this study as influencing cervical cancer screening relate to the health care environment and to interactions between the patient and provider. Nurses should be involved with creating welcoming environments for lesbian, bisexual and queer women and their partners. Moreover, nurses play a large role in patient education and should promote self-care behaviours among lesbian women and transgender

  12. Analyzing big data in social media: Text and network analyses of an eating disorder forum.

    Science.gov (United States)

    Moessner, Markus; Feldhege, Johannes; Wolf, Markus; Bauer, Stephanie

    2018-05-10

    Social media plays an important role in everyday life of young people. Numerous studies claim negative effects of social media and media in general on eating disorder risk factors. Despite the availability of big data, only few studies have exploited the possibilities so far in the field of eating disorders. Methods for data extraction, computerized content analysis, and network analysis will be introduced. Strategies and methods will be exemplified for an ad-hoc dataset of 4,247 posts and 34,118 comments by 3,029 users of the proed forum on Reddit. Text analysis with latent Dirichlet allocation identified nine topics related to social support and eating disorder specific content. Social network analysis describes the overall communication patterns, and could identify community structures and most influential users. A linear network autocorrelation model was applied to estimate associations in language among network neighbors. The supplement contains R code for data extraction and analyses. This paper provides an introduction to investigating social media data, and will hopefully stimulate big data social media research in eating disorders. When applied in real-time, the methods presented in this manuscript could contribute to improving the safety of ED-related online communication. © 2018 Wiley Periodicals, Inc.
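
    The study extracts topics with latent Dirichlet allocation and then models language similarity among network neighbours. As a minimal illustration of the topic-modelling step only, the sketch below runs scikit-learn's LDA on a few placeholder documents; the documents, preprocessing and topic count are stand-ins, not the Reddit dataset or the paper's R code.

    ```python
    # Minimal latent Dirichlet allocation sketch with scikit-learn; documents and settings
    # are placeholders, not the study's dataset.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "looking for support and recovery advice",
        "calorie counting and fasting routines",
        "support from this community helps a lot",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(docs)                  # document-term matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(dtm)

    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-3:][::-1]]
        print(f"topic {k}: {top}")
    ```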

  13. Semi-quantitative and simulation analyses of effects of γ rays on determination of calibration factors of PET scanners with point-like ²²Na sources

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Tomoyuki [School of Allied Health Sciences, Kitasato University, 1-15-1, Kitasato, Minamiku, Sagamihara, Kanagawa, 252-0373 (Japan); Sato, Yasushi [National Institute of Advanced Industrial Science and Technology, 1-1-1, Umezono, Tsukuba, Ibaraki, 305-8568 (Japan); Oda, Keiichi [Tokyo Metropolitan Institute of Gerontology, 1-1, Nakamachi, Itabashi, Tokyo, 173-0022 (Japan); Wada, Yasuhiro [RIKEN Center for Molecular Imaging Science, 6-7-3, Minamimachi, Minatoshima, Chuo, Kobe, Hyogo, 650-0047 (Japan); Murayama, Hideo [National Institute of Radiological Sciences, 4-9-1, Anagawa, Inage, Chiba, 263-8555 (Japan); Yamada, Takahiro, E-mail: hasegawa@kitasato-u.ac.jp [Japan Radioisotope Association, 2-28-45, Komagome, Bunkyo-ku, Tokyo, 113-8941 (Japan)

    2011-09-21

    The uncertainty of radioactivity concentrations measured with positron emission tomography (PET) scanners ultimately depends on the uncertainty of the calibration factors. A new practical calibration scheme using point-like ²²Na radioactive sources has been developed. The purpose of this study is to theoretically investigate the effects of the associated 1.275 MeV γ rays on the calibration factors. The physical processes affecting the coincidence data were categorized in order to derive approximate semi-quantitative formulae. Assuming the design parameters of some typical commercial PET scanners, the effects of the γ rays as relative deviations in the calibration factors were evaluated by semi-quantitative formulae and a Monte Carlo simulation. The relative deviations in the calibration factors were less than 4%, depending on the details of the PET scanners. The event losses due to rejecting multiple coincidence events of scattered γ rays had the strongest effect. The results from the semi-quantitative formulae and the Monte Carlo simulation were consistent and were useful in understanding the underlying mechanisms. The deviations are considered small enough to correct on the basis of precise Monte Carlo simulation. This study thus offers an important theoretical basis for the validity of the calibration method using point-like ²²Na radioactive sources.

  14. Perceptual interaction between carrier periodicity and amplitude modulation in broadband stimuli: A comparison of the autocorrelation and modulation-filterbank model

    DEFF Research Database (Denmark)

    Stein, A.; Ewert, Stephan; Wiegrebe, L.

    2005-01-01

    Recent temporal models of pitch and amplitude modulation perception converge on a relatively realistic implementation of cochlear processing followed by a temporal analysis of periodicity. However, for modulation perception, a modulation filterbank is applied whereas for pitch perception, autocorrelation is applied. Considering the large overlap in pitch and modulation perception, this is not parsimonious. Two experiments are presented to investigate the interaction between carrier periodicity, which produces strong pitch sensations, and envelope periodicity using broadband stimuli. Results show

  15. Multivariable analysis of clinical influence factors on liver enhancement of Gd-EOB-DTPA-enhanced 3T MRI; Multivariable Analyse klinischer Einflussfaktoren auf die Signalintensitaet bei Gd-EOB-DTPA 3T-MRT der Leber

    Energy Technology Data Exchange (ETDEWEB)

    Verloh, N.; Haimerl, M.; Stroszczynski, C.; Fellner, C.; Wiggermann, P. [University Hospital Regensburg (Germany). Dept. of Radiology; Zeman, F. [University Hospital Regensburg (Germany). Center for Clinical Trials; Teufel, A. [University Hospital Regensburg (Germany). Dept. of Gastroenterology; Lang, S. [University Hospital Regensburg (Germany). Dept. of Surgery

    2015-01-15

    The purpose of this study was to identify clinical factors influencing Gd-EOB-DTPA liver uptake in patients with healthy liver parenchyma. A total of 124 patients underwent contrast-enhanced MRI with a hepatocyte-specific contrast agent at 3T. T1-weighted volume interpolated breath-hold examination (VIBE) sequences with fat suppression were acquired before and 20 minutes after contrast injection. The relative enhancement (RE) between plain and contrast-enhanced signal intensity was calculated. Simple and multiple linear regression analyses were performed to evaluate clinical factors influencing the relative enhancement. Patients were subdivided into three groups according to their relative liver enhancement (HRE, RE ≥ 100 %; MRE, 100 % > RE > 50 %; NRE, RE ≤ 50 %) and were analyzed according to the relevant risk factors. Simple regression analyses revealed patient age, transaminases (AST, ALT, GGT), liver, spleen and delta-liver volume (the difference between the volumetrically measured liver volume and the estimated liver volume based on body weight) as significant factors influencing relative enhancement. In the multiple analysis the transaminase AST, spleen and delta liver volume remained significant factors influencing relative enhancement. Delta liver volume showed a significant difference between all analyzed groups. Liver enhancement in the hepatobiliary phase depends on a variety of factors. Body weight-adapted administration of Gd-EOB-DTPA may lead to inadequate liver enhancement after 20 minutes especially when the actual liver volume differs from the expected volume.
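
    The study relates relative enhancement (RE) to clinical covariates with simple and multiple linear regression. The sketch below shows a generic multiple regression of that form with statsmodels; the patient data are simulated and the coefficients arbitrary, with covariate names only loosely following the abstract (AST, spleen volume, delta liver volume).

    ```python
    # Generic multiple linear regression sketch for relative enhancement (RE); simulated
    # data, not the study's 124 patients.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 124
    ast = rng.normal(30, 10, n)            # transaminase AST (U/l), simulated
    spleen = rng.normal(250, 60, n)        # spleen volume (ml), simulated
    delta_liver = rng.normal(0, 300, n)    # measured minus weight-estimated liver volume (ml)

    re = 100 - 0.4*ast - 0.05*spleen - 0.03*delta_liver + rng.normal(0, 10, n)

    X = sm.add_constant(np.column_stack([ast, spleen, delta_liver]))
    fit = sm.OLS(re, X).fit()
    print(fit.params)                      # intercept and one coefficient per covariate
    print(fit.rsquared)
    ```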

  16. Wave packet autocorrelation functions for quantum hard-disk and hard-sphere billiards in the high-energy, diffraction regime.

    Science.gov (United States)

    Goussev, Arseni; Dorfman, J R

    2006-07-01

    We consider the time evolution of a wave packet representing a quantum particle moving in a geometrically open billiard that consists of a number of fixed hard-disk or hard-sphere scatterers. Using the technique of multiple collision expansions we provide a first-principle analytical calculation of the time-dependent autocorrelation function for the wave packet in the high-energy diffraction regime, in which the particle's de Broglie wavelength, while being small compared to the size of the scatterers, is large enough to prevent the formation of geometric shadow over distances of the order of the particle's free flight path. The hard-disk or hard-sphere scattering system must be sufficiently dilute in order for this high-energy diffraction regime to be achievable. Apart from the overall exponential decay, the autocorrelation function exhibits a generally complicated sequence of relatively strong peaks corresponding to partial revivals of the wave packet. Both the exponential decay (or escape) rate and the revival peak structure are predominantly determined by the underlying classical dynamics. A relation between the escape rate, and the Lyapunov exponents and Kolmogorov-Sinai entropy of the counterpart classical system, previously known for hard-disk billiards, is strengthened by generalization to three spatial dimensions. The results of the quantum mechanical calculation of the time-dependent autocorrelation function agree with predictions of the semiclassical periodic orbit theory.

  17. Can spatial autocorrelation method be applied to arbitrary array shape; Kukan jiko sokanho no nin'i array eno tekiyo kanosei

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Iwamoto, K; Saito, T; Tachibana, M [Iwate University, Iwate (Japan). Faculty of Engineering

    1997-05-27

    Methods for investigating underground structures by utilizing the dispersion of surface waves contained in microtremors include the frequency-wave number analysis method (the F-K method) and the spatial autocorrelation method (the SAC method). Although the SAC method is capable of exploring structures at greater depths, it is little used because of a stringent restriction on the arrangement of seismometers during observation: they must be placed evenly on the same circumference. In order to eliminate this restriction of the SAC method, a research group at Hokuriku University has proposed an expanded spatial autocorrelation (ESAC) method. Building on the concept of the ESAC method, a method to improve phase velocity estimation was realized and tested by simulation on an array shifted in the radial direction. As a result of the discussion, it was found that the proposed improvement can be applied to places where waves arrive from many directions, such as urban areas. If the improvement method can be applied, the spatial autocorrelation function need not be uniform in the circumferential direction. In other words, the SAC method can be applied to arbitrary arrays. 1 ref., 7 figs.
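
    The quantity at the heart of the SAC/SPAC family is an azimuthally averaged, normalized cross-spectrum (coherency) between a central sensor and sensors on a surrounding ring. The sketch below computes that average from segmented FFTs of synthetic traces; the segment count, array layout and noise records are placeholders, and the snippet is not the ESAC variant discussed above (a velocity-inversion sketch appears under record 15 below).

    ```python
    import numpy as np

    def spac_coefficient(center, ring, fs, nseg=16):
        """Azimuth- and segment-averaged SPAC coefficient Re<S_0k> / sqrt(<S_00><S_kk>)."""
        seg_len = center.size // nseg
        freqs = np.fft.rfftfreq(seg_len, d=1.0/fs)
        coh = []
        for trace in ring:                                   # one trace per ring station
            s00 = skk = s0k = 0.0
            for i in range(nseg):                            # average spectra over segments
                c = np.fft.rfft(center[i*seg_len:(i+1)*seg_len])
                s = np.fft.rfft(trace[i*seg_len:(i+1)*seg_len])
                s00 = s00 + np.abs(c)**2
                skk = skk + np.abs(s)**2
                s0k = s0k + c * np.conj(s)
            coh.append(np.real(s0k) / np.sqrt(s00 * skk))
        return freqs, np.mean(coh, axis=0)                   # average over azimuth

    fs = 100.0
    rng = np.random.default_rng(0)
    center = rng.standard_normal(2**14)
    ring = rng.standard_normal((6, 2**14))                   # six stations on one circle
    freqs, rho = spac_coefficient(center, ring, fs)
    print(rho[:5])                                           # ~0 for uncorrelated noise
    ```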

  18. The regulatory mechanism of fruit ripening revealed by analyses of direct targets of the tomato MADS-box transcription factor RIPENING INHIBITOR

    Science.gov (United States)

    Fujisawa, Masaki; Ito, Yasuhiro

    2013-01-01

    The developmental process of ripening is unique to fleshy fruits and a key factor in fruit quality. The tomato (Solanum lycopersicum) MADS-box transcription factor RIPENING INHIBITOR (RIN), one of the earliest-acting ripening regulators, is required for broad aspects of ripening, including ethylene-dependent and -independent pathways. However, our knowledge of direct RIN target genes has been limited, considering the broad effects of RIN on ripening. In a recent work published in The Plant Cell, we identified 241 direct RIN target genes by chromatin immunoprecipitation coupled with DNA microarray (ChIP-chip) and transcriptome analysis. Functional classification of the targets revealed that RIN participates in the regulation of many biological processes including well-known ripening processes such as climacteric ethylene production and lycopene accumulation. In addition, we found that ethylene is required for the full expression of RIN and several RIN-targeting transcription factor genes at the ripening stage. Here, based on our recently published findings and additional data, we discuss the ripening processes regulated by RIN and the interplay between RIN and ethylene. PMID:23518588

  19. [The approaches to factors which cause medication error--from the analyses of many near-miss cases related to intravenous medication which nurses experienced].

    Science.gov (United States)

    Kawamura, H

    2001-03-01

    Given the complexity of the intravenous medication process, systematic thinking is essential to reduce medication errors. Two thousand eight hundred cases of 'Hiyari-Hatto' were analyzed. Eight important factors which cause intravenous medication error were clarified as a result. In the following I summarize the systematic approach for each factor. 1. Failed communication of information: illegible handwritten orders, and inaccurate verbal orders and copying cause medication error. Rules must be established to prevent miscommunication. 2. Error-prone design of the hardware: Look-alike packaging and labeling of drugs and the poor design of infusion pumps cause errors. The human-hardware interface should be improved by error-resistant design by manufacturers. 3. Patient names similar to simultaneously operating surgical procedures and interventions: This factor causes patient misidentification. Automated identification devices should be introduced into health care settings. 4. Interruption in the middle of tasks: The efficient assignment of medical work and business work should be made. 5. Inaccurate mixing procedure and insufficient mixing space: Mixing procedures must be standardized and the layout of the working space must be examined. 6. Time pressure: Mismatch between workload and manpower should be improved by reconsidering the work to be done. 7. Lack of information about high alert medications: The pharmacist should play a greater role in the medication process overall. 8. Poor knowledge and skill of recent graduates: Training methods and tools to prevent medication errors must be developed.

  20. Three Dimensional Parametric Analyses of Stress Concentration Factor and Its Mitigation in Isotropic and Orthotropic Plate with Central Circular Hole Under Axial In-Plane Loading

    Science.gov (United States)

    Nagpal, Shubhrata; Jain, Nitin Kumar; Sanyal, Shubhashis

    2016-01-01

    The problem of finding the stress concentration factor of a loaded rectangular plate has offered considerable analytical difficulty. The present work focused on understanding the behavior of isotropic and orthotropic plates subjected to static in-plane loading using the finite element method. The complete plate model configuration has been analyzed using the finite-element software ANSYS. In the present work two parameters, the plate thickness-to-width ratio (T/A) and the hole diameter-to-width ratio (D/A), have been varied for analysis of the stress concentration factor (SCF) and its mitigation. Plates of five different materials have been considered for the complete analysis to find out the sensitivity of the stress concentration factor. The D/A ratio varied from 0.1 to 0.7 for analysis of the SCF and from 0.1 to 0.5 for analyzing its mitigation. T/A ratios of 0.01, 0.05 and 0.1 are considered for all cases. The results are presented in graphical form and discussed. The mitigation in SCF reported is very encouraging. The SCF is more sensitive to the D/A ratio than to T/A.
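
    For the 2D limiting case of a finite-width plate with a central circular hole under axial tension, a standard closed-form check on finite-element SCF values is Heywood's approximation for the net-section factor, K_tn ≈ 2 + (1 − d/W)³, with the gross-section value K_tg = K_tn/(1 − d/W). The sketch below evaluates it over the D/A range studied; this is a textbook hand formula offered for comparison only, not the paper's 3D ANSYS model, which additionally varies thickness and material.

    ```python
    # Heywood's approximation for a central circular hole in a finite-width plate under
    # uniaxial tension (2D, isotropic); a hand check, not the paper's 3D FE analysis.

    def scf_heywood(d_over_w):
        """Return (net-section K_tn, gross-section K_tg) for hole diameter / plate width d/W."""
        k_tn = 2.0 + (1.0 - d_over_w) ** 3
        k_tg = k_tn / (1.0 - d_over_w)     # peak stress referred to the gross-section stress
        return k_tn, k_tg

    for ratio in (0.1, 0.3, 0.5, 0.7):     # the D/A range explored in the paper
        print(ratio, scf_heywood(ratio))
    # As d/W -> 0 this recovers the Kirsch value K_t = 3 for an infinite plate.
    ```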

  1. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    Science.gov (United States)

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real-time using radio-frequency identification, barcodes, and hand-held point-of-care devices. MAE alert logs recorded during the full year of 2012 were analyzed. We evaluated risk factors for MAE alerts including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices by logistic regression analysis. A total of 2 874 539 medication dose records from 30 232 patients (882.6 patient-years) were included in 2012. We identified 35 082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard time [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency order (OR 1.527, 95%CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95%CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.
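
    The study reports odds ratios with 95% confidence intervals from logistic regression. The sketch below shows the generic form of that analysis with statsmodels on simulated data; the predictor names only loosely follow the abstract and the effect sizes are invented.

    ```python
    # Logistic regression sketch producing odds ratios and 95% CIs; simulated data, not
    # the hospital's medication administration records.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5000
    non_standard_time = rng.integers(0, 2, n)
    emergency_order = rng.integers(0, 2, n)
    n_doses = rng.poisson(90, n)

    logit_p = -4.0 + 0.44*non_standard_time + 0.42*emergency_order - 0.007*n_doses
    alert = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    X = sm.add_constant(pd.DataFrame({
        "non_standard_time": non_standard_time,
        "emergency_order": emergency_order,
        "n_doses": n_doses,
    }))
    fit = sm.Logit(alert, X).fit(disp=0)
    odds_ratios = np.exp(fit.params)
    ci = np.exp(fit.conf_int())            # 95% CI on the odds-ratio scale
    print(pd.concat([odds_ratios, ci], axis=1))
    ```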

  2. Energy management as a factor of success. International comparative analysis of energy management systems standards; Energiemanagement als Erfolgsfaktor. International vergleichende Analyse von Energiemanagementnormen

    Energy Technology Data Exchange (ETDEWEB)

    Kahlenborn, Walter; Knopf, Jutta; Richter, Ina [adelphi research, Berlin (Germany)

    2010-11-15

    This report outlines the current state of standardised energy management systems (EnMSs) worldwide whose aim is to promote energy efficiency in the industrial sector. The core intention of the study is to identify the potential of EnMSs for German energy efficiency policy. The study examines the experiences of countries that can be defined as front runners in this context, such as the Netherlands, Denmark, Sweden, Ireland and the USA. Further input was taken from recently completed, and still ongoing, development processes of national standards. Data were generated from an intensive literature review as well as interviews with experts. Central to the analysis are questions of characteristics as well as the effectiveness of national energy management standards. In addition, political frameworks (i.e. voluntary agreements), financial tools (i.e. subsidies) and other measures of assistance (i.e. capacity building) supporting the implementation of an EnMS were analysed. The study concludes with a comparison of findings from the country-by-country analysis and provides recommendations for the effective implementation of EnMS in Germany. As part of the entire project adelphi produced a manual on the use of EN 16001 which has been published by BMU/UBA. (orig.)

  3. Analysing risk factors of co-occurrence of schistosomiasis haematobium and hookworm using bivariate regression models: Case study of Chikwawa, Malawi

    Directory of Open Access Journals (Sweden)

    Bruce B.W. Phiri

    2016-06-01

    Full Text Available Schistosomiasis and soil-transmitted helminth (STH) infections constitute a major public health problem in many parts of sub-Saharan Africa. In areas where prevalence of geo-helminths and schistosomes is high, co-infection with multiple parasite species is common, resulting in disproportionately elevated burden compared with single infections. Determining risk factors of co-infection intensity is important for better design of targeted interventions. In this paper, we examined risk factors of hookworm and S. haematobium co-infection intensity in Chikwawa district, southern Malawi in 2005, using bivariate count models. Results show that hookworm and S. haematobium infections were highly localised, with a small proportion of individuals harbouring more parasites, especially among school-aged children. The risk of co-intensity with both hookworm and S. haematobium was high for all ages, although this diminished with increasing age, and increased with fishing (hookworm: coef. = 12.29; 95% CI = 11.50–13.09; S. haematobium: coef. = 0.040; 95% CI = 0.0037, 3.832). Both infections were abundant in those with primary education (hookworm: coef. = 0.072; 95% CI = 0.056, 0.401; S. haematobium: coef. = 0.286; 95% CI = 0.034, 0.538). However, much lower risk was observed for those who were farmers (hookworm: coef. = −0.349, 95% CI = −0.547, −0.150; S. haematobium: coef. = −0.239, 95% CI = −0.406, −0.072). In conclusion, our findings suggest that efforts to control helminth infections should be co-integrated and health promotion campaigns should be aimed at school-going children and adults who are in constant contact with water.

  4. [Regulatory factors for images of the elderly among elementary school students assessed through secular trend analyses by frequency of inter-exchange with "REPRINTS" senior volunteers].

    Science.gov (United States)

    Fujiwara, Yoshinori; Watanabe, Naoki; Nishi, Mariko; Lee, Sangyoon; Ohba, Hiromi; Yoshida, Hiroto; Sakuma, Naoko; Fukaya, Taro; Kousa, Youko; Inoue, Kazuko; Amano, Hidenori; Uchida, Hayato; Kakuno, Fumihiko; Shinkai, Shoji

    2007-09-01

    We have launched a new intervention study, called "REPRINTS" (Research of productivity by intergenerational sympathy) in which senior volunteers aged 60 years and over engage in reading picture books to school children, regularly visiting public elementary schools since 2004. The purpose of this study was to clarify characteristics of images of older people held by elementary school children and factors associated with such images, as well as to examine changes in images through intervention by "REPRINTS" senior volunteers (volunteers) for the initial one year period. Four to six volunteers as a group visited A elementary school in a suburb Kawasaki city (470 students) twice a week to read picture books. The baseline survey was conducted one month after launching the volunteer activity. First and second follow-up surveys were conducted at 6 month intervals after the baseline survey. Grade, gender, short version of emotional-like image scale of older adults assessed by the SD (Semantic Differential) method (6 items in the subscale for "evaluation" and 4 items in the subscale for "potency/activity"), experience of living with grandparents, experience of interchange with older people, frequency of interchange with volunteers and the social desirability scale for children. Related variables for a higher score in the subscale for "evaluation" included lower grade and abundant experience of interchange with older people such as grandparents. Those for "potency/ activity" included lower grade, male gender, and a higher social desirability scale for children in the multiple logistic regression model. Students were divided into two groups in terms of frequency of interchange with volunteers (low and high-frequency groups) through three surveys. In the subscale for "evaluation", the general linear model demonstrated a significant interaction between the group and number of surveys adjusted for confounding factors. Although emotional images of older people significantly

  5. Analysing the spatial patterns of livestock anthrax in Kazakhstan in relation to environmental factors: a comparison of local (Gi*) and morphology cluster statistics

    Directory of Open Access Journals (Sweden)

    Ian T. Kracalik

    2012-11-01

    Full Text Available We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as the total number of clusters in large ruminants for AMOEBA (n = 149) and for small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation.
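
    The comparison relies on the local Getis-Ord Gi* statistic. As a reference for what Gi* computes, the sketch below implements its standardized (z-score) form as given by Ord and Getis (1995) for binary weights that include the focal unit itself; the toy attribute values and contiguity structure are invented, and no multiple-testing correction is applied.

    ```python
    # Local Getis-Ord Gi* (standardized form, Ord & Getis 1995) on toy data; not the
    # Kazakhstan outbreak dataset or its Rook contiguity matrices.
    import numpy as np

    def getis_ord_gi_star(x, w):
        """x: attribute values (n,); w: binary weights (n, n) with w[i, i] = 1 (Gi* includes self)."""
        n = x.size
        x_bar = x.mean()
        s = np.sqrt((x**2).mean() - x_bar**2)
        z = np.empty(n)
        for i in range(n):
            wi = w[i]
            num = wi @ x - x_bar * wi.sum()
            den = s * np.sqrt((n * (wi**2).sum() - wi.sum()**2) / (n - 1))
            z[i] = num / den
        return z                           # large positive values flag local hot spots

    # Toy example: five areas along a line, first-order neighbours, one high-value pair
    x = np.array([1.0, 1.0, 8.0, 9.0, 1.0])
    w = np.eye(5)
    for i in range(4):
        w[i, i+1] = w[i+1, i] = 1.0
    print(getis_ord_gi_star(x, w))
    ```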

  6. Poincaré plot analysis of autocorrelation function of RR intervals in patients with acute myocardial infarction.

    Science.gov (United States)

    Chuang, Shin-Shin; Wu, Kung-Tai; Lin, Chen-Yang; Lee, Steven; Chen, Gau-Yang; Kuo, Cheng-Deng

    2014-08-01

    The Poincaré plot of RR intervals (RRI) is obtained by plotting RRIn+1 against RRIn. The Pearson correlation coefficient (ρRRI), slope (SRRI), Y-intercept (YRRI), standard deviation of instantaneous beat-to-beat RRI variability (SD1RR), and standard deviation of continuous long-term RRI variability (SD2RR) can be defined to characterize the plot. Similarly, the Poincaré plot of autocorrelation function (ACF) of RRI can be obtained by plotting ACFk+1 against ACFk. The corresponding Pearson correlation coefficient (ρACF), slope (SACF), Y-intercept (YACF), SD1ACF, and SD2ACF can be defined similarly to characterize the plot. By comparing the indices of Poincaré plots of RRI and ACF between patients with acute myocardial infarction (AMI) and patients with patent coronary artery (PCA), we found that the ρACF and SACF were significantly larger, whereas the RMSSDACF/SDACF and SD1ACF/SD2ACF were significantly smaller in AMI patients. The ρACF and SACF correlated significantly and negatively with normalized high-frequency power (nHFP), and significantly and positively with normalized very low-frequency power (nVLFP) of heart rate variability in both groups of patients. On the contrary, the RMSSDACF/SDACF and SD1ACF/SD2ACF correlated significantly and positively with nHFP, and significantly and negatively with nVLFP and low-/high-frequency power ratio (LHR) in both groups of patients. We concluded that the ρACF, SACF, RMSSDACF/SDACF, and SD1ACF/SD2ACF, among many other indices of ACF Poincaré plot, can be used to differentiate between patients with AMI and patients with PCA, and that the increase in ρACF and SACF and the decrease in RMSSDACF/SDACF and SD1ACF/SD2ACF suggest an increased sympathetic and decreased vagal modulations in both groups of patients.
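
    The descriptors named in the abstract (Pearson correlation, slope, Y-intercept, SD1 and SD2) are the standard lag-1 Poincaré-plot quantities, and the same computation can be applied either to the RR intervals themselves or, as the authors do, to their autocorrelation function. The sketch below uses the usual definitions on a simulated RR series; it is an illustration, not the study's processing pipeline.

    ```python
    # Standard Poincaré-plot descriptors of a beat-to-beat series; the RR interval series
    # below is simulated.
    import numpy as np

    def poincare_indices(x):
        """Pearson r, regression slope, Y-intercept, SD1 and SD2 of the lag-1 Poincaré plot."""
        a, b = x[:-1], x[1:]                            # (x_n, x_{n+1}) pairs
        r = np.corrcoef(a, b)[0, 1]
        slope, intercept = np.polyfit(a, b, 1)
        sd1 = np.std((b - a) / np.sqrt(2), ddof=1)      # short-term (beat-to-beat) variability
        sd2 = np.std((b + a) / np.sqrt(2), ddof=1)      # long-term variability
        return r, slope, intercept, sd1, sd2

    rng = np.random.default_rng(3)
    rri = 800 + np.cumsum(rng.normal(0, 5, 500)) * 0.1 + rng.normal(0, 20, 500)  # ms
    print(poincare_indices(rri))
    ```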

  7. Metabolic and molecular analyses of white mutant Vaccinium berries show down-regulation of MYBPA1-type R2R3 MYB regulatory factor.

    Science.gov (United States)

    Primetta, Anja K; Karppinen, Katja; Riihinen, Kaisu R; Jaakola, Laura

    2015-09-01

    The MYBPA1-type R2R3 MYB transcription factor is down-regulated in white mutant berries of Vaccinium uliginosum that are deficient in anthocyanins but not proanthocyanidins, suggesting a role in the regulation of anthocyanin biosynthesis. Berries of the genus Vaccinium are among the best natural sources of flavonoids. In this study, the expression of structural and regulatory flavonoid biosynthetic genes and the accumulation of flavonoids in white mutant and blue-colored wild-type bog bilberry (V. uliginosum) fruits were measured at different stages of berry development. In contrast to high contents of anthocyanins in ripe blue-colored berries, only traces were detected by HPLC-ESI-MS in ripe white mutant berries. However, a similar profile and high levels of flavonol glycosides and proanthocyanidins were quantified in both ripe white and ripe wild-type berries. Analysis with qRT-PCR showed strong down-regulation of the structural genes chalcone synthase (VuCHS), dihydroflavonol 4-reductase (VuDFR) and anthocyanidin synthase (VuANS) as well as the MYBPA1-type transcription factor VuMYBPA1 in white berries during ripening compared to wild-type berries. The profiles of transcript accumulation of chalcone isomerase (VuCHI), anthocyanidin reductase (VuANR), leucoanthocyanidin reductase (VuLAR) and flavonoid 3'5' hydroxylase (VuF3'5'H) were more similar between the white and the wild-type berries during fruit development, while expression of UDP-glucose: flavonoid 3-O-glucosyltransferase (VuUFGT) showed a similar trend but a fourfold lower level in the white mutant. VuMYBPA1, the R2R3 MYB family member, is a homologue of VmMYB2 of V. myrtillus and VcMYBPA1 of V. corymbosum and belongs to the MYBPA1-type MYB family, whose members have been shown in some species to be related to proanthocyanidin biosynthesis in fruits. Our results, combined with earlier data on the role of VmMYB2 in white mutant berries of V. myrtillus, suggest that the regulation of anthocyanin biosynthesis in Vaccinium species could differ

  8. Molecular modeling of the human eukaryotic translation initiation factor 5A (eIF5A) based on spectroscopic and computational analyses

    International Nuclear Information System (INIS)

    Costa-Neto, Claudio M.; Parreiras-e-Silva, Lucas T.; Ruller, Roberto; Oliveira, Eduardo B.; Miranda, Antonio; Oliveira, Laerte; Ward, Richard J.

    2006-01-01

    The eukaryotic translation initiation factor 5A (eIF5A) is a protein ubiquitously present in archaea and eukarya, which undergoes a unique two-step post-translational modification called hypusination. Several studies have shown that hypusination is essential for a variety of functional roles for eIF5A, including cell proliferation and synthesis of proteins involved in cell cycle control. Up to now neither a totally selective inhibitor of hypusination nor an inhibitor capable of directly binding to eIF5A has been reported in the literature. The discovery of such an inhibitor might be achieved by computer-aided drug design based on the 3D structure of the human eIF5A. In this study, we present a molecular model for the human eIF5A protein based on the crystal structure of the eIF5A from Leishmania brasiliensis, and compare the modeled conformation of the loop bearing the hypusination site with circular dichroism data obtained with a synthetic peptide of this loop. Furthermore, analysis of amino acid variability between different human eIF5A isoforms revealed peculiar structural characteristics that are of functional relevance

  9. Analysing the Structural Effect of Point Mutations of Cytotoxic Necrotizing Factor 1 (CNF1 on Lu/BCAM Adhesion Glycoprotein Association

    Directory of Open Access Journals (Sweden)

    Alexandre G. de Brevern

    2018-03-01

    Full Text Available Cytotoxic Necrotizing Factor 1 (CNF1 was identified in 1983 as a protein toxin produced by certain pathogenic strains of Escherichia coli. Since then, numerous studies have investigated its particularities. For instance, it is associated with the single chain AB-toxin family, and can be divided into different functional and structural domains, e.g., catalytic and transmembrane domain and interaction sites. A few years ago, the identification of the Lutheran (Lu adhesion glycoprotein/basal cell adhesion molecule (BCAM as a cellular receptor for CNF1 provided new insights into the adhesion process of CNF1. Very recently, the Ig-like domain 2 of Lu/BCAM was confirmed as the main interaction site using protein-protein interaction and competition studies with various different mutants. Here, I present in silico approaches that precisely explain the impact of these mutations, leading to a better explanation of these experimental studies. These results can be used in the development of future antitoxin strategies.

  10. Possible role of diet in cancer: systematic review and multiple meta-analyses of dietary patterns, lifestyle factors, and cancer risk.

    Science.gov (United States)

    Grosso, Giuseppe; Bella, Francesca; Godos, Justyna; Sciacca, Salvatore; Del Rio, Daniele; Ray, Sumantra; Galvano, Fabio; Giovannucci, Edward L

    2017-06-01

    Evidence of an association between dietary patterns derived a posteriori and risk of cancer has not been reviewed comprehensively. The aim of this review was to investigate the relation between a posteriori-derived dietary patterns, grouped as healthy or unhealthy, and cancer risk. The relation between cancer risk and background characteristics associated with adherence to dietary patterns was also examined. PubMed and Embase electronic databases were searched. A total of 93 studies including over 85 000 cases, 100 000 controls, and 2 000 000 exposed individuals were selected. Data were extracted from each identified study using a standardized form by two independent authors. The most convincing evidence (significant results from prospective cohort studies) supported an association between healthy dietary patterns and decreased risk of colon and breast cancer, especially in postmenopausal, hormone receptor-negative women, and an association between unhealthy dietary patterns and increased risk of colon cancer. Limited evidence of a relation between an unhealthy dietary pattern and risk of upper aerodigestive tract, pancreatic, ovarian, endometrial, and prostatic cancers relied only on case-control studies. Unhealthy dietary patterns were associated with higher body mass index and energy intake, while healthy patterns were associated with higher education, physical activity, and less smoking. Potential differences across geographical regions require further evaluation. The results suggest a potential role of diet in certain cancers, but the evidence is not conclusive and may be driven or mediated by lifestyle factors. © The Author(s) 2017. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Comparison of the equivalent width, the autocorrelation width, and the variance as figures of merit for XPS narrow scans

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Bhupinder [Department of Chemistry and Biochemistry, C-100 BNSN, Brigham Young University, Provo, UT 84602 (United States); Velázquez, Daniel; Terry, Jeff [Department of Physics, Illinois Institute of Technology, Chicago, IL 60616 (United States); Linford, Matthew R., E-mail: mrlinford@chem.byu.edu [Department of Chemistry and Biochemistry, C-100 BNSN, Brigham Young University, Provo, UT 84602 (United States)

    2014-12-15

    Highlights: • We apply the equivalent and autocorrelation widths and variance to XPS narrow scans. • This approach is complementary to traditional peak fitting methods. • It is bias free and responsive to subtle chemical changes in spectra. • It has the potential for machine interpretation of spectra and quality control. • It has the potential for analysis of complex spectra and tracking charging/artifacts. - Abstract: X-ray photoelectron spectroscopy (XPS) is widely used in surface and materials laboratories around the world. It is a near surface technique, providing detailed chemical information about samples in the form of survey and narrow scans. To extract the maximum amount of information about materials it is often necessary to peak fit XPS narrow scans. And while indispensable to XPS data analysis, even experienced practitioners can struggle with their peak fitting. In our previous publication, we introduced the equivalent width (EW_XPS) as both a possible machine automated method, one that requires less expert judgment for characterizing XPS narrow scans, and as an approach that may be well suited for the analysis of complex spectra. The EW_XPS figure of merit was applied to four different data sets. However, as previously noted, other width functions are also regularly employed for analyzing functions. Here we evaluate two other width functions for XPS narrow scan analysis: the autocorrelation width (AW_XPS) and the variance (σ²_XPS). These widths were applied to the same four sets of spectra studied before: (a) four C 1s narrow scans of ozone-treated carbon nanotubes (CNTs) (EW_XPS: ∼2.11–2.16 eV, AW_XPS: ∼3.9–4.1 eV, σ²_XPS: ∼5.0–5.2 eV, and a modified form of σ²_XPS, denoted σ²*_XPS: ∼6.3–6.8 eV), (b) silicon wafers with different oxide thicknesses (EW_XPS: ∼1.5–2.9 eV, AW_XPS: ∼2.28–4.9, and σ²_XPS: ∼0.7–4.9 eV), (iii
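
    The record names three width figures of merit but does not reproduce their formulas. The sketch below uses textbook definitions on a background-subtracted, synthetic narrow scan: equivalent width = area / peak height, autocorrelation width = area of the autocorrelation / its peak value, and variance = intensity-weighted second central moment of binding energy. The exact conventions of the cited papers (including the modified variance σ²*_XPS) are not given in the record, so these definitions are assumptions.

    ```python
    # Width figures of merit for a background-subtracted XPS narrow scan, using textbook
    # definitions; the Gaussian "spectrum" is synthetic and the conventions may differ in
    # detail from the cited papers.
    import numpy as np

    def spectrum_widths(energy, intensity):
        de = energy[1] - energy[0]                        # uniform energy grid assumed (eV)
        area = intensity.sum() * de
        ew = area / intensity.max()                       # equivalent width (eV)
        ac = np.correlate(intensity, intensity, mode="full")
        aw = ac.sum() * de / ac.max()                     # autocorrelation width (eV)
        centroid = (energy * intensity).sum() * de / area
        var = ((energy - centroid) ** 2 * intensity).sum() * de / area   # variance (eV^2)
        return ew, aw, var

    e = np.linspace(280, 295, 1501)
    peak = np.exp(-0.5 * ((e - 285.0) / 0.8) ** 2)        # synthetic C 1s-like Gaussian
    print(spectrum_widths(e, peak))
    ```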

  12. Recommended number of strides for automatic assessment of gait symmetry and regularity in above-knee amputees by means of accelerometry and autocorrelation analysis

    Directory of Open Access Journals (Sweden)

    Tura Andrea

    2012-02-01

    Full Text Available Abstract Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees.
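
    The step (Ad1) and stride (Ad2) regularity indices referred to above are commonly read off the unbiased, normalized autocorrelation of the trunk acceleration signal at the step and stride lags. The sketch below shows that generic computation on a synthetic "walking" signal; the signal, the sampling rate and the simple lag selection are illustrative assumptions, not the paper's algorithm or patient data.

    ```python
    # Step/stride regularity from the unbiased, normalized autocorrelation of an
    # acceleration signal; synthetic signal, deliberately simple lag selection.
    import numpy as np

    def unbiased_autocorrelation(x, max_lag):
        x = x - x.mean()
        n = x.size
        ac = np.array([np.sum(x[:n-k] * x[k:]) / (n - k) for k in range(max_lag)])
        return ac / ac[0]                                 # normalized so ac[0] = 1

    fs = 100.0                                            # sampling rate (Hz), illustrative
    t = np.arange(0, 30, 1/fs)
    step_freq = 1.8                                       # steps per second (synthetic walker)
    acc = (np.sin(2*np.pi*step_freq*t)                    # step-frequency component
           + 0.4*np.sin(2*np.pi*(step_freq/2)*t)          # stride (left/right asymmetry) component
           + 0.2*np.random.default_rng(4).standard_normal(t.size))

    ac = unbiased_autocorrelation(acc, max_lag=int(2.5*fs))
    step_lag = int(round(fs / step_freq))                 # samples per step
    ad1 = ac[step_lag]                                    # step regularity
    ad2 = ac[2*step_lag]                                  # stride regularity
    print(ad1, ad2)
    ```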

  13. Streams of events and performance of queuing systems: The basic anatomy of arrival/departure processes, when the focus is set on autocorrelation

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2004-01-01

    significant nature or (2) aggregate system behaviour is in general very different from just the summing-up (even for finite sets of micro-behavioural patterns) and/or (3) it is simply a wrong assumption that in many cases is chosen by mere convention or plain convenience. It is evident that before choosing...... method or some autocorrelation extended descriptive sampling method, can then easily be applied. The results from the Livny, Melamed and Tsiolis (1993) study as well as the results from this work both indicates that system performance measures as for instance average waiting time or average time...

  14. Autocorrelation and cross-correlation between hCGβ and PAPP-A in repeated sampling during first trimester of pregnancy

    DEFF Research Database (Denmark)

    Nørgaard, Pernille; Wright, Dave; Ball, Susan

    2013-01-01

    Theoretically, repeated sampling of free β-human chorionic gonadotropin (hCGβ) and pregnancy associated plasma protein-A (PAPP-A) in the first trimester of pregnancy might improve performance of risk assessment of trisomy 21 (T21). To assess the performance of a screening test involving repeated...... measures of biochemical markers, correlations between markers must be estimated. The aims of this study were to calculate the autocorrelation and cross-correlation between hCGβ and PAPP-A in the first trimester of pregnancy and to investigate the possible impact of gestational age at the first sample...

  15. Application of the Spatial Auto-Correlation Method for Shear-Wave Velocity Studies Using Ambient Noise

    Science.gov (United States)

    Asten, M. W.; Hayashi, K.

    2018-05-01

    Ambient seismic noise or microtremor observations used in spatial auto-correlation (SPAC) array methods consist of a wide frequency range of surface waves from about 0.1 Hz to several tens of Hz. The wavelengths (and hence depth sensitivity) of such surface waves allow determination of the site S-wave velocity model from a depth of 1 or 2 m down to a maximum of several kilometres; it is a passive seismic method using only ambient noise as the energy source. Application usually uses a 2D seismic array with a small number of seismometers (generally between 2 and 15) to estimate the phase velocity dispersion curve and hence the S-wave velocity depth profile for the site. A large number of methods have been proposed and used to estimate the dispersion curve; SPAC is one of the oldest and most commonly used methods due to its versatility and minimal instrumentation requirements. We show that direct fitting of observed and model SPAC spectra generally yields a wider bandwidth of usable data than the more common approach of inversion after the intermediate step of constructing an observed dispersion curve. Current case histories demonstrate the method with a range of array types including two-station arrays, L-shaped multi-station arrays, triangular and circular arrays. Arrays from a few metres to several kilometres in diameter have been successfully deployed in sites ranging from downtown urban settings to rural and remote desert sites. A fundamental requirement of the method is the ability to average wave propagation over a range of azimuths; this can be achieved with either or both of the wave sources being widely distributed in azimuth, and the use of a 2D array sampling the wave field over a range of azimuths. Several variants of the method extend its applicability to under-sampled data from sparse arrays, the complexity of multiple-mode propagation of energy, and the problem of precise estimation where array geometry departs from an
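
    A minimal sketch of the core SPAC relation, assuming vertical-component noise records from a centre station and a ring of stations at radius r: the azimuthally averaged coherency rho(f) is compared with J0(2*pi*f*r/c) to recover a phase velocity on the first branch of the Bessel function. Function names, segment length and the inversion bounds are illustrative assumptions, not the workflow of the paper.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.signal import csd, welch
      from scipy.special import j0

      def spac_coefficients(center, ring, fs, nperseg=4096):
          # Azimuthally averaged real coherency between the centre station and
          # every station on the ring: the SPAC coefficient rho(f).
          f, p_cc = welch(center, fs, nperseg=nperseg)
          rho = np.zeros_like(f)
          for y in ring:
              _, p_cy = csd(center, y, fs, nperseg=nperseg)
              _, p_yy = welch(y, fs, nperseg=nperseg)
              rho += np.real(p_cy) / np.sqrt(p_cc * p_yy)
          return f, rho / len(ring)

      def phase_velocity(f, rho, r, c_min=100.0, c_max=3000.0):
          # Solve rho(f) = J0(2*pi*f*r/c) for c, frequency by frequency,
          # keeping only solutions on the first (monotonic) branch.
          c = np.full_like(f, np.nan)
          for i, (fi, ri) in enumerate(zip(f, rho)):
              if fi <= 0.0:
                  continue
              g = lambda cc: j0(2.0 * np.pi * fi * r / cc) - ri
              if g(c_min) * g(c_max) < 0.0:
                  c[i] = brentq(g, c_min, c_max)
          return c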

  16. Mitigating high ‘equity capital’ risk exposure to ‘small cap’ sector in India: analysing ‘key factors of success’ for ‘Institutional Investors’ whilst Investing in small cap sector in India

    OpenAIRE

    Narang, Anish

    2014-01-01

    This paper deals with the subject of mitigating high ‘Equity Capital’ Risk Exposure to ‘Small Cap’ Sector in India. Institutional investors in India are prone to be risk averse when it comes to investing in the small cap sector in India as they find the companies risky and volatile. This paper will help analyse ‘Key Factors of success’ for ‘Institutional Investors’ whilst investing in Small Cap sector in India as some of these Indian small cap stocks offer handsome returns despite economic do...

  17. Human reliability and human factors in complex organizations: epistemological and critical analysis - practical avenues to action; Fiabilite humaine et facteurs humains dans les organisations complexes: analyse epistemologique et critique voies pratiques pour l'action

    Energy Technology Data Exchange (ETDEWEB)

    Llory, A

    1991-08-01

    This article starts out with comments on the existence of persistent problems inherent to probabilistic safety assessments (PSA). It first surveys existing American documents on the subject which make a certain number of criticisms of human reliability analyses, e.g. limitations due to the scant quantities of data available, lack of a basic theoretical model, non-reproducibility of analyses, etc. The article therefore examines and criticizes the epistemological bases of these analyses. One of the fundamental points stressed is that human reliability analyses do not take account of all the special features of the work situation which result in human error (so as to draw up statistical data from a sufficiently representative number of cases), and consequently lose all notion of the 'relationships' between human errors and the different aspects of the working environment. The other key points of criticism concern the collective nature of work, which is not taken into account, and the frequent confusion between what operatives actually do and their formally prescribed job-tasks. The article proposes aspects to be given thought in order to overcome these difficulties, e.g. quantitative assessment of the social environment within a company, non-linear model for assessment of the accident rate, analysis of stress levels in staff on off-shore platforms. The methodological approaches used in these three studies are of the same type, and could be transposed to human-reliability problems. The article then goes into greater depth on thinking aimed at developing a 'positive' view of the human factor (and not just a 'negative' one, i.e. centred on human errors and organizational malfunctions), applying investigation methods developed in the occupational human sciences (occupational psychodynamics, ergonomics, occupational sociology). The importance of operatives working as actors of a team is stressed.

  18. Expressional and functional analyses of transcription factors activated by BMP-4 signaling in early Xenopus embryo; BMP-4 shigunaru dentatsu kiko to sono hyoteki kakunai tensha inshi ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Maeno, Mitsugu [Niigata University, Niigata (Japan). Faculty of Science

    1998-12-16

    The expression and physiological function of two transcription factors, GATA-2 and Xmsx-1, in amphibian embryos have been analyzed. The expression of these mRNAs in embryonic cells was firmly regulated by BMP-4 signaling, which plays a central role in the formation of ventral tissues. Microinjection studies of GATA-2 RNA into embryonic cells suggested that this factor functions in two adjacent germ layers, mesoderm and ectoderm, to participate in blood cell formation in the ventral area of the embryo. Embryos injected with Xmsx-1 RNA, but not with GATA-2, in dorsal blastomeres exhibited a ventralized phenotype, with microcephaly and a swollen abdomen. Thus, Xmsx-1 is a ventralizing agent. However, on the basis of molecular marker analyses, Xmsx-1 did not promote erythropoietic differentiation, but promoted muscle tissue formation. It has been concluded that Xmsx-1 is a target transcription factor of the BMP-4 signaling pathway, but possesses a distinct activity on dorso-ventral patterning of mesodermal tissues. (author)

  19. Synchronous scattering and diffraction from gold nanotextured surfaces with structure factors

    Science.gov (United States)

    Gu, Min-Jhong; Lee, Ming-Tsang; Huang, Chien-Hsun; Wu, Chi-Chun; Chen, Yu-Bin

    2018-05-01

    Synchronous scattering and diffraction were demonstrated using reflectance from gold nanotextured surfaces at oblique (θi = 15° and 60°) incidence at wavelength λ = 405 nm. Two samples with unique auto-correlation functions were cost-effectively fabricated. Multiple structure factors of their profiles were confirmed with Fourier expansions. The bidirectional reflectance distribution function (BRDF) from these samples provided experimental proof. On the other hand, the standard deviation of height and the unique auto-correlation function of each sample were used to generate surfaces numerically. Comparing their BRDF with those of totally random rough surfaces further suggested that structure factors in the profile can reduce specular reflection more than totally random roughness.
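
    The role of the surface autocorrelation function in the numerical surface generation mentioned above can be illustrated with a small sketch that builds a random rough profile with a prescribed Gaussian autocorrelation by spectral filtering of white noise; the Gaussian form of the ACF and the parameter names are assumptions for illustration, not the profiles of the fabricated samples.

      import numpy as np

      def gaussian_correlated_profile(n, dx, rms_height, corr_len, seed=0):
          # Random rough profile with a target rms height and a Gaussian
          # autocorrelation exp(-(x / corr_len)**2), built by filtering white
          # noise with the square root of the matching power spectral density.
          rng = np.random.default_rng(seed)
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
          psd = np.exp(-(k * corr_len) ** 2 / 4.0)      # Fourier pair of the ACF
          spectrum = np.fft.fft(rng.standard_normal(n)) * np.sqrt(psd)
          h = np.fft.ifft(spectrum).real
          return rms_height * h / h.std()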

  20. Coarsening in 3D nonconserved Ising model at zero temperature: Anomaly in structure and slow relaxation of order-parameter autocorrelation

    Science.gov (United States)

    Chakraborty, Saikat; Das, Subir K.

    2017-09-01

    Via Monte Carlo simulations we study pattern and aging during coarsening in a nonconserved nearest-neighbor Ising model, following quenches from infinite to zero temperature, in space dimension d = 3. The decay of the order-parameter autocorrelation function appears to obey a power-law behavior, as a function of the ratio between the observation and waiting times, in the large ratio limit. However, the exponent of the power law, estimated accurately via a state-of-the-art method, violates a well-known lower bound. This surprising fact has been discussed in connection with a quantitative picture of the structural anomaly that the 3D Ising model exhibits during coarsening at zero temperature. These results are compared with those for quenches to a temperature above that of the roughening transition.
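
    A minimal sketch of the aging analysis described above, assuming spin configurations (arrays of ±1) have been saved at a list of Monte Carlo times after a quench to zero temperature; the function names and the cut on t/t_w used for the power-law fit are illustrative choices.

      import numpy as np

      def order_parameter_autocorrelation(snapshots, i_w):
          # C(t, t_w) = < s_i(t) s_i(t_w) >, averaged over lattice sites,
          # for every stored time t >= t_w (i_w indexes the waiting time).
          s_w = snapshots[i_w]
          return np.array([np.mean(s * s_w) for s in snapshots[i_w:]])

      def power_law_exponent(times, c, i_w, min_ratio=10.0):
          # Fit C ~ (t / t_w)**(-lambda/z) in the large-ratio limit; returns lambda/z.
          x = np.asarray(times[i_w:], dtype=float) / times[i_w]
          mask = (x >= min_ratio) & (c > 0)
          slope, _ = np.polyfit(np.log(x[mask]), np.log(c[mask]), 1)
          return -slope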

  1. Computation and analysis of the transverse current autocorrelation function, Ct(k,t), for small wave vectors: A molecular-dynamics study for a Lennard-Jones fluid

    Science.gov (United States)

    Vogelsang, R.; Hoheisel, C.

    1987-02-01

    Molecular-dynamics (MD) calculations are reported for three thermodynamic states of a Lennard-Jones fluid. Systems of 2048 particles and 10^5 integration steps were used. The transverse current autocorrelation function, Ct(k,t), has been determined for wave vectors in the range 0.5 < k < …, yielding k-dependent shear viscosities which showed a systematic behavior as a function of k. Extrapolation to the hydrodynamic region at k = 0 gave shear viscosity coefficients in good agreement with direct Green-Kubo results obtained in previous work. The two-exponential model fit for the memory function proposed by other authors does not provide a reasonable description of the MD results, as the fit parameters show no systematic wave-vector dependence, although the Ct(k,t) functions are somewhat better fitted. Similarly, the semiempirical interpolation formula for the decay time based on the viscoelastic concept proposed by Akcasu and Daniels fails to reproduce the correct k dependence for the wavelength range investigated herein.
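
    A minimal sketch of how Ct(k,t) and a k-dependent shear viscosity can be extracted from stored trajectories, assuming particle positions and velocities are available at every step and the wave vector points along x. The simple exponential (hydrodynamic) fit of the decay is an illustrative choice, not the memory-function fits discussed in the abstract.

      import numpy as np

      def transverse_current_acf(x_pos, y_vel, k, n_lags):
          # j_T(k, t) = sum_i v_{i,y} exp(i k x_i);  Ct(k, t) = <j(t0 + t) j*(t0)> / N.
          # x_pos and y_vel have shape (n_steps, n_particles).
          j = np.sum(y_vel * np.exp(1j * k * x_pos), axis=1)
          n = len(j)
          acf = np.array([np.mean((j[lag:] * np.conj(j[:n - lag])).real)
                          for lag in range(n_lags)])
          return acf / x_pos.shape[1]

      def shear_viscosity(acf, dt, k, mass_density):
          # Hydrodynamic limit: Ct(k,t)/Ct(k,0) ~ exp(-(eta / rho_m) k^2 t),
          # so eta(k) = rho_m * Gamma / k^2 with Gamma the fitted decay rate.
          c = acf / acf[0]
          t = dt * np.arange(len(c))
          mask = c > 0.1                      # crude: fit only the initial decay
          gamma = -np.polyfit(t[mask], np.log(c[mask]), 1)[0]
          return mass_density * gamma / k ** 2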

  2. Sample preparation in foodomic analyses.

    Science.gov (United States)

    Martinović, Tamara; Šrajer Gajdošik, Martina; Josić, Djuro

    2018-04-16

    Representative sampling and adequate sample preparation are key factors for successful performance of further steps in foodomic analyses, as well as for correct data interpretation. Incorrect sampling and improper sample preparation can be sources of severe bias in foodomic analyses, and it is well known that errors introduced by wrong sampling or sample treatment cannot be corrected afterwards. These facts, frequently neglected in the past, are now taken into consideration, and the progress in sampling and sample preparation in foodomics is reviewed here. We report the use of highly sophisticated instruments for both high-performance and high-throughput analyses, as well as miniaturization and the use of laboratory robotics in metabolomics, proteomics, peptidomics and genomics.

  3. Continuous and split-course radiotherapy in locally advanced carcinoma of the uterine cervix. Analyses of local control, distant metastases, crude survival, early and late morbidity and prognostic factors

    International Nuclear Information System (INIS)

    Pedersen, D.E.

    1994-01-01

    From 1974 to 1984, 442 consecutive patients with carcinoma of the uterine cervix were referred for combined intracavitary (IRT) and external radiotherapy (ERT). Dose prescriptions were performed based on the points A and B of the Manchester system. From 1978 the treatment strategy was changed from continuous (CRT) to split-course radiotherapy (SCRT) with a higher total dose to point B, a lower dose to point A from the IRT, and a longer total treatment time (TTT). The purpose of the present thesis is: To evaluate local tumour control, distant metastases, survival and complications in the rectosigmoid and bladder in relation to treatment strategy (continuous and split-course radiotherapy). To evaluate prognostic factors and the importance of treatment strategy for local control, distant metastases, and survival by uni- and multivariate analyses. To develop a classification system (AADK, Aarhus, Denmark) for the recording of early and late radiation complications, allowing an estimation of the importance of latency when reporting late radiotherapeutic morbidity and a rescoring of complication grade, and to compare results from AADK with those from the French-Italian glossary recording the maximal damage. To evaluate early and late radiotherapeutic morbidity and the importance of latency by comparing frequencies and actuarial estimates of late complications, and to estimate the combined late organ morbidity and the probability of being alive, cured and without serious complications. (EG) (61 refs.)

  4. Quantifying Temporal Autocorrelations for the Expression of Geobacter species mRNA Gene Transcripts at Variable Ammonium Levels during in situ U(VI) Bioremediation

    Science.gov (United States)

    Mouser, P. J.

    2010-12-01

    In order to develop decision-making tools for the prediction and optimization of subsurface bioremediation strategies, we must be able to link the molecular-scale activity of microorganisms involved in remediation processes with biogeochemical processes observed at the field scale. This requires the ability to quantify changes in the in situ metabolic condition of dominant microbes and associate these changes with fluctuations in nutrient levels throughout the bioremediation process. It also necessitates an understanding of the spatiotemporal variability of the molecular-scale information in order to develop meaningful parameters and constraint ranges in complex bio-physio-chemical models. The expression of three Geobacter species genes (ammonium transporter (amtB), nitrogen fixation (nifD), and a housekeeping gene (recA)) was tracked at two monitoring locations that differed significantly in ammonium (NH4+) concentrations during a field-scale experiment where acetate was injected into the subsurface to stimulate Geobacteraceae in a uranium-contaminated aquifer. Analysis of amtB and nifD mRNA transcript levels indicated that NH4+ was the primary form of fixed nitrogen during bioremediation. Overall expression levels of amtB were on average 8-fold higher at NH4+ concentrations of 300 μM or more than at lower NH4+ levels (average 60 μM). The degree of temporal correlation in Geobacter species mRNA expression levels was calculated at both locations using autocorrelation methods that describe the relationship between sample semi-variance and time lag. At the monitoring location with lower NH4+, a temporal correlation lag of 8 days was observed for both amtB and nifD transcript patterns. At the location where higher NH4+ levels were observed, no discernible temporal correlation lag above the sampling frequency (approximately every 2 days) was observed for amtB or nifD transcript fluctuations. Autocorrelation trends in recA expression levels at both locations indicated that

  5. Bioinformatic Analyses of Subgroup-A Members of the Wheat bZIP Transcription Factor Family and Functional Identification of TabZIP174 Involved in Drought Stress Response

    Directory of Open Access Journals (Sweden)

    Xueyin Li

    2016-11-01

    Full Text Available Extensive studies in Arabidopsis and rice have demonstrated that Subgroup-A members of the bZIP transcription factor family play important roles in plant responses to multiple abiotic stresses. Although common wheat (Triticum aestivum) is one of the most widely cultivated and consumed food crops in the world, there are limited investigations into Subgroup A of the bZIP family in wheat. In this study, we performed bioinformatic analyses of the 41 Subgroup-A members of the wheat bZIP family. Phylogenetic and conserved motif analyses showed that most of the Subgroup-A bZIP proteins involved in abiotic stress responses of wheat, Arabidopsis and rice clustered in Clade A1 of the phylogenetic tree, and shared a majority of conserved motifs, suggesting the potential importance of Clade-A1 members in abiotic stress responses. Gene structure analysis showed that TabZIP genes with close phylogenetic relationships tended to possess similar exon-intron compositions, and the positions of introns in the hinge regions of the bZIP domains were highly conserved, whereas introns in the leucine zipper regions were at variable positions. Additionally, eleven groups of homologs and two groups of tandem paralogs were also identified in Subgroup A of the wheat bZIP family. Expression profiling analysis indicated that most Subgroup-A TabZIP genes were responsive to abscisic acid and various abiotic stress treatments. TabZIP27, TabZIP74, TabZIP138 and TabZIP174 proteins were localized in the nucleus of wheat protoplasts, whereas TabZIP9-GFP fusion protein was simultaneously present in the nucleus, cytoplasm and cell membrane. Transgenic Arabidopsis overexpressing TabZIP174 displayed increased seed germination rates and primary root lengths under drought treatments. Overexpression of TabZIP174 in transgenic Arabidopsis conferred enhanced drought tolerance, and transgenic plants exhibited lower water loss rates, higher survival rates, higher proline, soluble sugar and leaf

  6. Bioinformatic Analyses of Subgroup-A Members of the Wheat bZIP Transcription Factor Family and Functional Identification of TabZIP174 Involved in Drought Stress Response

    Science.gov (United States)

    Li, Xueyin; Feng, Biane; Zhang, Fengjie; Tang, Yimiao; Zhang, Liping; Ma, Lingjian; Zhao, Changping; Gao, Shiqing

    2016-01-01

    Extensive studies in Arabidopsis and rice have demonstrated that Subgroup-A members of the bZIP transcription factor family play important roles in plant responses to multiple abiotic stresses. Although common wheat (Triticum aestivum) is one of the most widely cultivated and consumed food crops in the world, there are limited investigations into Subgroup A of the bZIP family in wheat. In this study, we performed bioinformatic analyses of the 41 Subgroup-A members of the wheat bZIP family. Phylogenetic and conserved motif analyses showed that most of the Subgroup-A bZIP proteins involved in abiotic stress responses of wheat, Arabidopsis, and rice clustered in Clade A1 of the phylogenetic tree, and shared a majority of conserved motifs, suggesting the potential importance of Clade-A1 members in abiotic stress responses. Gene structure analysis showed that TabZIP genes with close phylogenetic relationships tended to possess similar exon–intron compositions, and the positions of introns in the hinge regions of the bZIP domains were highly conserved, whereas introns in the leucine zipper regions were at variable positions. Additionally, eleven groups of homologs and two groups of tandem paralogs were also identified in Subgroup A of the wheat bZIP family. Expression profiling analysis indicated that most Subgroup-A TabZIP genes were responsive to abscisic acid and various abiotic stress treatments. TabZIP27, TabZIP74, TabZIP138, and TabZIP174 proteins were localized in the nucleus of wheat protoplasts, whereas TabZIP9-GFP fusion protein was simultaneously present in the nucleus, cytoplasm, and cell membrane. Transgenic Arabidopsis overexpressing TabZIP174 displayed increased seed germination rates and primary root lengths under drought treatments. Overexpression of TabZIP174 in transgenic Arabidopsis conferred enhanced drought tolerance, and transgenic plants exhibited lower water loss rates, higher survival rates, higher proline, soluble sugar, and leaf chlorophyll

  7. Advancing the application of systems thinking in health: analysing the contextual and social network factors influencing the use of sustainability indicators in a health system--a comparative study in Nepal and Somaliland.

    Science.gov (United States)

    Blanchet, Karl; Palmer, Jennifer; Palanchowke, Raju; Boggs, Dorothy; Jama, Ali; Girois, Susan

    2014-08-26

    Health systems strengthening is becoming a key component of development agendas for low-income countries worldwide. Systems thinking emphasizes the role of diverse stakeholders in designing solutions to system problems, including sustainability. The objective of this paper is to compare the definition and use of sustainability indicators developed through the Sustainability Analysis Process in two rehabilitation sectors, one in Nepal and one in Somaliland, and analyse the contextual factors (including the characteristics of system stakeholder networks) influencing the use of sustainability data. Using the Sustainability Analysis Process, participants collectively clarified the boundaries of their respective systems, defined sustainability, and identified sustainability indicators. Baseline indicator data was gathered, where possible, and then researched again 2 years later. As part of the exercise, system stakeholder networks were mapped at baseline and at the 2-year follow-up. We compared stakeholder networks and interrelationships with baseline and 2-year progress toward self-defined sustainability goals. Using in-depth interviews and observations, additional contextual factors affecting the use of sustainability data were identified. Differences in the selection of sustainability indicators selected by local stakeholders from Nepal and Somaliland reflected differences in the governance and structure of the present rehabilitation system. At 2 years, differences in the structure of social networks were more marked. In Nepal, the system stakeholder network had become more dense and decentralized. Financial support by an international organization facilitated advancement toward self-identified sustainability goals. In Somaliland, the small, centralised stakeholder network suffered a critical rupture between the system's two main information brokers due to competing priorities and withdrawal of international support to one of these. Progress toward self

  8. The prevalence of chronic diseases and major disease risk factors at different ages among 150 000 men and women living in Mexico City: cross-sectional analyses of a prospective study

    Directory of Open Access Journals (Sweden)

    Peto Richard

    2009-01-01

    Full Text Available Abstract Background While most of the global burden from chronic diseases, and especially vascular diseases, is now borne by low and middle-income countries, few large-scale epidemiological studies of chronic diseases in such countries have been performed. Methods From 1998–2004, 52 584 men and 106 962 women aged ≥35 years were visited in their homes in Mexico City. Self reported diagnoses of chronic diseases and major disease risk factors were ascertained and physical measurements taken. Age- and sex-specific prevalences and means were analysed. Results After about age 50 years, diabetes was extremely common – for example, 23.8% of men and 26.9% of women aged 65–74 reported a diagnosis. By comparison, ischaemic heart disease was reported by 4.8% of men and 3.0% of women aged 65–74, a history of stroke by 2.8% and 2.3%, respectively, and a history of cancer by 1.3% and 2.1%. Cancer history was generally more common among women than men – the excess being largest in middle-age, due to breast and cervical cancer. At older ages, the gap narrowed because of an increasing prevalence of prostate cancer. 51% of men and 25% of women aged 35–54 smoked cigarettes, while 29% of men and 41% of women aged 35–54 were obese (i.e. BMI ≥30 kg/m2). The prevalence of treated hypertension or measured blood pressure ≥140/90 mmHg increased about 50% more steeply with age among women than men, to 66% of women and 58% of men aged 65–74. Physical inactivity was highly prevalent but daily alcohol drinking was relatively uncommon. Conclusion Diabetes, obesity and tobacco smoking are highly prevalent among adults living in Mexico City. Long-term follow-up of this and other cohorts will establish the relevance of such factors to the major causes of death and disability in Mexico.

  9. Rolling element bearing fault diagnosis based on Over-Complete rational dilation wavelet transform and auto-correlation of analytic energy operator

    Science.gov (United States)

    Singh, Jaskaran; Darpe, A. K.; Singh, S. P.

    2018-02-01

    Local damage in rolling element bearings usually generates periodic impulses in vibration signals. The severity, repetition frequency and the fault excited resonance zone by these impulses are the key indicators for diagnosing bearing faults. In this paper, a methodology based on over complete rational dilation wavelet transform (ORDWT) is proposed, as it enjoys a good shift invariance. ORDWT offers flexibility in partitioning the frequency spectrum to generate a number of subbands (filters) with diverse bandwidths. The selection of the optimal filter that perfectly overlaps with the bearing fault excited resonance zone is based on the maximization of a proposed impulse detection measure "Temporal energy operated auto correlated kurtosis". The proposed indicator is robust and consistent in evaluating the impulsiveness of fault signals in presence of interfering vibration such as heavy background noise or sporadic shocks unrelated to the fault or normal operation. The structure of the proposed indicator enables it to be sensitive to fault severity. For enhanced fault classification, an autocorrelation of the energy time series of the signal filtered through the optimal subband is proposed. The application of the proposed methodology is validated on simulated and experimental data. The study shows that the performance of the proposed technique is more robust and consistent in comparison to the original fast kurtogram and wavelet kurtogram.
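
    The energy/autocorrelation step of such a scheme can be sketched as follows, assuming a raw vibration signal and a candidate resonance band have already been selected. The Teager-Kaiser operator is used here as a stand-in for the analytic energy operator, and the filter order and band edges are illustrative assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def teager_energy(x):
          # Teager-Kaiser energy operator: psi[x](n) = x(n)**2 - x(n-1) * x(n+1).
          return x[1:-1] ** 2 - x[:-2] * x[2:]

      def energy_autocorrelation(vibration, fs, band, order=4):
          # Band-pass into the fault-excited resonance zone, compute the energy
          # time series, and autocorrelate it; periodic peaks in the result sit
          # at multiples of the fault period.
          lo, hi = band[0] / (fs / 2.0), band[1] / (fs / 2.0)
          b, a = butter(order, [lo, hi], btype="band")
          x = filtfilt(b, a, vibration)
          e = teager_energy(x)
          e = e - e.mean()
          ac = np.correlate(e, e, mode="full")[len(e) - 1:]
          return ac / ac[0]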

  10. Discussion on sensor location in circular array for spatial autocorrelation method; Kukan jiko sokanho no enkei array ni okeru jishinkei haichi no kento

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Iwamoto, K; Saito, T; Yoshida, A [Iwate University, Iwate (Japan). Faculty of Engineering

    1997-05-27

    Methods to derive underground structures by utilizing the dispersion phenomenon of surface waves contained in microtremors include the frequency-wave number analysis method (the F-K method) and the spatial autocorrelation method (SAC method). The SAC method is said to be capable of estimating structures to greater depths than the F-K method if the same seismometers are used. However, the F-K method is used more frequently. This is because the SAC method imposes a strict restriction that seismometers must be arranged evenly on the same circumference, while the F-K method allows seismometers to be arranged arbitrarily during an observation. Therefore, the present study has examined whether the SAC method can be applied to observations with the seismometers arranged in the same way as in the F-K method, by using microtremor data acquired from actual observations. It was made clear that the seismometer arrangement for the SAC method can be satisfied with as few as three seismometers placed on the same circumference. These seismometers do not have to be arranged evenly, but because the derived phase velocities may vary according to wave arrival direction and seismometer arrangement, it is desirable to perform observations with seismometers arranged as evenly as possible. 13 figs.

  11. Spot auto-focusing and spot auto-stigmation methods with high-definition auto-correlation function in high-resolution TEM.

    Science.gov (United States)

    Isakozawa, Shigeto; Fuse, Taishi; Amano, Junpei; Baba, Norio

    2018-04-01

    As alternatives to the diffractogram-based method in high-resolution transmission electron microscopy, a spot auto-focusing (AF) method and a spot auto-stigmation (AS) method are presented with a unique high-definition auto-correlation function (HD-ACF). The HD-ACF clearly resolves the ACF central peak region in small amorphous-thin-film images, reflecting the phase contrast transfer function. At a 300-k magnification for a 120-kV transmission electron microscope, the smallest areas used are 64 × 64 pixels (~3 nm2) for the AF and 256 × 256 pixels for the AS. A useful advantage of these methods is that the AF function has an allowable accuracy even for a low s/n (~1.0) image. A reference database on the defocus dependency of the HD-ACF by the pre-acquisition of through-focus amorphous-thin-film images must be prepared to use these methods. This can be very beneficial because the specimens are not limited to approximations of weak phase objects but can be extended to objects outside such approximations.
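
    A minimal sketch of an FFT-based image autocorrelation and a crude anisotropy measure of its central peak, in the spirit of the ACF-based focus and stigmation metrics described above; the patch size, the probe radius and the specific measure are illustrative assumptions, not the HD-ACF definition used in the paper.

      import numpy as np

      def image_autocorrelation(patch):
          # 2-D autocorrelation of an amorphous-film image patch via the
          # Wiener-Khinchin relation, shifted so the central peak is centred.
          x = patch - patch.mean()
          acf = np.fft.ifft2(np.abs(np.fft.fft2(x)) ** 2).real
          acf = np.fft.fftshift(acf)
          return acf / acf.max()

      def central_peak_anisotropy(acf, radius=4):
          # Ratio of the ACF decay along the two image axes; astigmatism
          # elongates the central peak and pushes this ratio away from 1.
          cy, cx = acf.shape[0] // 2, acf.shape[1] // 2
          return acf[cy, cx + radius] / acf[cy + radius, cx]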

  12. Application of lag-k autocorrelation coefficient and the TGA signals approach to detecting and quantifying adulterations of extra virgin olive oil with inferior edible oils

    Energy Technology Data Exchange (ETDEWEB)

    Torrecilla, Jose S., E-mail: jstorre@quim.ucm.es [Department of Chemical Engineering, Faculty of Chemistry, University Complutense of Madrid, 28040 Madrid (Spain); Garcia, Julian; Garcia, Silvia; Rodriguez, Francisco [Department of Chemical Engineering, Faculty of Chemistry, University Complutense of Madrid, 28040 Madrid (Spain)

    2011-03-04

    The combination of lag-k autocorrelation coefficients (LCCs) and thermogravimetric analyzer (TGA) equipment is defined here as a tool to detect and quantify adulterations of extra virgin olive oil (EVOO) with refined olive (ROO), refined olive pomace (ROPO), sunflower (SO) or corn (CO) oils, when the adulterating agent concentrations are less than 14%. The LCC is calculated from TGA scans of adulterated EVOO samples. Then, the standardized skewness of this coefficient has been applied to classify pure and adulterated samples of EVOO. In addition, this chaotic parameter has also been used to quantify the concentration of adulterant agents, by using successful linear correlations between LCCs and ROO, ROPO, SO or CO concentrations in 462 adulterated EVOO samples. In the case of detection, more than 82% of adulterated samples have been correctly classified. In the case of quantification of adulterant concentration, by an external validation process, the LCC/TGA approach estimates the adulterant agent concentration with a mean correlation coefficient (estimated versus real adulterant agent concentration) greater than 0.90 and a mean square error of less than 4.9%.
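
    A minimal sketch of the two quantities named above, assuming a TGA scan is available as a one-dimensional mass-loss array; the default lag and the standardization of the skewness by its approximate standard error sqrt(6/n) are illustrative choices, not the exact definitions of the paper.

      import numpy as np
      from scipy.stats import skew

      def lag_k_autocorrelation(scan, k=1):
          # Lag-k autocorrelation coefficient of a TGA scan
          # (mass signal versus time or temperature).
          y = np.asarray(scan, dtype=float)
          y = y - y.mean()
          return np.sum(y[:-k] * y[k:]) / np.sum(y ** 2)

      def standardized_skewness(lcc_values):
          # Skewness of a set of LCCs divided by its approximate standard
          # error sqrt(6 / n); large absolute values flag asymmetry in the
          # distribution, here used to separate pure from adulterated samples.
          v = np.asarray(lcc_values, dtype=float)
          return skew(v) / np.sqrt(6.0 / len(v))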

  13. Application of lag-k autocorrelation coefficient and the TGA signals approach to detecting and quantifying adulterations of extra virgin olive oil with inferior edible oils

    International Nuclear Information System (INIS)

    Torrecilla, Jose S.; Garcia, Julian; Garcia, Silvia; Rodriguez, Francisco

    2011-01-01

    The combination of lag-k autocorrelation coefficients (LCCs) and thermogravimetric analyzer (TGA) equipment is defined here as a tool to detect and quantify adulterations of extra virgin olive oil (EVOO) with refined olive (ROO), refined olive pomace (ROPO), sunflower (SO) or corn (CO) oils, when the adulterating agent concentrations are less than 14%. The LCC is calculated from TGA scans of adulterated EVOO samples. Then, the standardized skewness of this coefficient has been applied to classify pure and adulterated samples of EVOO. In addition, this chaotic parameter has also been used to quantify the concentration of adulterant agents, by using successful linear correlations between LCCs and ROO, ROPO, SO or CO concentrations in 462 adulterated EVOO samples. In the case of detection, more than 82% of adulterated samples have been correctly classified. In the case of quantification of adulterant concentration, by an external validation process, the LCC/TGA approach estimates the adulterant agent concentration with a mean correlation coefficient (estimated versus real adulterant agent concentration) greater than 0.90 and a mean square error of less than 4.9%.

  14. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating......The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits...... mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  15. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach....... Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  16. Risk analysis of fuel pontoons; Risico-analyse brandstofpontons

    NARCIS (Netherlands)

    Uijt de Haag P; Post J; LSO

    2001-01-01

    To determine the risks posed by fuel pontoons in a marina, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. It was assumed that the pontoon is located in a

  17. Fast multichannel analyser

    Energy Technology Data Exchange (ETDEWEB)

    Berry, A; Przybylski, M M; Sumner, I [Science Research Council, Daresbury (UK). Daresbury Lab.

    1982-10-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format.

  18. A fast multichannel analyser

    International Nuclear Information System (INIS)

    Berry, A.; Przybylski, M.M.; Sumner, I.

    1982-01-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format. (orig.)

  19. Genome-Wide Analyses of the NAC Transcription Factor Gene Family in Pepper (Capsicum annuum L.: Chromosome Location, Phylogeny, Structure, Expression Patterns, Cis-Elements in the Promoter, and Interaction Network

    Directory of Open Access Journals (Sweden)

    Weiping Diao

    2018-03-01

    Full Text Available The NAM, ATAF1/2, and CUC2 (NAC) transcription factors form a large plant-specific gene family, which is involved in the regulation of tissue development in response to biotic and abiotic stress. To date, there have been no comprehensive studies investigating chromosomal location, gene structure, gene phylogeny, conserved motifs, or gene expression of NAC in pepper (Capsicum annuum L.). The recent release of the complete genome sequence of pepper allowed us to perform a genome-wide investigation of Capsicum annuum L. NAC (CaNAC) proteins. In the present study, a comprehensive analysis of the CaNAC gene family in pepper was performed, and a total of 104 CaNAC genes were identified. Genome mapping analysis revealed that CaNAC genes were enriched on four chromosomes (chromosomes 1, 2, 3, and 6). In addition, phylogenetic analysis of the NAC domains from pepper, potato, Arabidopsis, and rice showed that CaNAC genes could be clustered into three groups (I, II, and III). Group III, which contained 24 CaNAC genes, was exclusive to the Solanaceae plant family. Gene structure and protein motif analyses showed that these genes were relatively conserved within each subgroup. The number of introns in CaNAC genes varied from 0 to 8, with 83 (78.9%) of CaNAC genes containing two or less introns. Promoter analysis confirmed that CaNAC genes are involved in pepper growth, development, and biotic or abiotic stress responses. Further, the expression of 22 selected CaNAC genes in response to seven different biotic and abiotic stresses [salt, heat shock, drought, Phytophthora capsici, abscisic acid, salicylic acid (SA), and methyl jasmonate (MeJA)] was evaluated by quantitative RT-PCR to determine their stress-related expression patterns. Several putative stress-responsive CaNAC genes, including CaNAC72 and CaNAC27, which are orthologs of the known stress-responsive Arabidopsis gene ANAC055 and potato gene StNAC30, respectively, were highly regulated by treatment with

  20. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  1. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent as straw in gasification. Any direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was not found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  2. Mono-static GPR without transmitting anything for pavement damage inspection: interferometry by auto-correlation applied to mobile phone signals

    Science.gov (United States)

    Feld, R.; Slob, E. C.; Thorbecke, J.

    2015-12-01

    Creating virtual sources at locations where physical receivers have measured a response is known as seismic interferometry. A much appreciated benefit of interferometry is its independence of the actual source locations. The use of ambient noise as actual source is therefore not uncommon in this field. Ambient noise can be commercial noise, like for example mobile phone signals. For GPR this can be useful in cases where it is not possible to place a source, for instance when it is prohibited by laws and regulations. A mono-static GPR antenna can measure ambient noise. Interferometry by auto-correlation (AC) places a virtual source on this antenna's position, without actually transmitting anything. This can be used for pavement damage inspection. Earlier work showed very promising results with 2D numerical models of damaged pavement. 1D and 2D heterogeneities were compared, both modelled in a 2D pavement world. In a 1D heterogeneous model energy leaks away to the sides, whereas in a 2D heterogeneous model rays can reflect and therefore still add to the signal reconstruction (see illustration). In the first case the amount of stationary points is strictly limited, while in the other case the amount of stationary points is very large. We extend these models to a 3D world and optimise an experimental configuration. The illustration originates from the journal article under submission 'Non-destructive pavement damage inspection by mono-static GPR without transmitting anything' by R. Feld, E.C. Slob, and J.W. Thorbecke. (a) 2D heterogeneous pavement model with three irregular-shaped misalignments between the base and subbase layer (marked by arrows). Mono-antenna B-scan positions are shown schematically. (b) Ideal output: a real source at the receiver's position. The difference w.r.t. the trace found in the middle is shown. (c) AC output: a virtual source at the receiver's position. There is a clear overlap with the ideal output.
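
    A minimal sketch of interferometry by autocorrelation for a single mono-static antenna, assuming a long passive record split into windows; stacking the windowed autocorrelations approximates the zero-offset reflection response (a virtual source at the receiver) convolved with the autocorrelation of the unknown noise wavelet. Names and the windowing choice are illustrative assumptions.

      import numpy as np

      def autocorrelation(trace, n_lags):
          # Normalised one-sided autocorrelation of a single noise window.
          x = np.asarray(trace, dtype=float)
          x = x - x.mean()
          ac = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + n_lags]
          return ac / ac[0]

      def virtual_zero_offset_response(noise, fs, window_s=1e-6, n_lags=512):
          # Split the passive record into windows, autocorrelate each one and
          # stack; the stack stands in for a zero-offset (source = receiver) trace.
          n_win = int(window_s * fs)
          n_lags = min(n_lags, n_win)
          windows = [noise[i:i + n_win] for i in range(0, len(noise) - n_win + 1, n_win)]
          return np.mean([autocorrelation(w, n_lags) for w in windows], axis=0)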

  3. Spatial Autocorrelation, Source Water and the Distribution of Total and Viable Microbial Abundances within a Crystalline Formation to a Depth of 800 m

    Directory of Open Access Journals (Sweden)

    E. D. Beaton

    2017-09-01

    Full Text Available Proposed radioactive waste repositories require long residence times within deep geological settings for which we have little knowledge of local or regional subsurface dynamics that could affect the transport of hazardous species over the period of radioactive decay. Given the role of microbial processes on element speciation and transport, knowledge and understanding of local microbial ecology within geological formations being considered as host formations can aid predictions for long term safety. In this relatively unexplored environment, sampling opportunities are few and opportunistic. We combined the data collected for geochemistry and microbial abundances from multiple sampling opportunities from within a proposed host formation and performed multivariate mixing and mass balance (M3) modeling, spatial analysis and generalized linear modeling to address whether recharge can explain how subsurface communities assemble within fracture water obtained from multiple saturated fractures accessed by boreholes drilled into the crystalline formation underlying the Chalk River Laboratories site (Deep River, ON, Canada). We found that three possible source waters, each of meteoric origin, explained 97% of the samples; these are: modern recharge, recharge from the period of the Laurentide ice sheet retreat (ca. ∼12000 years before present) and a putative saline source assigned as Champlain Sea (also ca. 12000 years before present). The distributed microbial abundances and geochemistry provide a conceptual model of two distinct regions within the subsurface associated with bicarbonate – used as a proxy for modern recharge – and manganese; these regions occur at depths relevant to a proposed repository within the formation. At the scale of sampling, the associated spatial autocorrelation means that abundances linked with geochemistry were not unambiguously discerned, although fine scale Moran’s eigenvector map (MEM) coefficients were correlated with

  4. Growth history and crown vine coverage are principal factors influencing growth and mortality rates of big-leaf mahogany Swietenia macrophylla in Brazil

    Science.gov (United States)

    James Grogan; R. Matthew Landis

    2009-01-01

    1. Current efforts to model population dynamics of high-value tropical timber species largely assume that individual growth history is unimportant to population dynamics, yet growth autocorrelation is known to adversely affect model predictions. In this study, we analyse a decade of annual census data from a natural population of big-leaf mahogany Swietenia macrophylla...

  5. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with 14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for 14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  6. AMS analyses at ANSTO

    International Nuclear Information System (INIS)

    Lawson, E.M.

    1998-01-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with 14C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for 14C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  7. Analyses of Sickness Absence

    NARCIS (Netherlands)

    Heijnen, S.M.M.

    2014-01-01

    Sickness absence is an empirical phenomenon of all time. Generally, it has a medical cause. However, other factors also appear to have an impact on the actual rate of sickness absence, such as the institutional setting, the business cycle and the economic structure. Many questions on the different

  8. Analyses of MHD instabilities

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki

    1985-01-01

    In this article, analyses of the MHD stabilities which govern the global behavior of a fusion plasma are described from the viewpoint of numerical computation. First, we describe the high-accuracy calculation of the MHD equilibrium and then the analysis of the linear MHD instability. The former is the basis of the stability analysis and the latter is closely related to the limiting beta value, which is a very important theoretical issue of tokamak research. To attain a stable tokamak plasma with good confinement properties it is necessary to control or suppress disruptive instabilities. We next describe the nonlinear MHD instabilities which relate to the disruption phenomena. Lastly, we describe the vectorization of the MHD codes. The above MHD codes for fusion plasma analyses are relatively simple though very time-consuming; the parts of the codes that need a lot of CPU time are concentrated in a small portion of the code, and the codes are usually used by their developers themselves, which makes it comparatively easy to attain a high performance ratio on the vector processor. (author)

  9. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  10. Establishing and Analysing the Model of the Factors Affecting the Profits of Private House Mortgage Loans%个人住房抵押贷款利润影响因素的建模与分析

    Institute of Scientific and Technical Information of China (English)

    彭宜钟; 肖俊喜; 王庆石

    2003-01-01

    Based on a real bank case, this paper explores the main factors that affect a bank's profit from housing mortgage loans, and then quantifies and models them in terms of profit formation. These models help the bank analyze and manipulate these factors in order to obtain satisfactory profits.

  11. A simple beam analyser

    International Nuclear Information System (INIS)

    Lemarchand, G.

    1977-01-01

    (ee'p) experiments allow measurement of the missing energy distribution as well as the momentum distribution of the extracted proton in the nucleus versus the missing energy. Such experiments are presently conducted on SACLAY's A.L.S. 300 Linac. Electrons and protons are respectively analysed by two spectrometers and detected in their focal planes. Counting rates are usually low and include time coincidences and accidentals. The signal-to-noise ratio is dependent on the physics of the experiment and the resolution of the coincidence, therefore it is mandatory to get a beam current distribution as flat as possible. Using new technologies has allowed monitoring in real time the behavior of the beam pulse and determining when the duty cycle can be considered as being good with respect to a numerical basis

  12. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  13. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  14. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    When prosecuting crimes, the main question to answer is often who had a motive and the possibility to commit the crime. When investigating cyber crimes, the question of possibility is often hard to answer, as in a networked system almost any location can be accessed from almost anywhere. The most...... common tool to answer this question, analysis of log files, faces the problem that the amount of logged data may be overwhelming. This problem gets even worse in the case of insider attacks, where the attacker’s actions usually will be logged as permissible, standard actions—if they are logged at all....... Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set

  15. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
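
    As a rough illustration of the superposition idea (not the authors' exact construction), a class-ensemble matrix T can be assembled as a weighted, row-renormalised sum of the state-transition matrices of the individual networks in the class; the uniform weighting below is an assumption.

      import numpy as np

      def class_superposition(transition_matrices, weights=None):
          # Transition-by-transition superposition of the dynamics of every
          # network in the class; rows are renormalised so T stays stochastic.
          mats = np.asarray(transition_matrices, dtype=float)   # shape (m, S, S)
          if weights is None:
              weights = np.full(mats.shape[0], 1.0 / mats.shape[0])
          T = np.tensordot(weights, mats, axes=1)
          return T / T.sum(axis=1, keepdims=True)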

  16. Hydrodynamic description of the long-time tails of the linear and rotational velocity autocorrelation functions of a particle in a confined geometry.

    Science.gov (United States)

    Frydel, Derek; Rice, Stuart A

    2007-12-01

    We report a hydrodynamic analysis of the long-time behavior of the linear and angular velocity autocorrelation functions of an isolated colloid particle constrained to have quasi-two-dimensional motion, and compare the predicted behavior with the results of lattice-Boltzmann simulations. Our analysis uses the singularity method to characterize unsteady linear motion of an incompressible fluid. For bounded fluids we construct an image system with a discrete set of fundamental solutions of the Stokes equation from which we extract the long-time decay of the velocity. For the case that there are free slip boundary conditions at walls separated by H particle diameters, the time evolution of the parallel linear velocity and the perpendicular rotational velocity following impulsive excitation both correspond to the time evolution of a two-dimensional (2D) fluid with effective density ρ_2D = ρH. For the case that there are no slip boundary conditions at the walls, the same types of motion correspond to 2D fluid motions with a coefficient of friction ξ = π^2 ν/H^2, modulo a prefactor of order 1, with ν the kinematic viscosity. The linear particle motion perpendicular to the walls also experiences an effective frictional force, but the time dependence is proportional to t^(-2), which cannot be related to either pure 3D or pure 2D fluid motion. Our incompressible fluid model predicts correct self-diffusion constants but it does not capture all of the effects of the fluid confinement on the particle motion. In particular, the linear motion of a particle perpendicular to the walls is influenced by coupling between the density flux and the velocity field, which leads to damped velocity oscillations whose frequency is proportional to c_s/H, with c_s the velocity of sound. For particle motion parallel to no slip walls there is a slowing down of a density flux that spreads diffusively, which generates a long-time decay proportional to t^(-1).

  17. Lack of negative autocorrelations of daily food intake on successive days challenges the concept of the regulation of body weight in humans.

    Science.gov (United States)

    Levitsky, David A; Raea Limb, Ji Eun; Wilkinson, Lua; Sewall, Anna; Zhong, Yingyi; Olabi, Ammar; Hunter, Jean

    2017-09-01

    According to most theories, the amount of food consumed on one day should be negatively related to intake on subsequent days. Several studies have observed such a negative correlation between the amount consumed on one day and the amount consumed two to four days later. The present study attempted to replicate this observation by re-examining data from a previous study where all food ingested over a 30-day observation period was measured. Nine male and seven female participants received a vegan diet prepared, dispensed, and measured in a metabolic unit. Autocorrelations were performed on total food intake consumed on one day and that consumed one to five days later. A significant positive correlation was detected between the weight of food eaten on one day and the amount consumed on the following day (r = 0.29, 95% CI [0.37, 0.20]). No correlation was found between weights of food consumed on one day and up to twelve days later (r = 0.09, 95% CI [0.24, -0.06]), (r = 0.11, 95% CI [0.26, -0.0.26]) (r = 0.02, 95% CI [0.15, -0.7]) (r = -0.08, 95% CI [0.11, -0.09]). The same positive correlation with the previous day's intake was observed at the succeeding breakfast but not at either lunch or dinner. However, the participants underestimated their daily energy need resulting in a small, but statistically significant weight loss. Daily food intake increased slightly (13 g/day), but significantly, across the 30-day period. An analysis of the previous studies revealed that the negative correlations observed by others were caused by a statistical artifact resulting from normalizing data before testing for the correlations. These results, when combined with the published literature, indicate that there is little evidence that humans precisely compensate for the previous day's intake by altering the amount consumed on subsequent days. Moreover, the small but persistent increase in food intake suggests that physiological mechanisms that affect food intake
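
    For readers unfamiliar with the analysis, the sketch below shows how such day-to-day autocorrelations are typically computed as lagged Pearson correlations of a daily intake series; the 30-day series is simulated and the numbers are not the study's data.

```python
# Hedged sketch: lag-k autocorrelations of a daily intake series, in the spirit of the analysis
# described above. The 30-day intake series is simulated; the numbers are not the study's data.
import numpy as np

rng = np.random.default_rng(0)
intake = 1500 + rng.normal(0, 200, size=30)        # daily total food intake (g), hypothetical

def lag_autocorr(x, lag):
    """Pearson correlation between intake on day t and intake on day t + lag."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

for lag in range(1, 6):
    print(f"lag {lag} day(s): r = {lag_autocorr(intake, lag):+.3f}")
```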

  18. Seismic fragility analyses

    International Nuclear Information System (INIS)

    Kostov, Marin

    2000-01-01

    In the last two decades there has been an increasing number of probabilistic seismic risk assessments performed. The basic ideas of the procedure for performing a Probabilistic Safety Analysis (PSA) of critical structures (NUREG/CR-2300, 1983) could be used also for normal industrial and residential buildings, dams or other structures. The general formulation of the risk assessment procedure applied in this investigation is presented in Franzini, et al., 1984. The probability of failure of a structure for an expected lifetime (for example 50 years) can be obtained from the annual frequency of failure, β_E, determined by the relation β_E = ∫ [dβ(x)/dx] P(f|x) dx. β(x) is the annual frequency of exceedance of load level x (for example, the variable x may be peak ground acceleration), P(f|x) is the conditional probability of structure failure at a given seismic load level x. The problem leads to the assessment of the seismic hazard β(x) and the fragility P(f|x). The seismic hazard curves are obtained by the probabilistic seismic hazard analysis. The fragility curves are obtained after the response of the structure is defined as probabilistic and its capacity and the associated uncertainties are assessed. Finally the fragility curves are combined with the seismic loading to estimate the frequency of failure for each critical scenario. The frequency of failure due to a seismic event is presented by the scenario with the highest frequency. The tools usually applied for probabilistic safety analyses of critical structures could relatively easily be adapted to ordinary structures. The key problems are the seismic hazard definitions and the fragility analyses. The fragility could be derived either based on scaling procedures or on the base of generation. Both approaches have been presented in the paper. After the seismic risk (in terms of failure probability) is assessed there are several approaches for risk reduction. Generally the methods could be classified in two groups. The
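
    The sketch below illustrates how the hazard curve β(x) and fragility P(f|x) combine numerically into the annual failure frequency β_E; the power-law hazard and lognormal fragility parameters are assumed values, not results from the paper.

```python
# Hedged sketch: numerically combining a hazard curve beta(x) with a fragility curve P(f|x)
# into the annual failure frequency beta_E = integral of |d beta(x)/dx| P(f|x) dx.
# The power-law hazard and lognormal fragility parameters are assumed values.
import numpy as np
from scipy.stats import lognorm

pga = np.linspace(0.01, 2.0, 2000)                 # load level x: peak ground acceleration (g)
hazard = 1e-3 * (pga / 0.1) ** (-2.5)              # annual exceedance frequency beta(x), assumed
fragility = lognorm(s=0.4, scale=0.6).cdf(pga)     # P(f|x): median capacity 0.6 g, beta_c = 0.4

dbeta_dx = -np.gradient(hazard, pga)               # |d beta / dx| (beta decreases with x)
beta_E = np.trapz(dbeta_dx * fragility, pga)
print(f"annual frequency of failure ≈ {beta_E:.2e} per year")
```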

  19. Website-analyse

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2009-01-01

    or dead ends when he/she visits the site. Studies of the design and analysis of the visual and aesthetic aspects of planning and using websites have, however, only to a limited extent been the subject of reflective treatment. That is the background for this chapter, which opens with a review of aesthetics......The website is increasingly the preferred medium for information searching, company presentation, e-commerce, entertainment, teaching and social contact. In step with this growing diversity of communication activities on the web, more focus has been placed on optimising the design and...... planning of the functional and content-related aspects of websites. There is a large body of theory and methodology books which specialise in the technical issues related to interaction and navigation, as well as the linguistic content of websites. The Danish HCI (Human Computer Interaction...

  20. A channel profile analyser

    International Nuclear Information System (INIS)

    Gobbur, S.G.

    1983-01-01

    It is well understood that due to the wide band noise present in a nuclear analog-to-digital converter, events at the boundaries of adjacent channels are shared. It is a difficult and laborious process to exactly find out the shape of the channels at the boundaries. A simple scheme has been developed for the direct display of channel shape of any type of ADC on a cathode ray oscilloscope display. This has been accomplished by sequentially incrementing the reference voltage of a precision pulse generator by a fraction of a channel and storing ADC data in alternative memory locations of a multichannel pulse height analyser. Alternative channels are needed due to the sharing at the boundaries of channels. In the flat region of the profile alternate memory locations are channels with zero counts and channels with the full scale counts. At the boundaries all memory locations will have counts. The shape of this is a direct display of the channel boundaries. (orig.)

  1. Methodological challenges in carbohydrate analyses

    Directory of Open Access Journals (Sweden)

    Mary Beth Hall

    2007-07-01

    Full Text Available Carbohydrates can provide up to 80% of the dry matter in animal diets, yet their specific evaluation for research and diet formulation is only now becoming a focus in the animal sciences. Partitioning of dietary carbohydrates for nutritional purposes should reflect differences in digestion and fermentation characteristics and effects on animal performance. Key challenges to designating nutritionally important carbohydrate fractions include classifying the carbohydrates in terms of nutritional characteristics, and selecting analytical methods that describe the desired fraction. The relative lack of information on digestion characteristics of various carbohydrates and their interactions with other fractions in diets means that fractions will not soon be perfectly established. Developing a system of carbohydrate analysis that could be used across animal species could enhance the utility of analyses and the amount of data we can obtain on dietary effects of carbohydrates. Based on quantities present in diets and apparent effects on animal performance, some nutritionally important classes of carbohydrates that may be valuable to measure include sugars, starch, fructans, insoluble fiber, and soluble fiber. Essential to selection of methods for these fractions is agreement on precisely what carbohydrates should be included in each. Each of these fractions has analyses that could potentially be used to measure them, but most of the available methods have weaknesses that must be evaluated to see if they are fatal and the assay is unusable, or if the assay still may be made workable. Factors we must consider as we seek to analyze carbohydrates to describe diets: Does the assay accurately measure the desired fraction? Is the assay for research, regulatory, or field use (affects considerations of acceptable costs and throughput)? What are acceptable accuracy and variability of measures? Is the assay robust (enhances accuracy of values)? For some carbohydrates, we

  2. Onychomycosis of Toenails and Post-hoc Analyses with Efinaconazole 10% Solution Once-daily Treatment: Impact of Disease Severity and Other Concomitant Associated Factors on Selection of Therapy and Therapeutic Outcomes.

    Science.gov (United States)

    Del Rosso, James Q

    2016-02-01

    Topical treatment for toenail onychomycosis has been fraught with a long-standing reputation of poor efficacy, primarily due to physical properties of the nail unit that impede drug penetration. Newer topical agents have been formulated as solutions, which appear to provide better therapeutic response in properly selected patients. It is important to recognize the impact that mitigating and concomitant factors can have on efficacy. These factors include disease severity, gender, presence of tinea pedis, and diabetes. This article reviews results achieved in Phase 3 pivotal studies with topical efinaconazole 10% solution applied once daily for 48 weeks with a focus on how the aforementioned factors influenced therapeutic outcomes. It is important for clinicians treating patients for onychomycosis to evaluate severity, treat concomitant tinea pedis, address control of diabetes if present by encouraging involvement of the patient's primary care physician, and consider longer treatment courses when clinically relevant.

  3. The role of self-perceived usefulness and competence in the self-esteem of elderly adults: confirmatory factor analyses of the Bachman revision of Rosenberg's Self-Esteem Scale.

    Science.gov (United States)

    Ranzijn, R; Keeves, J; Luszcz, M; Feather, N T

    1998-03-01

    This article reports on a confirmatory factor analytic study of the Bachman Revision (1970) of Rosenberg's Self-Esteem Scale (1965) that was used in the Australian Longitudinal Study of Ageing (ALSA). Participants comprised 1,087 elderly people aged between 70 and 103 years (mean 77 years). Five competing factor models were tested with LISREL8. The best-fitting model was a nested one, with a General Self-Esteem second-order factor and two first-order factors, Positive Self-regard and Usefulness/Competence. This model was validated with data from a later wave of ALSA. Usefulness and competence have received little attention in the gerontological literature to date. Preliminary results indicate that usefulness/competence may be an important predictor of well-being. Further work is required on the relationships among usefulness, competence, self-esteem, and well-being in elderly people.

  4. Insights into the efficacy of golimumab plus methotrexate in patients with active rheumatoid arthritis who discontinued prior anti-tumour necrosis factor therapy: post-hoc analyses from the GO-AFTER study

    NARCIS (Netherlands)

    Smolen, Josef S.; Kay, Jonathan; Matteson, Eric L.; Landewé, Robert; Hsia, Elizabeth C.; Xu, Stephen; Zhou, Yiying; Doyle, Mittie K.

    2014-01-01

    Evaluate golimumab in patients with active rheumatoid arthritis (RA) and previous tumour necrosis factor-α (TNF) inhibitor use. Patients (n=461) previously receiving ≥1 TNF inhibitor were randomised to subcutaneous injections of placebo, golimumab 50 mg or golimumab 100 mg q4 weeks. Primary endpoint

  5. Functional analyses of lupulin gland-specific regulatory factors from WD40, bHLH and Myb families of hop (Humulus lupulus L.) show formation of crucial complexes activating chs_H1

    Czech Academy of Sciences Publication Activity Database

    Matoušek, Jaroslav; Patzak, J.; Kocábek, Tomáš; Füssy, Zoltán; Stehlík, Jan; Orctová, Lidmila; Duraisamy, Ganesh Selvaraj

    2011-01-01

    Roč. 64, č. 6 (2011), s. 151-155 ISSN 1613-2041 R&D Projects: GA ČR GA521/08/0740; GA MZe QH81052 Institutional research plan: CEZ:AV0Z50510513 Keywords : lupulin metabolome * Humulus lupulus L. * protein complexes * transcription factors Subject RIV: EB - Genetics ; Molecular Biology

  6. NOAA's National Snow Analyses

    Science.gov (United States)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based, snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km² spatial resolution to a 1 km² resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km² spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km² spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational, products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products

  7. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    . Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
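
    A minimal sketch of the kernel trick behind kernel PCA is shown below: a Gram matrix is built with an RBF kernel, double-centred, and eigendecomposed, so that a linear analysis in the implicit feature space captures nonlinear structure. The synthetic data and kernel parameter are illustrative assumptions; this is not the authors' implementation of kernel PCA or kernel MAF.

```python
# Hedged sketch: kernel PCA with an RBF kernel, built directly from the Gram matrix.
# The circular toy data and gamma value are illustrative assumptions, not the paper's data.
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_pca(X, n_components=2, gamma=5.0):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    one_n = np.ones((n, n)) / n
    # Double-centre the Gram matrix: this is centring in the implicit feature space.
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Scores of the training points on the leading kernel principal components.
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.05, (200, 2))

scores = kernel_pca(X, n_components=2, gamma=5.0)
print("kernel PC score variances:", np.var(scores, axis=0).round(3))
```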

  8. Pathological complete response after neoadjuvant chemotherapy is an independent predictive factor irrespective of simplified breast cancer intrinsic subtypes: a landmark and two-step approach analyses from the EORTC 10994/BIG 1-00 phase III trial.

    Science.gov (United States)

    Bonnefoi, H; Litière, S; Piccart, M; MacGrogan, G; Fumoleau, P; Brain, E; Petit, T; Rouanet, P; Jassem, J; Moldovan, C; Bodmer, A; Zaman, K; Cufer, T; Campone, M; Luporsi, E; Malmström, P; Werutsky, G; Bogaerts, J; Bergh, J; Cameron, D A

    2014-06-01

    Pathological complete response (pCR) following chemotherapy is strongly associated with both breast cancer subtype and long-term survival. Within a phase III neoadjuvant chemotherapy trial, we sought to determine whether the prognostic implications of pCR, TP53 status and treatment arm (taxane versus non-taxane) differed between intrinsic subtypes. Patients were randomized to receive either six cycles of anthracycline-based chemotherapy or three cycles of docetaxel then three cycles of epirubicin/docetaxel (T-ET). pCR was defined as no evidence of residual invasive cancer (or very few scattered tumour cells) in primary tumour and lymph nodes. We used a simplified intrinsic subtypes classification, as suggested by the 2011 St Gallen consensus. Interactions between pCR, TP53 status, treatment arm and intrinsic subtype on event-free survival (EFS), distant metastasis-free survival (DMFS) and overall survival (OS) were studied using a landmark and a two-step approach multivariate analyses. Sufficient data for pCR analyses were available in 1212 (65%) of 1856 patients randomized. pCR occurred in 222 of 1212 (18%) patients: 37 of 496 (7.5%) luminal A, 22 of 147 (15%) luminal B/HER2 negative, 51 of 230 (22%) luminal B/HER2 positive, 43 of 118 (36%) HER2 positive/non-luminal, 69 of 221 (31%) triple negative (TN). The prognostic effect of pCR on EFS did not differ between subtypes and was an independent predictor for better EFS [hazard ratio (HR) = 0.40, P analysis. EORTC 10994/BIG 1-00 Trial registration number NCT00017095. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  9. Multivariate differential analyses of adolescents' experiences of ...

    African Journals Online (AJOL)

    Aggression is reasoned to be dependent on aspects such as self-concept, moral reasoning, communication, frustration tolerance and family relationships. To analyse the data from questionnaires of 101 families (95 adolescents, 95 mothers and 91 fathers) Cronbach Alpha, various consecutive first and second order factor ...

  10. Time series analyses of hydrological parameter variations and their correlations at a coastal area in Busan, South Korea

    Science.gov (United States)

    Chung, Sang Yong; Senapathi, Venkatramanan; Sekar, Selvam; Kim, Tae Hyung

    2018-02-01

    Monitoring and time-series analysis of the hydrological parameters electrical conductivity (EC), water pressure, precipitation and tide were carried out, to understand the characteristics of the parameter variations and their correlations at a coastal area in Busan, South Korea. The monitoring data were collected at a sharp interface between freshwater and saline water at the depth of 25 m below ground. Two well-logging profiles showed that seawater intrusion has largely expanded (progressed inland), and has greatly affected the groundwater quality in a coastal aquifer of tuffaceous sedimentary rock over a 9-year period. According to the time series analyses, the periodograms of the hydrological parameters present very similar trends to the power spectral densities (PSD) of the hydrological parameters. Autocorrelation functions (ACF) and partial autocorrelation functions (PACF) of the hydrological parameters were produced to evaluate their self-correlations. The ACFs of all hydrologic parameters showed very good correlation over the entire time lag, but the PACF revealed that the correlations were good only at time lag 1. Cross-correlation functions (CCF) were used to evaluate the correlations between the hydrological parameters and the characteristics of seawater intrusion in the coastal aquifer system. The CCFs showed that EC had a close relationship with water pressure and precipitation rather than tide. The CCFs of water pressure with tide and precipitation were in inverse proportion, and the CCF of water pressure with precipitation was larger than that with tide.
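
    The sketch below shows how such ACF, PACF and cross-correlation estimates are commonly obtained for two monitored series (here synthetic stand-ins for tide and EC) using statsmodels; it is not the authors' processing chain.

```python
# Hedged sketch: ACF, PACF and cross-correlation estimates for two monitored series, analogous
# to the EC / tide analysis above. The hourly series are synthetic stand-ins, not the Busan data.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf, ccf

rng = np.random.default_rng(1)
n = 500
tide = np.sin(2 * np.pi * np.arange(n) / 12.4) + rng.normal(0, 0.1, n)   # semi-diurnal signal
ec = 0.6 * np.roll(tide, 3) + rng.normal(0, 0.2, n)                      # EC lagging tide by 3 steps

print("ACF of EC (lags 0-5):      ", np.round(acf(ec, nlags=5), 2))
print("PACF of EC (lags 0-5):     ", np.round(pacf(ec, nlags=5), 2))
print("CCF of EC with tide (0-5): ", np.round(ccf(ec, tide)[:6], 2))     # peaks near lag 3
```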

  11. Structural modelling and phylogenetic analyses of PgeIF4A2 (Eukaryotic translation initiation factor) from Pennisetum glaucum reveal signature motifs with a role in stress tolerance and development.

    Science.gov (United States)

    Agarwal, Aakrati; Mudgil, Yashwanti; Pandey, Saurabh; Fartyal, Dhirendra; Reddy, Malireddy K

    2016-01-01

    Eukaryotic translation initiation factor 4A (eIF4A) is an indispensable component of the translation machinery and also plays a role in developmental processes and stress alleviation in plants and animals. Different eIF4A isoforms are present in the cytosol of the cell, namely, eIF4A1, eIF4A2, and eIF4A3 and their expression is tightly regulated in cap-dependent translation. We revealed the structural model of PgeIF4A2 protein using the crystal structure of Homo sapiens eIF4A3 (PDB ID: 2J0S) as template by Modeller 9.12. The resultant PgeIF4A2 model structure was refined by PROCHECK, ProSA, Verify3D and RMSD that showed the model structure is reliable with 77 % amino acid sequence identity with template. Investigation revealed two conserved signatures for ATP-dependent RNA Helicase DEAD-box conserved site (VLDEADEML) and RNA helicase DEAD-box type, Q-motif in sheet-turn-helix and α-helical region respectively. All these conserved motifs are responsible for response during developmental stages and stress tolerance in plants.

  12. Implementation of quality by design principles in the development of microsponges as drug delivery carriers: Identification and optimization of critical factors using multivariate statistical analyses and design of experiments studies.

    Science.gov (United States)

    Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija

    2015-07-15

    Microsponges drug delivery system (MDDC) was prepared by double emulsion-solvent-diffusion technique using rotor-stator homogenization. Quality by design (QbD) concept was implemented for the development of MDDC with potential to be incorporated into semisolid dosage form (gel). Quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified, accordingly. Critical material attributes (CMA) and Critical process parameters (CPP) were identified using quality risk management (QRM) tool, failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis along with literature data, product and process knowledge and understanding. FMECA identified amount of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and water ratio in primary/multiple emulsions as CMA and rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between identified CPP and particle size as CQA was described in the design space using design of experiments - one-factor response surface method. Obtained results from statistically designed experiments enabled establishment of mathematical models and equations that were used for detailed characterization of influence of identified CPP upon MDDC particle size and particle size distribution and their subsequent optimization. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Factors Influencing Goal Attainment in Patients with Post-Stroke Upper Limb Spasticity Following Treatment with Botulinum Toxin A in Real-Life Clinical Practice: Sub-Analyses from the Upper Limb International Spasticity (ULIS-II Study

    Directory of Open Access Journals (Sweden)

    Klemens Fheodoroff

    2015-04-01

    Full Text Available In this post-hoc analysis of the ULIS-II study, we investigated factors influencing person-centred goal setting and achievement following botulinum toxin-A (BoNT-A) treatment in 456 adults with post-stroke upper limb spasticity (ULS). Patients with primary goals categorised as passive function had greater motor impairment (p < 0.001), contractures (soft tissue shortening [STS]) (p = 0.006) and spasticity (p = 0.02) than those setting other goal types. Patients with goals categorised as active function had less motor impairment (p < 0.0001), contracture (p < 0.0001), spasticity (p < 0.001) and shorter time since stroke (p = 0.001). Patients setting goals for pain were older (p = 0.01) with more contractures (p = 0.008). The proportion of patients achieving their primary goal was not impacted by timing of first-ever BoNT-A injection (medium-term (≤1 year) vs. longer-term (>1 year) post-stroke; 80.0% vs. 79.2%) or presence or absence of severe contractures (76.7% vs. 80.6%), although goal types differed. Earlier BoNT-A intervention was associated with greater achievement of active function goals. Severe contractures impacted negatively on goal achievement except in pain and passive function. Goal setting by patients with ULS is influenced by impairment severity, age and time since stroke. Our findings resonate with clinical experience and may assist patients and clinicians in selecting realistic, achievable goals for treatment.

  14. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
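
    For a scalar parameter, the ESS idea referred to above can be sketched as follows: the nominal sample size is divided by the integrated autocorrelation time of the MCMC trace. The AR(1) chain and the simple positive-lag truncation below are illustrative; this is not the tree-topology ESS estimator developed in the paper.

```python
# Hedged sketch: effective sample size (ESS) of a scalar MCMC trace from its autocorrelation,
# using a simple positive-lag truncation of the integrated autocorrelation time. The AR(1)
# chain is synthetic; this is not the tree-topology ESS estimator developed in the paper.
import numpy as np

def effective_sample_size(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    acov = np.correlate(x, x, mode="full")[n - 1:] / n   # autocovariance at lags 0..n-1
    rho = acov / acov[0]
    tau = 1.0
    for k in range(1, n):                                # sum autocorrelations while positive
        if rho[k] < 0:
            break
        tau += 2.0 * rho[k]
    return n / tau

rng = np.random.default_rng(0)
chain = np.zeros(5000)
for t in range(1, len(chain)):                           # strongly autocorrelated AR(1) chain
    chain[t] = 0.95 * chain[t - 1] + rng.normal()
print(f"nominal samples: {len(chain)},  ESS ≈ {effective_sample_size(chain):.0f}")
```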

  15. Application of spatial and non-spatial data analysis in determination of the factors that impact municipal solid waste generation rates in Turkey

    International Nuclear Information System (INIS)

    Keser, Saniye; Duzgun, Sebnem; Aksoy, Aysegul

    2012-01-01

    Highlights: ► Spatial autocorrelation exists in municipal solid waste generation rates for different provinces in Turkey. ► Traditional non-spatial regression models may not provide sufficient information for better solid waste management. ► Unemployment rate is a global variable that significantly impacts the waste generation rates in Turkey. ► Significances of global parameters may diminish at local scale for some provinces. ► GWR model can be used to create clusters of cities for solid waste management. - Abstract: In studies focusing on the factors that impact solid waste generation habits and rates, the potential spatial dependency in solid waste generation data is not considered in relating the waste generation rates to its determinants. In this study, spatial dependency is taken into account in determination of the significant socio-economic and climatic factors that may be of importance for the municipal solid waste (MSW) generation rates in different provinces of Turkey. Simultaneous spatial autoregression (SAR) and geographically weighted regression (GWR) models are used for the spatial data analyses. Similar to ordinary least squares regression (OLSR), regression coefficients are global in SAR model. In other words, the effect of a given independent variable on a dependent variable is valid for the whole country. Unlike OLSR or SAR, GWR reveals the local impact of a given factor (or independent variable) on the waste generation rates of different provinces. Results show that provinces within closer neighborhoods have similar MSW generation rates. On the other hand, this spatial autocorrelation is not very high for the explanatory variables considered in the study. OLSR and SAR models have similar regression coefficients. GWR is useful to indicate the local determinants of MSW generation rates. GWR model can be utilized to plan waste management activities at local scale including waste minimization, collection, treatment, and disposal. At global
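
    To make the notion of spatial autocorrelation in province-level rates concrete, the sketch below computes global Moran's I for a toy contiguity structure; Moran's I is not named in the abstract, and the weights and rates are invented.

```python
# Hedged sketch: global Moran's I as a measure of spatial autocorrelation in province-level
# rates. Moran's I is not named in the abstract; the weights and rates below are invented.
import numpy as np

def morans_i(values, W):
    """Moran's I for values y and a binary contiguity weight matrix W."""
    y = np.asarray(values, dtype=float)
    z = y - y.mean()
    n = len(y)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Five hypothetical provinces along a line; neighbours share a border.
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
rates = [1.1, 1.2, 1.3, 0.7, 0.6]   # MSW generation (kg/capita/day), spatially clustered
print(f"Moran's I = {morans_i(rates, W):+.2f}  (positive: neighbouring provinces are similar)")
```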

  16. Skin barrier and contact allergy: Genetic risk factor analyses

    DEFF Research Database (Denmark)

    Ross-Hansen, Katrine

    2013-01-01

    allergy. Objectives To evaluate the effect of specific gene polymorphisms on the risk of developing contact allergy by a candidate gene approach. These included polymorphisms in the glutathione S-transferase genes (GSTM1, -T1 and -P1 variants), the claudin-1 gene (CLDN1), and the filaggrin gene (FLG......) in particular. Methods Epidemiological genetic association studies were performed on a general Danish population. Participants were patch tested, answered a questionnaire on general health and were genotyped for GST, CLDN1 and FLG polymorphisms. Filaggrin’s nickel binding potential was evaluated biochemically...

  17. Epidermal growth factor receptor analyses in colorectal cancer

    DEFF Research Database (Denmark)

    Spindler, Karen-Lise Garm; Lindebjerg, Jan; Nielsen, Jens Nederby

    2006-01-01

    EGFR immunohistochemistry (IHC) status is not a reliable predictive marker for response to EGFR-targeted therapies. The present study compares the EGFR status at DNA, RNA and protein level. Blood samples, corresponding normal colon and colorectal cancer tissue were collected from 199 colorectal...... equivalent EGFR status (28/34). There was a tendency to higher median protein level (by ELISA) in IHC positive patients compared to IHC negative patients (p=0.086). The median EGFR gene expression level was significantly lower in tumours than in the normal colon with no difference according to IHC status....... No tumours had increased gene copy number by FISH. EGFR Sp1-216 polymorphism analysis showed a tendency for different EGFR tumour protein levels and gene expression levels according to the different genotypes. The results show a poor correlation between EGFR status at DNA, RNA and protein level...

  18. Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants

    Directory of Open Access Journals (Sweden)

    Yuko eYamashita

    2013-02-01

    Full Text Available The purpose of this study was to explore developmental changes, in terms of spectral fluctuations and temporal periodicity with Japanese- and English-learning infants. Three age groups (15, 20, and 24 months) were selected, because infants diversify phonetic inventories with age. Natural speech of the infants was recorded. We utilized a critical-band-filter bank, which simulated the frequency resolution in adults’ auditory periphery. First, the correlations between the critical-band outputs represented by factor analysis were observed in order to see how the critical bands should be connected to each other, if a listener is to differentiate sounds in infants’ speech. In the following analysis, we analyzed the temporal fluctuations of factor scores by calculating autocorrelations. The present analysis identified three factors observed in adult speech at 24 months of age in both linguistic environments. These three factors were shifted to a higher frequency range corresponding to the smaller vocal tract size of the infants. The results suggest that the vocal tract structures of the infants had developed to become adult-like configuration by 24 months of age in both language environments. The amount of utterances with periodic nature of shorter time increased with age in both environments. This trend was clearer in the Japanese environment.

  19. Descriptive Analyses of Mechanical Systems

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    2003-01-01

    Foreword: Product analysis and technology analysis can be carried out with a broad socio-technical aim, with a view to understanding cultural, sociological, design-related, business-related and many other aspects. One sub-area of this is the systemic analysis and description of products and systems. The present compendium...

  20. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Full Text Available Encodings or the proof of their absence are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exists a bunch of different criteria and different variants of criteria in order to reason in different settings. This leads to incomparable results. Moreover it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit to this setting. We show how to formally reason about and compare encodability criteria by mapping them on requirements on a relation between source and target terms that is induced by the encoding function. In particular we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.

  1. Analysing Children's Drawings: Applied Imagination

    Science.gov (United States)

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  2. Impact analyses after pipe rupture

    International Nuclear Information System (INIS)

    Chun, R.C.; Chuang, T.Y.

    1983-01-01

    Two of the French pipe whip experiments are reproduced with the computer code WIPS. The WIPS results are in good agreement with the experimental data and the French computer code TEDEL. This justifies the use of its pipe element in conjunction with its U-bar element in a simplified method of impact analyses

  3. Millifluidic droplet analyser for microbiology

    NARCIS (Netherlands)

    Baraban, L.; Bertholle, F.; Salverda, M.L.M.; Bremond, N.; Panizza, P.; Baudry, J.; Visser, de J.A.G.M.; Bibette, J.

    2011-01-01

    We present a novel millifluidic droplet analyser (MDA) for precisely monitoring the dynamics of microbial populations over multiple generations in numerous (≈10^3) aqueous emulsion droplets (100 nL). As a first application, we measure the growth rate of a bacterial strain and determine the minimal

  4. Analyser of sweeping electron beam

    International Nuclear Information System (INIS)

    Strasser, A.

    1993-01-01

    The electron beam analyser has an array of conductors that can be positioned in the field of the sweeping beam, an electronic signal treatment system for the analysis of the signals generated in the conductors by the incident electrons and a display for the different characteristics of the electron beam

  5. On the use of the autocorrelation and covariance methods for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1994-01-01

    It is desired to design a predictive feedforward transverse jitter control system to control both angle and position jitter in pulsed linear accelerators. Such a system will increase the accuracy and bandwidth of correction over that of currently available feedback correction systems. Intrapulse correction is performed. An offline process actually ''learns'' the properties of the jitter, and uses these properties to apply correction to the beam. The correction weights calculated offline are downloaded to a real-time analog correction system between macropulses. Jitter data were taken at the Los Alamos National Laboratory (LANL) Ground Test Accelerator (GTA) telescope experiment at Argonne National Laboratory (ANL). The experiment consisted of the LANL telescope connected to the ANL ZGS proton source and linac. A simulation of the correction system using this data was shown to decrease the average rms jitter by a factor of two over that of a comparable standard feedback correction system. The system also improved the correction bandwidth
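
    The sketch below illustrates the offline "learning" step in such a feedforward scheme: correction weights are fitted by least squares to past pulse-to-pulse jitter and then used to predict (and subtract) the jitter of new pulses. The AR(2)-like jitter model and tap count are assumptions, not the LANL/GTA system.

```python
# Hedged sketch: the offline "learning" step of a feedforward jitter correction scheme.
# Least-squares weights predict the next pulse's jitter from the last few pulses; the AR(2)-like
# jitter model and the number of taps are assumptions, not the LANL/GTA system described above.
import numpy as np

rng = np.random.default_rng(2)
n_pulses, n_taps = 2000, 4
jitter = np.zeros(n_pulses)
for t in range(2, n_pulses):                       # correlated pulse-to-pulse jitter (synthetic)
    jitter[t] = 1.2 * jitter[t - 1] - 0.5 * jitter[t - 2] + rng.normal(0, 0.1)

# Fit correction weights offline by least squares on past pulses.
X = np.column_stack([jitter[k:n_pulses - n_taps + k] for k in range(n_taps)])
y = jitter[n_taps:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

residual = y - X @ w                               # jitter remaining after feedforward correction
print(f"rms jitter before: {y.std():.3f}   after correction: {residual.std():.3f}")
```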

  6. On the use of the autocorrelation and covariance methods for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1993-01-01

    It is desired to design a predictive feedforward transverse jitter control system to control both angle and position jitter in pulsed linear accelerators. Such a system will increase the accuracy and bandwidth of correction over that of currently available feedback correction systems. Intrapulse correction is performed. An offline process actually ''learns'' the properties of the jitter, and uses these properties to apply correction to the beam. The correction weights calculated offline are downloaded to a real-time analog correction system between macropulses. Jitter data were taken at the Los Alamos National Laboratory (LANL) Ground Test Accelerator (GTA) telescope experiment at Argonne National Laboratory (ANL). The experiment consisted of the LANL telescope connected to the ANL ZGS proton source and linac. A simulation of the correction system using this data was shown to decrease the average rms jitter by a factor of two over that of a comparable standard feedback correction system. The system also improved the correction bandwidth

  7. Spatial Analyses of Harappan Urban Settlements

    Directory of Open Access Journals (Sweden)

    Hirofumi Teramura

    2006-12-01

    Full Text Available The Harappan Civilization occupies a unique place among the early civilizations of the world with its well planned urban settlements, advanced handicraft and technology, religious and trade activities. Using a Geographical Information System (GIS), this study presents spatial analyses that locate urban settlements on a digital elevation model (DEM) according to the three phases of early, mature and late. Understanding the relationship between the spatial distribution of Harappan sites and the change in some factors, such as topographic features, river passages or sea level changes, will lead to an understanding of the dynamism of this civilization. It will also afford a glimpse of the factors behind the formation, development, and decline of the Harappan Civilization.

  8. Abundance analyses of thirty cool carbon stars

    International Nuclear Information System (INIS)

    Utsumi, Kazuhiko

    1985-01-01

    The results were previously obtained by use of the absolute gf-values and the cosmic abundance as a standard. These gf-values were found to contain large systematic errors, and as a result, the solar photospheric abundances were revised. Our previous results, therefore, must be revised by using new gf-values, and abundance analyses are extended for as many carbon stars as possible. In conclusion, in normal cool carbon stars heavy metals are overabundant by factors of 10 - 100 and rare-earth elements are overabundant by a factor of about 10, and in J-type cool carbon stars, the 12C/13C ratio is smaller, the C2 and CN bands and the Li 6708 line are stronger than in normal cool carbon stars, and the abundances of s-process elements with respect to Fe are nearly normal. (Mori, K.)

  9. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, no matter whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with a workload analysis of a largely manual assembly technology for a roller-bearing assembly process, carried out in a large company with integrated bearing manufacturing processes. In this analysis the delay sampling technique was used to identify and divide all of the bearing assemblers' activities, to obtain information on how the 480 minutes of daily working time are allotted to each activity. The study shows some ways to increase process productivity without supplementary investment and also indicates that process automation could be the solution for achieving maximum productivity.

  10. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L. A.; Gilbert, Tom; Hofreiter, Michael

    2013-01-01

    The analysis of ancient DNA is playing an increasingly important role in conservation genetic, phylogenetic and population genetic analyses, as it allows incorporating extinct species into DNA sequence trees and adds time depth to population genetics studies. For many years, these types of DNA...... analyses (whether using modern or ancient DNA) were largely restricted to the analysis of short fragments of the mitochondrial genome. However, due to many technological advances during the past decade, a growing number of studies have explored the power of complete mitochondrial genome sequences...... yielded major progress with regard to both the phylogenetic positions of extinct species, as well as resolving population genetics questions in both extinct and extant species....

  11. Recriticality analyses for CAPRA cores

    International Nuclear Information System (INIS)

    Maschek, W.; Thiem, D.

    1995-01-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  12. Recriticality analyses for CAPRA cores

    Energy Technology Data Exchange (ETDEWEB)

    Maschek, W.; Thiem, D.

    1995-08-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  13. Technical center for transportation analyses

    International Nuclear Information System (INIS)

    Foley, J.T.

    1978-01-01

    A description is presented of an information search/retrieval/research activity of Sandia Laboratories which provides technical environmental information which may be used in transportation risk analyses, environmental impact statements, development of design and test criteria for packaging of energy materials, and transportation mode research studies. General activities described are: (1) history of center development; (2) environmental information storage/retrieval system; (3) information searches; (4) data needs identification; and (5) field data acquisition system and applications

  14. Methodology of cost benefit analyses

    International Nuclear Information System (INIS)

    Patrik, M.; Babic, P.

    2000-10-01

    The report addresses financial aspects of proposed investments and other steps which are intended to contribute to nuclear safety. The aim is to provide introductory insight into the procedures and potential of cost-benefit analyses as a routine guide when making decisions on costly provisions as one of the tools to assess whether a particular provision is reasonable. The topic is applied to the nuclear power sector. (P.A.)

  15. Determination of the Projected Atomic Potential by Deconvolution of the Auto-Correlation Function of TEM Electron Nano-Diffraction Patterns

    Directory of Open Access Journals (Sweden)

    Liberato De Caro

    2016-11-01

    Full Text Available We present a novel method to determine the projected atomic potential of a specimen directly from transmission electron microscopy coherent electron nano-diffraction patterns, overcoming common limitations encountered so far due to the dynamical nature of electron-matter interaction. The projected potential is obtained by deconvolution of the inverse Fourier transform of experimental diffraction patterns rescaled in intensity by using theoretical values of the kinematical atomic scattering factors. This novelty enables the compensation of dynamical effects typical of transmission electron microscopy (TEM experiments on standard specimens with thicknesses up to a few tens of nm. The projected atomic potentials so obtained are averaged on sample regions illuminated by nano-sized electron probes and are in good quantitative agreement with theoretical expectations. Contrary to lens-based microscopy, here the spatial resolution in the retrieved projected atomic potential profiles is related to the finer lattice spacing measured in the electron diffraction pattern. The method has been successfully applied to experimental nano-diffraction data of crystalline centrosymmetric and non-centrosymmetric specimens achieving a resolution of 65 pm.

  16. Multitrait-Multimethod Analyses of Two Self-Concept Instruments.

    Science.gov (United States)

    Marsh, Herbert W.; Smith, Ian D.

    1982-01-01

    The multidimensionality of self-concept and the use of factor analysis in the development of self-concept instruments are supported in multitrait-multimethod analyses of the Sears and Coopersmith instruments. Convergent validity and discriminate validity of subscales in factor analysis and multitrait-multimethod analysis of longitudinal data are…

  17. On-site processing systems for determination of the phase velocity of Rayleigh waves in microtremors using the spatial autocorrelation method; Kukan jiko sokanho wo mochiita bidochu no Rayleigh ha iso sokudo no genba kettei system

    Energy Technology Data Exchange (ETDEWEB)

    Matsuoka, T; Umezawa, N [Saitama Institute of Environmental Pollution, Saitama (Japan)

    1996-05-01

    To render the spatial autocorrelation (SAC) method easier to use, a system has been constructed that can be used with ease on the site for the calculation of phase velocities. This system can perform two observation methods of the same frequency characteristics, that is, the simultaneous multi-point observation and one-point independent observation. The pickup is a velocity type seismograph of a natural period of 1 second that has been so electrically adjusted as to work on an apparent natural period of 7 seconds. Among the frequency characteristics, those related to phase are regarded as important because the SAC method is based on the measurement of coherence between two points. The analysis software runs on a waveform processing software DADiSP/WIN designed for personal computers. To know the operability of this system on the site and to accumulate records using the SAC method, observations were made at the depth of 100-500m at 6 locations in Saitama Prefecture where the underground structure was known thanks to prior PS logging. As the result, a dispersion curve was obtained by use of an array of appropriate dimensions at every location agreeing with the underground structure. 9 refs., 10 figs.
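
    A minimal sketch of the underlying phase-velocity estimation is given below, assuming the standard SPAC relation in which the azimuthally averaged coherence between stations a distance r apart follows J0(2πfr/c(f)); the synthetic coefficients and grid-search fit are illustrative, not the on-site software described above.

```python
# Hedged sketch: estimating Rayleigh-wave phase velocity from SPAC coefficients by fitting the
# standard relation rho(f, r) = J0(2*pi*f*r / c(f)). Synthetic data; not the on-site system above.
import numpy as np
from scipy.special import j0

r = 30.0                                    # station separation of the array (m), assumed
freqs = np.linspace(0.5, 5.0, 10)           # analysis frequencies (Hz)
true_c = 400.0 + 200.0 / freqs              # dispersive "true" phase velocity (m/s), assumed
rng = np.random.default_rng(3)
rho_obs = j0(2 * np.pi * freqs * r / true_c) + rng.normal(0, 0.02, freqs.size)

# Grid search for the phase velocity reproducing the observed SPAC coefficient at each frequency.
c_grid = np.linspace(200.0, 1500.0, 2000)
for f, rho in zip(freqs, rho_obs):
    misfit = np.abs(j0(2 * np.pi * f * r / c_grid) - rho)
    c_hat = c_grid[np.argmin(misfit)]
    print(f"f = {f:.2f} Hz   c_true = {400.0 + 200.0 / f:6.1f}   c_est ≈ {c_hat:6.1f} m/s")
```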

  18. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    In 2001 the activity in the field of safety analyses was focused on verification of the safety analyses reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel and probabilistic safety assessment study for NPP Mochovce. The calculation safety analyses were performed and expert reviews for the internal UJD needs were elaborated. An important part of work was performed also in solving of scientific and technical tasks appointed within bilateral projects of co-operation between UJD and its international partnership organisations as well as within international projects ordered and financed by the European Commission. All these activities served as an independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. A special attention was paid to a review of probabilistic safety assessment study of level 1 for NPP Mochovce. The probabilistic safety analysis of NPP related to the full power operation was elaborated in the study and a contribution of the technical and operational improvements to the risk decreasing was quantified. A core damage frequency of the reactor was calculated and the dominant initiating events and accident sequences with the major contribution to the risk were determined. The target of the review was to determine the acceptance of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a real picture of the NPP. The review of the study was performed in co-operation of UJD with the IAEA (IPSART mission) as well as with other external organisations, which were not involved in the elaboration of the reviewed document and probabilistic model of NPP. The review was made in accordance with the IAEA guidelines and methodical documents of UJD and US NRC. In the field of calculation safety analyses the UJD activity was focused on the analysis of an operational event, analyses of the selected accident scenarios

  19. Analysing the Wrongness of Killing

    DEFF Research Database (Denmark)

    Di Nucci, Ezio

    2014-01-01

    This article provides an in-depth analysis of the wrongness of killing by comparing different versions of three influential views: the traditional view that killing is always wrong; the liberal view that killing is wrong if and only if the victim does not want to be killed; and Don Marquis' future...... of value account of the wrongness of killing. In particular, I illustrate the advantages that a basic version of the liberal view and a basic version of the future of value account have over competing alternatives. Still, ultimately none of the views analysed here are satisfactory; but the different...

  20. Theorising and Analysing Academic Labour

    Directory of Open Access Journals (Sweden)

    Thomas Allmer

    2018-01-01

    Full Text Available The aim of this article is to contextualise universities historically within capitalism and to analyse academic labour and the deployment of digital media theoretically and critically. It argues that the post-war expansion of the university can be considered as medium and outcome of informational capitalism and as a dialectical development of social achievement and advanced commodification. The article strives to identify the class position of academic workers, introduces the distinction between academic work and labour, discusses the connection between academic, information and cultural work, and suggests a broad definition of university labour. It presents a theoretical model of working conditions that helps to systematically analyse the academic labour process and to provide an overview of working conditions at universities. The paper furthermore argues for the need to consider the development of education technologies as a dialectics of continuity and discontinuity, discusses the changing nature of the forces and relations of production, and the impact on the working conditions of academics in the digital university. Based on Erik Olin Wright’s inclusive approach of social transformation, the article concludes with the need to bring together anarchist, social democratic and revolutionary strategies for establishing a socialist university in a commons-based information society.

  1. CFD analyses in regulatory practice

    International Nuclear Information System (INIS)

    Bloemeling, F.; Pandazis, P.; Schaffrath, A.

    2012-01-01

    Numerical software is used in nuclear regulatory procedures for many problems in the fields of neutron physics, structural mechanics, thermal hydraulics etc. Among other things, the software is employed in dimensioning and designing systems and components and in simulating transients and accidents. In nuclear technology, analyses of this kind must meet strict requirements. Computational Fluid Dynamics (CFD) codes were developed for computing multidimensional flow processes of the type occurring in reactor cooling systems or in containments. Extensive experience has been accumulated by now in selected single-phase flow phenomena. At the present time, there is a need for development and validation with respect to the simulation of multi-phase and multi-component flows. As insufficient input by the user can lead to faulty results, the validity of the results and an assessment of uncertainties are guaranteed only through consistent application of so-called Best Practice Guidelines. The authors present the possibilities now available to CFD analyses in nuclear regulatory practice. This includes a discussion of the fundamental requirements to be met by numerical software, especially the demands upon computational analysis made by nuclear rules and regulations. In conclusion, 2 examples are presented of applications of CFD analysis to nuclear problems: Determining deboration in the condenser reflux mode of operation, and protection of the reactor pressure vessel (RPV) against brittle failure. (orig.)

  2. Severe accident recriticality analyses (SARA)

    DEFF Research Database (Denmark)

    Frid, W.; Højerup, C.F.; Lindholm, I.

    2001-01-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies...... with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality-both super-prompt power bursts and quasi steady-state power......, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g(-1), was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s(-1). In most cases, however, the predicted energy deposition was smaller, below...

  3. Hydrogen Analyses in the EPR

    International Nuclear Information System (INIS)

    Worapittayaporn, S.; Eyink, J.; Movahed, M.

    2008-01-01

    In severe accidents with core melting large amounts of hydrogen may be released into the containment. The EPR provides a combustible gas control system to prevent hydrogen combustion modes with the potential to challenge the containment integrity due to excessive pressure and temperature loads. This paper outlines the approach for the verification of the effectiveness and efficiency of this system. Specifically, the justification is a multi-step approach. It involves the deployment of integral codes, lumped parameter containment codes and CFD codes and the use of the sigma criterion, which provides the link to the broad experimental data base for flame acceleration (FA) and deflagration to detonation transition (DDT). The procedure is illustrated with an example. The performed analyses show that hydrogen combustion at any time does not lead to pressure or temperature loads that threaten the containment integrity of the EPR. (authors)

  4. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project

  5. The hemispherical deflector analyser revisited

    Energy Technology Data Exchange (ETDEWEB)

    Benis, E.P. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece)], E-mail: benis@iesl.forth.gr; Zouros, T.J.M. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece); Department of Physics, University of Crete, P.O. Box 2208, 71003 Heraklion, Crete (Greece)

    2008-04-15

    Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R{sub 0} and the nominal value of the potential V(R{sub 0}) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD.

  6. The hemispherical deflector analyser revisited

    International Nuclear Information System (INIS)

    Benis, E.P.; Zouros, T.J.M.

    2008-01-01

    Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R 0 and the nominal value of the potential V(R 0 ) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD

  7. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial...... standards used for implementing the service-oriented applications. By doing so, we will be able to not only reason about applications at different levels of abstractions, but also to build a bridge between the views of researchers on formal methods and developers in industry. We apply our approach...... to the financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols....

  8. Analysing performance through value creation

    Directory of Open Access Journals (Sweden)

    Adrian TRIFAN

    2015-12-01

    Full Text Available This paper draws a parallel between measuring financial performance in 2 variants: the first one using data offered by accounting, which lays emphasis on maximizing profit, and the second one which aims to create value. The traditional approach to performance is based on some indicators from accounting data: ROI, ROE, EPS. The traditional management, based on analysing the data from accounting, has shown its limits, and a new approach is needed, based on creating value. The evaluation of value based performance tries to avoid the errors due to accounting data, by using other specific indicators: EVA, MVA, TSR, CVA. The main objective is shifted from maximizing the income to maximizing the value created for shareholders. The theoretical part is accompanied by a practical analysis regarding the creation of value and an analysis of the main indicators which evaluate this concept.

  9. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....
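
    As an illustration of the kernel idea sketched above (not the author's implementation of kernel MAF), the following minimal Python sketch performs kernel PCA with a Gaussian (RBF) kernel: the samples are mapped implicitly into feature space through the kernel matrix, the matrix is centred, and a linear eigen-analysis is carried out there. The kernel width sigma is the parameter whose choice the abstract discusses; the data below are random stand-ins for the 41-element geochemistry samples.

      import numpy as np

      def rbf_kernel(X, sigma):
          # Gaussian kernel matrix from pairwise squared Euclidean distances
          sq = np.sum(X ** 2, axis=1)
          d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def kernel_pca(X, sigma, n_components=3):
          # Centre the kernel matrix in feature space and do a linear eigen-analysis there
          n = X.shape[0]
          K = rbf_kernel(X, sigma)
          one = np.ones((n, n)) / n
          Kc = K - one @ K - K @ one + one @ K @ one
          vals, vecs = np.linalg.eigh(Kc)
          idx = np.argsort(vals)[::-1][:n_components]
          vals, vecs = vals[idx], vecs[:, idx]
          alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))   # unit-variance projections
          return Kc @ alphas

      # Random stand-in for 41-element geochemistry samples (hypothetical values)
      X = np.random.default_rng(0).normal(size=(200, 41))
      scores = kernel_pca(X, sigma=5.0)                      # sigma is the kernel width
      print(scores.shape)                                    # (200, 3)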

  10. Analysis of sociocultural factors and the onset of musculoskeletal disorders: the case of seamstresses in Tunisia

    Directory of Open Access Journals (Sweden)

    Raouf Ghram

    2010-05-01

    Full Text Available The current context of globalization sometimes involves work relocation and internationalization, which can lead to work situations unfavourable to the preservation of workers' health. Indeed, technology transfers require an understanding of the contextual determining factors that can influence work situations. Here we find in particular the sociocultural specificities that we addressed in this study. Taking them into consideration provided us with a better understanding of the work situations of operators in the clothing industry in Tunisia. In fact, we note the value of taking these aspects into account in order to design appropriate work situations that respect workers' health.

  11. DEPUTY: analysing architectural structures and checking style

    International Nuclear Information System (INIS)

    Gorshkov, D.; Kochelev, S.; Kotegov, S.; Pavlov, I.; Pravilnikov, V.; Wellisch, J.P.

    2001-01-01

    The DepUty (dependencies utility) can be classified as a project and process management tool. The main goal of DepUty is to assist, by means of source code analysis and graphical representation using UML, in understanding dependencies of sub-systems and packages in CMS Object Oriented software, to understand architectural structure, and to schedule code release in modularised integration. It also allows a newcomer to more easily understand the global structure of CMS software, and to avoid circular dependencies up-front or re-factor the code, in case it was already too close to the edge of non-maintainability. The authors will discuss the various views DepUty provides to analyse package dependencies, and illustrate both the metrics and style checking facilities it provides.

  12. Seismic analyses of structures. 1st draft

    International Nuclear Information System (INIS)

    David, M.

    1995-01-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of Paks NPP. The aim of the analysis was to determine the floor response spectra as the response to seismic input. This analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: 3-dimensional finite element model; basic assumptions of the dynamic analyses; table of frequencies and included factors; modal masses for all modes; floor response spectra in all the selected nodes, with figures of the indicated nodes and important modes of free vibration

  13. Analysing Terrorism from a Systems Thinking Perspective

    Directory of Open Access Journals (Sweden)

    Lukas Schoenenberger

    2014-02-01

    Full Text Available Given the complexity of terrorism, solutions based on single factors are destined to fail. Systems thinking offers various tools for helping researchers and policy makers comprehend terrorism in its entirety. We have developed a semi-quantitative systems thinking approach for characterising relationships between variables critical to terrorism and their impact on the system as a whole. For a better understanding of the mechanisms underlying terrorism, we present a 16-variable model characterising the critical components of terrorism and perform a series of highly focused analyses. We show how to determine which variables are best suited for government intervention, describing in detail their effects on the key variable—the political influence of a terrorist network. We also offer insights into how to elicit variables that destabilise and ultimately break down these networks. Because we clarify our novel approach with fictional data, the primary importance of this paper lies in the new framework for reasoning that it provides.

  14. Seismic analyses of structures. 1st draft

    Energy Technology Data Exchange (ETDEWEB)

    David, M [David Consulting, Engineering and Design Office (Czech Republic)

    1995-07-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of Paks NPP. The aim of the analysis was to determine the floor response spectra as the response to seismic input. This analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: 3-dimensional finite element model; basic assumptions of the dynamic analyses; table of frequencies and included factors; modal masses for all modes; floor response spectra in all the selected nodes, with figures of the indicated nodes and important modes of free vibration.

  15. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside...... attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex......, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...

  16. Proteins analysed as virtual knots

    Science.gov (United States)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-02-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important.

  17. Digital image analyser for autoradiography

    International Nuclear Information System (INIS)

    Muth, R.A.; Plotnick, J.

    1985-01-01

    The most critical parameter in quantitative autoradiography for assay of tissue concentrations of tracers is the ability to obtain precise and accurate measurements of optical density of the images. Existing high precision systems for image analysis, rotating drum densitometers, are expensive, suffer from mechanical problems and are slow. More moderately priced and reliable video camera based systems are available, but their outputs generally do not have the uniformity and stability necessary for high resolution quantitative autoradiography. The authors have designed and constructed an image analyser optimized for quantitative single and multiple tracer autoradiography which the authors refer to as a memory-mapped charge-coupled device scanner (MM-CCD). The input is from a linear array of CCDs which is used to optically scan the autoradiograph. Images are digitized into 512 x 512 picture elements with 256 gray levels and the data is stored in buffer video memory in less than two seconds. Images can then be transferred to RAM memory by direct memory-mapping for further processing. Arterial blood curve data and optical density-calibrated standards data can be entered and the optical density images can be converted automatically to tracer concentration or functional images. In double tracer studies, images produced from both exposures can be stored and processed in RAM to yield ''pure'' individual tracer concentration or functional images. Any processed image can be transmitted back to the buffer memory to be viewed on a monitor and processed for region of interest analysis

  18. Autocorrelations in hybrid Monte Carlo simulations

    International Nuclear Information System (INIS)

    Schaefer, Stefan; Virotta, Francesco

    2010-11-01

    Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge; however, all observables are affected to varying degrees by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)
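
    To make concrete how slow modes inflate statistical errors, here is a minimal Python sketch (not the error-analysis method proposed in the paper) that estimates the integrated autocorrelation time of a Monte Carlo observable with a self-consistent windowing rule; the AR(1) chain stands in for a slowly decorrelating observable such as the topological charge.

      import numpy as np

      def autocorr(x):
          # Normalised autocorrelation function of a 1-D Monte Carlo time series
          x = np.asarray(x, dtype=float) - np.mean(x)
          n = len(x)
          acf = np.correlate(x, x, mode='full')[n - 1:]
          return acf / acf[0]

      def integrated_autocorr_time(x, c=5.0):
          # Self-consistent window: stop summing once W >= c * tau_int(W)
          rho = autocorr(x)
          tau = 0.5
          for W in range(1, len(rho)):
              tau = 0.5 + np.sum(rho[1:W + 1])
              if W >= c * tau:
                  break
          return tau

      # Toy AR(1) chain as a stand-in for a slow Monte Carlo observable (hypothetical)
      rng = np.random.default_rng(1)
      phi = 0.95
      x = np.zeros(20000)
      for t in range(1, len(x)):
          x[t] = phi * x[t - 1] + rng.normal()
      tau = integrated_autocorr_time(x)
      print(f"tau_int = {tau:.1f}  (theory for AR(1): {(1 + phi) / (2 * (1 - phi)):.1f})")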

  19. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teoliisuuden Voima Oy (Finland)

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B{sub 4}C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time-window unborated water from ECCS systems will start to reflood the partly control rod free core. Recriticality might take place for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both superprompt power bursts and quasi steady-state power generation - for the studied range of parameters, i. e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core the power densities are high which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  20. Severe accident recriticality analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. E-mail: wiktor.frid@ski.se; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H

    2001-11-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies, the following issues have been investigated in the SARA project: (1) the energy deposition in the fuel during super-prompt power burst; (2) the quasi steady-state reactor power following the initial power burst; and (3) containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality--both super-prompt power bursts and quasi steady-state power generation--for the range of parameters studied, i.e. with core uncovering and heat-up to maximum core temperatures of approximately 1800 K, and water flow rates of 45-2000 kg s{sup -1} injected into the downcomer. Since recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g{sup -1}, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s{sup -1}. In most cases, however, the predicted energy deposition was smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel. The highest calculated

  1. Severe accident recriticality analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H.

    2001-01-01

    Recriticality in a BWR during reflooding of an overheated partly degraded core, i.e. with relocated control rods, has been studied for a total loss of electric power accident scenario. In order to assess the impact of recriticality on reactor safety, including accident management strategies, the following issues have been investigated in the SARA project: (1) the energy deposition in the fuel during super-prompt power burst; (2) the quasi steady-state reactor power following the initial power burst; and (3) containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality--both super-prompt power bursts and quasi steady-state power generation--for the range of parameters studied, i.e. with core uncovering and heat-up to maximum core temperatures of approximately 1800 K, and water flow rates of 45-2000 kg s -1 injected into the downcomer. Since recriticality takes place in a small fraction of the core, the power densities are high, which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal g -1 , was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding rate of 2000 kg s -1 . In most cases, however, the predicted energy deposition was smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel. The highest calculated quasi steady

  2. Severe Accident Recriticality Analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Puska, E.K.; Nilsson, Lars; Sjoevall, H.

    1999-11-01

    Recriticality in a BWR has been studied for a total loss of electric power accident scenario. In a BWR, the B 4 C control rods would melt and relocate from the core before the fuel during core uncovery and heat-up. If electric power returns during this time-window unborated water from ECCS systems will start to reflood the partly control rod free core. Recriticality might take place for which the only mitigating mechanisms are the Doppler effect and void formation. In order to assess the impact of recriticality on reactor safety, including accident management measures, the following issues have been investigated in the SARA project: 1. the energy deposition in the fuel during super-prompt power burst, 2. the quasi steady-state reactor power following the initial power burst and 3. containment response to elevated quasi steady-state reactor power. The approach was to use three computer codes and to further develop and adapt them for the task. The codes were SIMULATE-3K, APROS and RECRIT. Recriticality analyses were carried out for a number of selected reflooding transients for the Oskarshamn 3 plant in Sweden with SIMULATE-3K and for the Olkiluoto 1 plant in Finland with all three codes. The core state initial and boundary conditions prior to recriticality have been studied with the severe accident codes SCDAP/RELAP5, MELCOR and MAAP4. The results of the analyses show that all three codes predict recriticality - both superprompt power bursts and quasi steady-state power generation - for the studied range of parameters, i. e. with core uncovery and heat-up to maximum core temperatures around 1800 K and water flow rates of 45 kg/s to 2000 kg/s injected into the downcomer. Since the recriticality takes place in a small fraction of the core the power densities are high which results in large energy deposition in the fuel during power burst in some accident scenarios. The highest value, 418 cal/g, was obtained with SIMULATE-3K for an Oskarshamn 3 case with reflooding

  3. Analysis of the accuracy of certain methods used for measuring very low reactivities; Analyse de la precision de certaines methodes de mesure de tres basses reactivites

    Energy Technology Data Exchange (ETDEWEB)

    Valat, J; Stern, T E

    1964-07-01

    The rapid measurement of anti-reactivities, in particular very low ones (i.e. a few tens of {beta}), appears to be an interesting approach for the automatic start-up of a reactor and its optimisation. With this in view, the present report explores the various methods studied, essentially from the point of view of the time required to make the measurement with a given statistical accuracy, especially as far as very low reactivities are concerned. The statistical analysis is applied in turn to: the methods based on natural background noise (autocorrelation and spectral density); the sinusoidal excitation methods for the reactivity or the source, with synchronous detection; and the periodic source excitation method using pulsed neutrons. Finally, the statistical analysis leads to the suggestion of a new method of source excitation using random neutron square waves combined with an intercorrelation between the random excitation and the resulting output. (authors)

  4. Correlation Factors Describing Primary and Spatial Sensations of Sound Fields

    Science.gov (United States)

    ANDO, Y.

    2002-11-01

    The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at the two ear entrances, and the specialization of the human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation, which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF, associated with the left hemisphere, while spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF, associated with the right hemisphere. Any important subjective responses of sound fields may be described by both temporal and spatial factors.
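
    As a small illustration of how a temporal factor can be read off the ACF, the hedged Python sketch below computes the normalised autocorrelation of a short signal frame and takes the delay and amplitude of its first dominant peak as a pitch estimate; the frame, sampling rate and search band are illustrative assumptions, not values from the paper.

      import numpy as np

      def normalized_acf(frame):
          # Normalised autocorrelation function of a short signal frame
          frame = frame - frame.mean()
          acf = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
          return acf / acf[0]

      def pitch_from_acf(frame, fs, fmin=50.0, fmax=500.0):
          # Read the delay and amplitude of the first dominant ACF peak
          acf = normalized_acf(frame)
          lo, hi = int(fs / fmax), int(fs / fmin)
          lag = lo + np.argmax(acf[lo:hi])
          return fs / lag, acf[lag]

      # 200 Hz tone plus noise as a stand-in signal (hypothetical example)
      fs = 16000
      t = np.arange(0, 0.05, 1.0 / fs)
      frame = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
      f0, peak = pitch_from_acf(frame, fs)
      print(f"estimated pitch: {f0:.1f} Hz, first ACF peak amplitude: {peak:.2f}")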

  5. Effects of dating errors on nonparametric trend analyses of speleothem time series

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2012-10-01

    Full Text Available A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources that consist in measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ18O) from two caves in western Germany, the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser–Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling to preserve noise properties (shape, autocorrelation) of the δ18O residuals and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm–cold amplitudes of around 0.5‰ δ18O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
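
    A minimal sketch of the block-bootstrap idea behind the error bands (with a simple running-mean smoother standing in for the Gasser–Müller kernel regression, and without the timescale simulations) might look as follows in Python; the block length, smoother window and synthetic series are illustrative assumptions only.

      import numpy as np

      def moving_block_bootstrap(residuals, block_len, rng):
          # Resample residuals in contiguous blocks so their autocorrelation survives
          n = len(residuals)
          n_blocks = int(np.ceil(n / block_len))
          starts = rng.integers(0, n - block_len + 1, size=n_blocks)
          blocks = [residuals[s:s + block_len] for s in starts]
          return np.concatenate(blocks)[:n]

      def trend_error_band(t, y, fit_trend, block_len=20, n_boot=500, seed=0):
          # fit_trend(t, y) -> fitted trend on t; any smoother can be plugged in
          rng = np.random.default_rng(seed)
          trend = fit_trend(t, y)
          resid = y - trend
          boots = np.empty((n_boot, len(t)))
          for b in range(n_boot):
              boots[b] = fit_trend(t, trend + moving_block_bootstrap(resid, block_len, rng))
          lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
          return trend, lo, hi

      def running_mean(t, y, w=51):
          # Crude smoother standing in for the kernel regression
          return np.convolve(y, np.ones(w) / w, mode='same')

      # Synthetic proxy series (time in "ka", values in per-mil, both made up)
      t = np.linspace(0, 8.6, 861)
      y = 0.5 * np.sin(t) + np.random.default_rng(3).normal(0.0, 0.2, t.size)
      trend, lo, hi = trend_error_band(t, y, running_mean)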

  6. Quantitative analysis of the influence of a new ecological paradigm: spatial autocorrelation - DOI: 10.4025/actascibiolsci.v25i1.2113

    Directory of Open Access Journals (Sweden)

    Luis Mauricio Bini

    2003-04-01

    Full Text Available The aim of this paper was to evaluate the influence of spatial autocorrelation (absence of independence among observations gathered along geographical space) in ecological studies. For this task, an evaluation of the studies that used spatial autocorrelation analysis was carried out using the data furnished by the Institute for Scientific Information. There is a positive temporal tendency in the number of studies that used spatial autocorrelation analysis, and a significant autocorrelation was detected in most studies. Moreover, scientists of several nationalities carried out these studies in different countries, with different organisms and in different types of ecosystems. In this way, it is possible to consider that the explicit incorporation of the spatial structure of natural processes, through autocorrelation analysis, is a new ecological paradigm.

  7. Pawnee Nation Energy Option Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially-viable development, both in the near-term and over a longer time horizon. Findings and Recommendations Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym, and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  8. Geographical Environment Factors and Risk Assessment of Tick-Borne Encephalitis in Hulunbuir, Northeastern China.

    Science.gov (United States)

    Li, Yifan; Wang, Juanle; Gao, Mengxu; Fang, Liqun; Liu, Changhua; Lyu, Xin; Bai, Yongqing; Zhao, Qiang; Li, Hairong; Yu, Hongjie; Cao, Wuchun; Feng, Liqiang; Wang, Yanjun; Zhang, Bin

    2017-05-26

    Tick-borne encephalitis (TBE) is a natural focal disease transmitted by ticks. Its distribution and transmission are closely related to geographic and environmental factors. Identification of the environmental determinants of TBE is of great importance to understanding the general distribution of existing and potential TBE natural foci. Hulunbuir, one of the most severely endemic areas of the disease, was selected as the study area. Statistical analysis, global and local spatial autocorrelation analysis, and regression methods were applied to detect spatiotemporal characteristics, compare the impact of associated factors, and model the risk distribution while accounting for spatial heterogeneity. The statistical analysis of gridded geographic and environmental factors and TBE incidence shows that TBE cases occurred mainly during spring and summer and that there is a significant positive spatial autocorrelation between the distribution of TBE cases and environmental characteristics. The factors influence TBE risk in the following descending order: temperature, relative humidity, vegetation coverage, precipitation and topography. A triangular high-risk area was identified in the central part of Hulunbuir; the low-risk area lies in the two belts along the outside edge of the central triangle. The TBE risk distribution revealed that the impact of the geographic factors varies with the spatial heterogeneity.
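
    For readers unfamiliar with global spatial autocorrelation statistics, the following Python sketch computes Moran's I for values on a regular grid with rook-contiguity weights; the grid, weights and incidence values are hypothetical and are not taken from the Hulunbuir data.

      import numpy as np

      def morans_i(values, W):
          # Global Moran's I; W is a (row-standardised) spatial weight matrix
          z = np.asarray(values, dtype=float)
          z = z - z.mean()
          n, s0 = len(z), W.sum()
          return (n / s0) * (z @ W @ z) / (z @ z)

      def rook_weights(rows, cols):
          # Rook-contiguity weights for a regular grid, row-standardised
          n = rows * cols
          W = np.zeros((n, n))
          for r in range(rows):
              for c in range(cols):
                  i = r * cols + c
                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                      rr, cc = r + dr, c + dc
                      if 0 <= rr < rows and 0 <= cc < cols:
                          W[i, rr * cols + cc] = 1.0
          return W / W.sum(axis=1, keepdims=True)

      # Hypothetical 10x10 grid of incidence values with a smooth spatial gradient
      rng = np.random.default_rng(4)
      grid = np.add.outer(np.linspace(0, 1, 10), np.linspace(0, 1, 10)) + rng.normal(0, 0.2, (10, 10))
      I = morans_i(grid.ravel(), rook_weights(10, 10))
      print(f"Moran's I = {I:.2f}")  # well above E[I] = -1/(n-1) indicates positive autocorrelation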

  9. Pathway analyses implicate glial cells in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Laramie E Duncan

    Full Text Available The quest to understand the neurobiology of schizophrenia and bipolar disorder is ongoing with multiple lines of evidence indicating abnormalities of glia, mitochondria, and glutamate in both disorders. Despite high heritability estimates of 81% for schizophrenia and 75% for bipolar disorder, compelling links between findings from neurobiological studies, and findings from large-scale genetic analyses, are only beginning to emerge. Ten publicly available gene sets (pathways) related to glia, mitochondria, and glutamate were tested for association to schizophrenia and bipolar disorder using MAGENTA as the primary analysis method. To determine the robustness of associations, secondary analyses were performed with: ALIGATOR, INRICH, and Set Screen. Data from the Psychiatric Genomics Consortium (PGC) were used for all analyses. There were 1,068,286 SNP-level p-values for schizophrenia (9,394 cases/12,462 controls), and 2,088,878 SNP-level p-values for bipolar disorder (7,481 cases/9,250 controls). The Glia-Oligodendrocyte pathway was associated with schizophrenia, after correction for multiple tests, according to primary analysis (MAGENTA p = 0.0005, 75% requirement for individual gene significance) and also achieved nominal levels of significance with INRICH (p = 0.0057) and ALIGATOR (p = 0.022). For bipolar disorder, Set Screen yielded nominally and method-wide significant associations to all three glial pathways, with strongest association to the Glia-Astrocyte pathway (p = 0.002). Consistent with findings of white matter abnormalities in schizophrenia by other methods of study, the Glia-Oligodendrocyte pathway was associated with schizophrenia in our genomic study. These findings suggest that the abnormalities of myelination observed in schizophrenia are at least in part due to inherited factors, contrasted with the alternative of purely environmental causes (e.g. medication effects or lifestyle). While not the primary purpose of our study

  10. In service monitoring based on fatigue analyses, possibilities and limitations

    International Nuclear Information System (INIS)

    Dittmar, S.; Binder, F.

    2004-01-01

    German LWR reactors are equipped with monitoring systems which are to enable a comparison of real transients with load case catalogues and fatigue catalogues for fatigue analyses. The information accuracy depends on the accuracy of measurements, on the consideration of parameters influencing fatigue (medium, component surface, component size, etc.), and on the accuracy of the load analyses. The contribution attempts a critical evaluation, also in view of the fact that real fatigue damage is often impossible to quantify on the basis of fatigue analyses at a later stage. The effects of the consideration or non-consideration of various influencing factors are discussed, as well as the consequences of the scatter of material characteristics on which the analyses are based. Possible measures to be taken in operational monitoring are derived. (orig.) [de

  11. Improving word coverage using unsupervised morphological analyser

    Indian Academy of Sciences (India)

    To enable a computer to process information in human languages, ... vised morphological analyser (UMA) would learn how to analyse a language just by looking ... result for English, but they did remarkably worse for Finnish and Turkish.

  12. Techniques for Analysing Problems in Engineering Projects

    DEFF Research Database (Denmark)

    Thorsteinsson, Uffe

    1998-01-01

    Description of how CPM network can be used for analysing complex problems in engineering projects.

  13. Modelling typhoid risk in Dhaka metropolitan area of Bangladesh: the role of socio-economic and environmental factors.

    Science.gov (United States)

    Corner, Robert J; Dewan, Ashraf M; Hashizume, Masahiro

    2013-03-16

    Developing countries in South Asia, such as Bangladesh, bear a disproportionate burden of diarrhoeal diseases such as cholera, typhoid and paratyphoid. These seem to be aggravated by a number of social and environmental factors such as lack of access to safe drinking water, overcrowdedness and poor hygiene brought about by poverty. Some socioeconomic data can be obtained from census data whilst others are more difficult to elucidate. This study considers a range of both census data and spatial data from other sources, including remote sensing, as potential predictors of typhoid risk. Typhoid data are aggregated from hospital admission records for the period from 2005 to 2009. The spatial and statistical structures of the data are analysed and principal axis factoring is used to reduce the degree of co-linearity in the data. The resulting factors are combined into a quality of life index, which in turn is used in a regression model of typhoid occurrence and risk. The three principal factors used together explain 87% of the variance in the initial candidate predictors, which eminently qualifies them for use as a set of uncorrelated explanatory variables in a linear regression model. Initial regression result using ordinary least squares (OLS) were disappointing, this was explainable by analysis of the spatial autocorrelation inherent in the principal factors. The use of geographically weighted regression caused a considerable increase in the predictive power of regressions based on these factors. The best prediction, determined by analysis of the Akaike information criterion (AIC) was found when the three factors were combined into a quality of life index, using a method previously published by others, and had a coefficient of determination of 73%. The typhoid occurrence/risk prediction equation was used to develop the first risk map showing areas of Dhaka metropolitan area whose inhabitants are at greater or lesser risk of typhoid infection. This, coupled with
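
    A simplified stand-in for the workflow described above (the study itself used principal axis factoring and geographically weighted regression) is sketched below in Python: collinear predictors are reduced to a few uncorrelated factors, which are then fed into an ordinary least squares regression; all variable values are synthetic.

      import numpy as np

      def pca_factors(X, n_factors=3):
          # Standardise, then extract uncorrelated components as a stand-in for
          # principal axis factoring (which models shared variance only)
          Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
          vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
          order = np.argsort(vals)[::-1][:n_factors]
          scores = Z @ vecs[:, order]
          explained = vals[order].sum() / vals.sum()
          return scores, explained

      def ols(X, y):
          # Ordinary least squares with an intercept column
          A = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(A, y, rcond=None)
          yhat = A @ beta
          r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
          return beta, r2

      # Hypothetical collinear census-style predictors driven by three latent factors
      rng = np.random.default_rng(5)
      latent = rng.normal(size=(300, 3))
      X = latent @ rng.normal(size=(3, 12)) + 0.3 * rng.normal(size=(300, 12))
      y = latent @ np.array([0.8, -0.5, 0.3]) + rng.normal(0.0, 0.5, 300)
      factors, explained = pca_factors(X)
      beta, r2 = ols(factors, y)
      print(f"variance explained by factors: {explained:.0%}, OLS R^2: {r2:.2f}")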

  14. Graphite analyser upgrade for the IRIS spectrometer at ISIS

    International Nuclear Information System (INIS)

    Campbell, S.I.; Telling, M.T.F.; Carlile, C.J.

    1999-01-01

    Complete text of publication follows. The pyrolytic graphite (PG) analyser bank on the IRIS high resolution inelastic spectrometer [1] at ISIS is to be upgraded. At present the analyser consists of 1350 graphite pieces (6 rows by 225 columns) cooled to 25K [2]. The new analyser array, however, will provide a three-fold increase in area and employ 4212 crystal pieces (18 rows by 234 columns). In addition, the graphite crystals will be cooled close to liquid helium temperature to further reduce thermal diffuse scattering (TDS) and improve the sensitivity of the spectrometer [2]. For an instrument such as IRIS, with its analyser in near back-scattering geometry, optical aberration and variation in the time-of-flight of the analysed neutrons is introduced as one moves out from the horizontal scattering plane. To minimise such effects, the profile of the analyser array has been redesigned. The concept behind the design of the new analyser bank and the factors that affect the overall resolution of the instrument are discussed. Results of Monte Carlo simulations of the expected resolution and intensity of the complete instrument are presented and compared to the current instrument performance. (author) [1] C.J. Carlile et al, Physica B 182 (1992) 431-440.; [2] C.J. Carlile et al, Nuclear Instruments and Methods In Physics Research A 338 (1994) 78-82.

  15. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  16. A database structure for radiological optimization analyses of decommissioning operations

    International Nuclear Information System (INIS)

    Zeevaert, T.; Van de Walle, B.

    1995-09-01

    The structure of a database for decommissioning experiences is described. Radiological optimization is a major radiation protection principle in practices and interventions, involving radiological protection factors, economic costs and social factors. An important lack of knowledge with respect to these factors exists in the domain of the decommissioning of nuclear power plants, due to the low number of decommissioning operations already performed. Moreover, decommissioning takes place only once for an installation. Tasks, techniques, and procedures are in most cases rather specific, limiting the use of past experiences in the radiological optimization analyses of new decommissioning operations. Therefore, it is important that relevant data or information be acquired from decommissioning experiences. These data have to be stored in a database in a way that they can be used efficiently in ALARA analyses of future decommissioning activities.

  17. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    Science.gov (United States)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley
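
    A hedged sketch of the kind of random-forest ranking described above, using scikit-learn (assumed available) with entirely hypothetical predictor names and synthetic values, could look like this in Python:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      # Hypothetical predictors for valley segments (names are illustrative only)
      rng = np.random.default_rng(6)
      n = 5000
      features = {
          "drainage_network_slope_asym": rng.normal(size=n),
          "vegetation_cover_asym": rng.normal(size=n),
          "snow_persistence_asym": rng.normal(size=n),
          "mean_annual_insolation": rng.normal(size=n),
      }
      X = np.column_stack(list(features.values()))
      # Synthetic response: slope asymmetry driven mostly by the drainage term
      y = 0.9 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.3, n)

      rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
      rf.fit(X, y)
      for name, importance in sorted(zip(features, rf.feature_importances_), key=lambda p: -p[1]):
          print(f"{name:32s} {importance:.2f}")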

  18. Uncertainty Analyses for Back Projection Methods

    Science.gov (United States)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far few comprehensive error analyses for back projection methods have been conducted, although it is evident that high frequency seismic waves can be easily affected by earthquake depth, focal mechanisms and the Earth's 3D structures. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths and focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme of Direct Solution Method and Spectral Element Method. Then we back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10km, back projection gives a strong signal 8km away from the true source. Such bias increases with depth, e.g., the error of horizontal location could be larger than 20km for a depth of 40km. If the array is located around the nodal direction of direct P-waves, the teleseismic P-waves are dominated by the depth phases. Therefore, back projections are actually imaging the reflection points of depth phases more than the rupture front. Besides depth phases, the strong and long-lasting coda waves due to 3D effects near the trench can lead to additional complexities, which are also tested here. The strength contrast of different frequency contents in the rupture models also produces some variations in the back projection results. In the synthetic tests, MUSIC and CS yield consistent results. While MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer the earthquake rupture physics.

  19. The Self-Correlation Function of Real Gases

    Energy Technology Data Exchange (ETDEWEB)

    Sigmar, D. J. [Institute for Theoretical Physics, Technical University of Vienna, Vienna (Austria)]

    1965-06-15

    In the formal theory of inelastic scattering of neutrons, the self-correlation function has been worked out in terms of statistical averages of the derivatives of the N-body interaction potential of the scatterer. In the present paper, these averages are evaluated for real gases by means of a cluster expansion related to that of Mayer-Ursell. This leads to certain non-linear types of clusters, which are investigated with respect to the topology of the graphs, their multiplicity (by combinatorial analysis) and their quadrature. As one expects, in view of the many-body problem, some of the clusters are not separable and have to be machine-integrated. In this way, the self-correlation function {gamma}{sub s}(K, t) is calculated for short times, including also the first non-Gaussian term. The cluster expansion breaks off after the first interaction term, so that the results are valid for low density only. This still gives rise to very many different types of clusters, containing up to seven points, for each coefficient. The assumed potential is a general two-particle, hard-core type. As Singwi et al. have shown, the long time behaviour of {gamma}{sub s} is determined by the time integral of the velocity autocorrelation function. To construct the integrand for all times, we can make use of our cluster expansion for small t and adopt Langevin's diffusion theory for large t. Numerical computations are under way. (author)
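
    As a purely numerical companion to the analytical treatment above, the Python sketch below estimates a velocity autocorrelation function from a synthetic velocity trajectory and integrates it in Green-Kubo fashion; the Langevin-like dynamics and all constants are illustrative assumptions, not values from the paper.

      import numpy as np

      def velocity_autocorrelation(v, max_lag):
          # v: (n_steps, n_particles, 3) velocities; returns <v(0).v(t)> averaged
          # over particles and over time origins (unnormalised)
          n_steps = v.shape[0]
          vacf = np.zeros(max_lag)
          for lag in range(max_lag):
              vacf[lag] = np.sum(v[:n_steps - lag] * v[lag:], axis=-1).mean()
          return vacf

      # Hypothetical Langevin-like velocity trajectory (all constants made up)
      rng = np.random.default_rng(7)
      dt, gamma = 0.01, 2.0
      v = np.zeros((5000, 100, 3))
      for t in range(1, v.shape[0]):
          v[t] = v[t - 1] * (1.0 - gamma * dt) + np.sqrt(dt) * rng.normal(size=(100, 3))
      vacf = velocity_autocorrelation(v, max_lag=500)
      # Green-Kubo style estimate of the self-diffusion coefficient
      D = vacf.sum() * dt / 3.0
      print(f"<|v|^2> = {vacf[0]:.3f}, D = {D:.3f}")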

  20. Dynamical analyses of the time series for three foreign exchange rates

    Science.gov (United States)

    Kim, Sehyun; Kim, Soo Yong; Jung, Jae-Won; Kim, Kyungsik

    2012-05-01

    In this study, we investigate the multifractal properties of three foreign exchange rates (USD-KRW, USD-JPY, and EUR-USD) that are quoted on different economic scales. We estimate and analyze both the generalized Hurst exponent and the autocorrelation function for the three foreign exchange rates. The USD-KRW is shown to have the largest Hurst exponent of the three. In particular, the autocorrelation function of the USD-KRW shows the strongest memory behavior among the three rates, and it exhibits a more pronounced long-memory property in the first quarter than in the other quarters.
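    As a hedged sketch of the kind of estimators this record refers to (the abstract does not give the authors' implementation), the example below fits the scaling of the q-th order structure function of a return series to obtain the generalized Hurst exponent, and computes a sample autocorrelation function. The lag range and the value of q are illustrative assumptions.

```python
import numpy as np

def generalized_hurst(series, q=2, lags=range(2, 50)):
    """Estimate H(q) from K_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^(q*H(q))."""
    x = np.asarray(series, dtype=float)
    lags = np.array(list(lags))
    kq = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(kq), 1)   # slope = q * H(q)
    return slope / q

def autocorrelation(series, max_lag=100):
    """Sample autocorrelation function up to max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[:max_lag + 1] / acf[0]
```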

  1. Fracture analyses of WWER reactor pressure vessels

    International Nuclear Information System (INIS)

    Sievers, J.; Liu, X.

    1997-01-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab

  2. Fracture analyses of WWER reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, J; Liu, X [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany)

    1997-09-01

    In the paper first the methodology of fracture assessment based on finite element (FE) calculations is described and compared with simplified methods. The FE based methodology was verified by analyses of large scale thermal shock experiments in the framework of the international comparative study FALSIRE (Fracture Analyses of Large Scale Experiments) organized by GRS and ORNL. Furthermore, selected results from fracture analyses of different WWER type RPVs with postulated cracks under different loading transients are presented. 11 refs, 13 figs, 1 tab.

  3. YALINA Booster subcritical assembly modeling and analyses

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of British Nuclear Fuels Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Material impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the
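    The superposition step of the pulsed-source methodology described above can be illustrated with a minimal sketch (not the ANL C code): the detector response to a single source pulse is shifted by the pulse period and summed over repeated pulses, assuming linear superposition. The bin width, period and number of pulses are assumptions of the example.

```python
import numpy as np

def superimpose_pulses(single_pulse_response, period_bins, n_pulses):
    """Build the detector response to a train of identical source pulses by summing
    time-shifted copies of the single-pulse response (linear superposition assumed)."""
    r = np.asarray(single_pulse_response, dtype=float)
    out = np.zeros(len(r) + period_bins * (n_pulses - 1))
    for k in range(n_pulses):
        start = k * period_bins          # each pulse is delayed by one repetition period
        out[start:start + len(r)] += r
    return out
```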

  4. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review of: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive) [Contrastive analysis of word-formation patterns in German and Estonian (exemplified by adjectives of appearance)]. Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  5. Angular analyses in relativistic quantum mechanics; Analyses angulaires en mecanique quantique relativiste

    Energy Technology Data Exchange (ETDEWEB)

    Moussa, P [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires

    1968-06-01

    This work describes the angular analysis of reactions between particles with spin in a fully relativistic fashion. One-particle states are introduced, following Wigner's method, as representations of the inhomogeneous Lorentz group. In order to perform the angular analyses, the reduction of the product of two representations of the inhomogeneous Lorentz group is studied. Clebsch-Gordan coefficients are computed for the following couplings: l-s coupling, helicity coupling, multipolar coupling, and symmetric coupling for more than two particles. Massless and massive particles are handled simultaneously. Along the way we construct spinorial amplitudes and free fields, and we recall how convergence theorems for angular expansions can be established from analyticity hypotheses. Finally, we substitute these analyticity hypotheses for the notion of a 'potential radius', which at low energy yields the usual 'centrifugal barrier' factors; the presence of such factors had not previously been deduced from hypotheses compatible with relativistic invariance. (author)
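    For context on the 'centrifugal barrier' factors mentioned above (a standard textbook result, not taken from this paper): for a short-range interaction of range R, the partial-wave phase shift of angular momentum $\ell$ behaves near threshold as

    $$\tan\delta_\ell(k) \;\propto\; (kR)^{2\ell+1}, \qquad k \to 0,$$

    so higher partial waves are suppressed at low energy; the paper derives such suppression factors from analyticity rather than from an assumed potential radius.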

  6. Transient Seepage for Levee Engineering Analyses

    Science.gov (United States)

    Tracy, F. T.

    2017-12-01

    Historically, steady-state seepage analyses have been a key tool for designing levees by practicing engineers. However, with the advances in computer modeling, transient seepage analysis has become a potentially viable tool. A complication is that the levees usually have partially saturated flow, and this is significantly more complicated in transient flow. This poster illustrates four elements of our research in partially saturated flow relating to the use of transient seepage for levee design: (1) a comparison of results from SEEP2D, SEEP/W, and SLIDE for a generic levee cross section common to the southeastern United States; (2) the results of a sensitivity study of varying saturated hydraulic conductivity, the volumetric water content function (as represented by van Genuchten), and volumetric compressibility; (3) a comparison of when soils do and do not exhibit hysteresis, and (4) a description of proper and improper use of transient seepage in levee design. The variables considered for the sensitivity and hysteresis studies are pore pressure beneath the confining layer at the toe, the flow rate through the levee system, and a levee saturation coefficient varying between 0 and 1. Getting results for SEEP2D, SEEP/W, and SLIDE to match proved more difficult than expected. After some effort, the results matched reasonably well. Differences in results were caused by various factors, including bugs, different finite element meshes, different numerical formulations of the system of nonlinear equations to be solved, and differences in convergence criteria. Varying volumetric compressibility affected the above test variables the most. The levee saturation coefficient was most affected by the use of hysteresis. The improper use of pore pressures from a transient finite element seepage solution imported into a slope stability computation was found to be the most grievous mistake in using transient seepage in the design of levees.
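    As a hedged illustration of the van Genuchten volumetric water content function varied in the sensitivity study described above (the parameter values shown are placeholders, not those used in the study):

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of pressure head h (negative in the
    unsaturated zone), using the van Genuchten (1980) retention function."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    # Effective saturation: 1 where the soil is saturated (h >= 0).
    se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * se

# Example with placeholder parameters for a silty soil.
heads = np.array([0.0, -0.5, -1.0, -5.0])          # pressure head in metres
print(van_genuchten_theta(heads, theta_r=0.07, theta_s=0.43, alpha=2.0, n=1.6))
```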

  7. The geography of post-disaster mental health: spatial patterning of psychological vulnerability and resilience factors in New York City after Hurricane Sandy.

    Science.gov (United States)

    Gruebner, Oliver; Lowe, Sarah R; Sampson, Laura; Galea, Sandro

    2015-06-10

    Only a few studies have investigated the geographic distribution of psychological resilience and associated mental health outcomes after natural or man-made disasters. Such information is crucial for location-based interventions that aim to promote recovery in the aftermath of disasters. The purpose of this study therefore was to investigate geographic variability of (1) posttraumatic stress (PTS) and depression in a Hurricane Sandy-affected population in NYC and (2) psychological vulnerability and resilience factors among affected areas in NYC boroughs. Cross-sectional telephone survey data were collected 13 to 16 months post-disaster from household residents (N = 418 adults) in NYC communities that were most heavily affected by the hurricane. The Posttraumatic Stress Checklist for DSM-5 (PCL-5) was used to measure posttraumatic stress and the nine-item Patient Health Questionnaire (PHQ-9) was used to measure depression. We applied spatial autocorrelation and spatial regimes regression analyses to test for spatial clusters of mental health outcomes and to explore whether associations between vulnerability and resilience factors and mental health differed among New York City's five boroughs. Mental health problems clustered predominantly in neighborhoods that are geographically more exposed towards the ocean, indicating a spatial variation of risk within and across the boroughs. We further found significant variation in associations between vulnerability and resilience factors and mental health. Race/ethnicity (being Asian or non-Hispanic black) and disaster-related stressors were vulnerability factors for mental health symptoms in Queens, and being employed and married were resilience factors for these symptoms in Manhattan and Staten Island. In addition, parental status was a vulnerability factor in Brooklyn and a resilience factor in the Bronx. We conclude that explanatory characteristics may manifest as psychological vulnerability and resilience
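    A minimal sketch of the global spatial autocorrelation statistic (Moran's I) that such cluster tests typically build on; the weights matrix and input scores here are illustrative assumptions, and the abstract does not specify the exact implementation the authors used.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for neighborhood-level scores (e.g. PCL-5 totals) and a spatial
    weights matrix `weights` (weights[i, j] > 0 if areas i and j are neighbors; zero diagonal)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    num = np.sum(w * np.outer(z, z))          # cross-products weighted by adjacency
    return (len(x) / w.sum()) * num / np.sum(z ** 2)
```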

  8. An MDE Approach for Modular Program Analyses

    NARCIS (Netherlands)

    Yildiz, Bugra Mehmet; Bockisch, Christoph; Aksit, Mehmet; Rensink, Arend

    Program analyses are an important tool to check if a system fulfills its specification. A typical implementation strategy for program analyses is to use an imperative, general-purpose language like Java, and access the program to be analyzed through libraries that offer an API for reading, writing

  9. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  10. Diversity of primary care systems analysed.

    NARCIS (Netherlands)

    Kringos, D.; Boerma, W.; Bourgueil, Y.; Cartier, T.; Dedeu, T.; Hasvold, T.; Hutchinson, A.; Lember, M.; Oleszczyk, M.; Pavlick, D.R.

    2015-01-01

    This chapter analyses differences between countries and explains why countries differ regarding the structure and process of primary care. The components of primary care strength that are used in the analyses are health policy-making, workforce development and in the care process itself (see Fig.

  11. Approximate analyses of inelastic effects in pipework

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    This presentation shows figures concerned with analyses of inelastic effects in pipework, as follows: comparison of experimental and calculated simplified-analysis results for free end rotation and for circumferential strain; interrupted stress relaxation; regenerated relaxation caused by reversed yield; buckling of straight pipe under combined bending and torsion; and results of fatigue tests of pipe bends

  12. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

    In the testing of pre-sale procedures, the marketing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a method complementary to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing its polarity during chromatography and of all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is the next factor that enables good separation. The separation line is connected to specific and sensitive detector systems (spectrofluorimeter, diode detector, electrochemical detector), and, together with hyphenated systems such as HPLC-MS and HPLC-NMR, these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug, to provide quantitative results, and to monitor the progress of therapy of a disease [1]. The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations preceding drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or

  13. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    In the testing of pre-sale procedures, the marketing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a method complementary to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing its polarity during chromatography and of all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is the next factor that enables good separation. The separation line is connected to specific and sensitive detector systems (spectrofluorimeter, diode detector, electrochemical detector), and, together with hyphenated systems such as HPLC-MS and HPLC-NMR, these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug, to provide quantitative results, and to monitor the progress of therapy of a disease [1]. The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations preceding drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous

  14. [Application of big data analyses for musculoskeletal cell differentiation].

    Science.gov (United States)

    Imai, Yuuki

    2016-04-01

    Next generation sequencers have strongly advanced big data analyses in the life sciences. Among the various kinds of sequencing data sets, epigenetic platforms have become an important key to clarifying questions about both broad and detailed phenomena in various forms of life. In this report, research on the identification of novel transcription factors in osteoclastogenesis using DNase-seq is introduced. Big data in musculoskeletal research will be organized by the IFMRS and is becoming increasingly crucial.

  15. Exploration of underground basement structures in Kanto plain using the spatial autocorrelation method. 1. S-wave velocity structure along the line from Hatoyama, Saitama to Noda, Chiba; Kukan jiko sokanho ni yoru Kanto heiya no kiban kozo tansa. 1. Saitamaken Hatoyama machi - Chibaken Nodashi kan no S ha sokudo kozo

    Energy Technology Data Exchange (ETDEWEB)

    Matsuoka, T; Umezawa, N; Shiraishi, H [Saitama Institute of Environmental Pollution, Saitama (Japan)

    1997-05-27

    The Saitama prefectural government has been conducting basement structure exploration using the spatial autocorrelation method, dividing the entire plain area into meshes, for the purpose of improving the accuracy of estimating large-scale seismic damage. This paper reports the results of explorations on meshes in the east-west direction in the central part of Saitama Prefecture. The present exploration covered ten meshes in the east-west direction along the 36-degree north latitude line. There are 13 exploration points, comprising three points on the hilly area bordering the eastern edge of the Kanto mountains and ten points on the plain. The arrangement constitutes a traverse line with a total length of about 33 km from the west end (Hatoyama-machi in Saitama Prefecture) to the east end (Noda City in Chiba Prefecture). Phase velocities were estimated from the array microtremor observations using the spatial autocorrelation method together with the FFT, and were then used to estimate underground structures by inverse analysis. As a result, detailed two-dimensional S-wave velocity structures were revealed along the traverse line. The velocity cross section expresses changes in the basement structure with sufficient resolution, and the information is judged to be highly consistent with existing deep boring data and the results of artificial earthquake exploration. 15 refs., 6 figs.
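    As a hedged sketch of the spatial autocorrelation (SPAC) inversion step described above (not the authors' code), the example below inverts the azimuthally averaged SPAC coefficient for phase velocity at one frequency through the standard relation rho(f, r) = J0(2*pi*f*r/c(f)). The array radius and the velocity search range are assumptions of the example.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

def phase_velocity_from_spac(freq, rho, radius, c_min=100.0, c_max=3000.0):
    """Invert rho = J0(2*pi*f*r/c) for phase velocity c at one frequency, taking the
    first (lowest-velocity) root found inside [c_min, c_max]; multiple roots can exist."""
    g = lambda c: j0(2.0 * np.pi * freq * radius / c) - rho
    cs = np.linspace(c_min, c_max, 2000)
    vals = g(cs)
    sign_changes = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(sign_changes) == 0:
        return np.nan                       # no root inside the search range
    i = sign_changes[0]
    return brentq(g, cs[i], cs[i + 1])      # refine the bracketed root
```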

  16. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study showed that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
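    A minimal sketch of the Boolean line-of-sight test that viewshed analyses repeat for every target cell of the surface model (the paper's extended viewsheds build on the same angles); the sampling scheme, observer height and neglect of earth curvature are assumptions of the example.

```python
import numpy as np

def line_of_sight(dem, cell_size, observer, target, observer_height=1.7):
    """Boolean visibility between two cells of a raster DEM: the target is visible if no
    intermediate sample along the ray subtends a larger vertical angle than the target."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0, c0] + observer_height
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    if n == 0:
        return True
    rows = np.linspace(r0, r1, n + 1)
    cols = np.linspace(c0, c1, n + 1)
    dists = np.hypot(rows - r0, cols - c0) * cell_size
    elevs = dem[np.round(rows).astype(int), np.round(cols).astype(int)]
    angles = (elevs - z0) / np.where(dists > 0, dists, np.inf)   # vertical angle to each sample
    return angles[-1] >= np.max(angles[1:-1]) if n > 1 else True
```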

  17. HLA region excluded by linkage analyses of early onset periodontitis

    Energy Technology Data Exchange (ETDEWEB)

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

    Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD < -2.0) hypotheses of disease-locus heterogeneity, including models where up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to more conclusively address these issues with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.
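    For reference (this is the standard definition, not stated in the abstract), the LOD score behind the exclusion criterion above compares the likelihood of linkage at recombination fraction $\theta$ with free recombination:

    $$\mathrm{LOD}(\theta) = \log_{10}\frac{L(\theta)}{L(\theta = 0.5)},$$

    with exclusion conventionally declared for regions where $\mathrm{LOD}(\theta) < -2$, as used in this study.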

  18. Integrated Field Analyses of Thermal Springs

    Science.gov (United States)

    Shervais, K.; Young, B.; Ponce-Zepeda, M. M.; Rosove, S.

    2011-12-01

    A group of undergraduate researchers, through the SURE internship offered by the Southern California Earthquake Center (SCEC), have examined thermal springs in southern Idaho and northern Utah as well as mud volcanoes in the Salton Sea, California. We used an integrated approach to estimate the setting and maximum temperature, including water chemistry, iPad-based image and database management, microbiology, and gas analyses with a modified Giggenbach sampler. All springs were characterized using GISRoam (tmCogent3D). We are performing geothermometry calculations as well as comparisons with temperature gradient data while also analyzing biological samples. Analyses include water temperature, pH, electrical conductivity, and TDS measured in the field. Each sample is sealed, chilled and delivered to a water lab within 12 hours. Temperatures are continuously monitored with the use of Solinst Levelogger Juniors. Through partnership with a local community college geology club, we receive results on a monthly basis and are able to process initial data earlier in order to evaluate data over a longer time span. The springs and mudpots contained microbial organisms, which were analyzed using methods of single colony isolation, polymerase chain reaction, and DNA sequencing, showing the impact of the organisms on the springs or vice versa. Soon we will collect gas samples at sites that show signs of gas. These will be taken using a hybrid of the Giggenbach method and our own methods. Drawing gas samples has proven a challenge; however, we devised a method to draw out gas samples utilizing the Giggenbach flask, transferring samples to glass blood sample tubes, replacing NaOH in the Giggenbach flask, and evacuating it in the field for multiple samples using a vacuum pump. We also use a floating platform devised to carry and lower a levelogger, and an in-line fuel filter from a tractor to keep mud from contaminating the equipment. The use of raster
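    As a hedged illustration of one common geothermometry calculation of the kind mentioned above (the abstract does not specify which geothermometers were applied), the sketch below uses the Fournier (1977) quartz geothermometer, which assumes no steam loss and dissolved silica reported in mg/kg.

```python
from math import log10

def quartz_geothermometer_celsius(silica_mg_per_kg):
    """Estimated reservoir temperature (degC) from dissolved silica, assuming quartz
    equilibrium with no steam loss; roughly valid for 0-250 degC reservoir temperatures."""
    return 1309.0 / (5.19 - log10(silica_mg_per_kg)) - 273.15

# Example with a placeholder silica concentration.
print(round(quartz_geothermometer_celsius(150.0), 1))
```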

  19. Level II Ergonomic Analyses, Dover AFB, DE

    Science.gov (United States)

    1999-02-01

    IERA-RS-BR-TR-1999-0002, United States Air Force IERA. Level II Ergonomic Analyses, Dover AFB, DE. Andrew Marcotte, Marilyn Joyce, The Joyce... Contents: 1.0 Introduction; 1.1 Purpose of the Level II Ergonomic Analyses; 1.2 Approach; 1.2.1 Initial Shop Selection and Administration of the

  20. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to the automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically