WorldWideScience

Sample records for anti-correlated signals quantified

  1. Early anti-correlated BOLD signal changes of physiologic origin.

    Science.gov (United States)

    Bright, Molly G; Bianciardi, Marta; de Zwart, Jacco A; Murphy, Kevin; Duyn, Jeff H

    2014-02-15

    Negative BOLD signals that are synchronous with resting state fluctuations have been observed in large vessels in the cortical sulci and surrounding the ventricles. In this study, we investigated the origin of these negative BOLD signals by applying a Cued Deep Breathing (CDB) task to create transient hypocapnia and a resultant global fMRI signal decrease. We hypothesized that a global stimulus would amplify the effect in large vessels and that using a global negative (vasoconstrictive) stimulus would test whether these voxels exhibit either inherently negative or simply anti-correlated BOLD responses. Significantly anti-correlated, but positive, BOLD signal changes during respiratory challenges were identified in voxels primarily located near edges of brain spaces containing CSF. These positive BOLD responses occurred earlier than the negative CDB response across most gray matter voxels. These findings confirm earlier suggestions that in some brain regions, local, fractional changes in CSF volume may overwhelm BOLD-related signal changes, leading to signal anti-correlation. We show that regions with CDB anti-correlated signals coincide with most, but not all, of the regions with negative BOLD signal changes observed during a visual and motor stimulus task. Thus, the addition of a physiological challenge to fMRI experiments can help identify which negative BOLD signals are passive physiological anti-correlations and which may have a putative neuronal origin. Published by Elsevier Inc.

  2. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals

    OpenAIRE

    Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.

    2011-01-01

    We investigate how various coarse-graining methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find, that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and th...

  3. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals.

    Science.gov (United States)

    Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch

    2011-11-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, whereas for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
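
    As a rough illustration of the procedure described above, the sketch below (not the authors' code) applies Floor coarse-graining, x → Δ·floor(x/Δ), to a test signal and estimates the DFA scaling exponent before and after quantization. The window sizes, the partition width delta, and the use of white noise as the test signal are illustrative assumptions.

    import numpy as np

    def floor_coarse_grain(x, delta):
        """Quantize signal amplitudes onto a grid of width delta (Floor method)."""
        return delta * np.floor(x / delta)

    def dfa(x, scales):
        """Return the DFA-1 fluctuation function F(n) for each window size n in `scales`."""
        y = np.cumsum(x - np.mean(x))            # integrated profile
        F = []
        for n in scales:
            n_windows = len(y) // n
            ms = []
            for w in range(n_windows):
                seg = y[w * n:(w + 1) * n]
                t = np.arange(n)
                coef = np.polyfit(t, seg, 1)     # linear detrending in each window
                ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(ms)))
        return np.array(F)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2 ** 14)             # white noise as a stand-in test signal
    scales = np.unique(np.logspace(2, 3.5, 12).astype(int))
    F_raw = dfa(x, scales)
    F_cg = dfa(floor_coarse_grain(x, delta=1.0), scales)
    alpha_raw = np.polyfit(np.log(scales), np.log(F_raw), 1)[0]
    alpha_cg = np.polyfit(np.log(scales), np.log(F_cg), 1)[0]
    print(f"alpha (raw) = {alpha_raw:.2f}, alpha (coarse-grained) = {alpha_cg:.2f}")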

  4. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced?

    Science.gov (United States)

    Murphy, Kevin; Birn, Rasmus M; Handwerker, Daniel A; Jones, Tyler B; Bandettini, Peter A

    2009-02-01

    Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combined breath-holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step.
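
    The paper's central claim, that global signal regression (GSR) introduces spurious negative seed correlations, can be illustrated with a toy simulation (an assumption-laden sketch, not the authors' analysis): a set of "voxel" time courses sharing a common fluctuation is generated, the global average signal is regressed out of each, and seed correlations that were uniformly positive before the regression become partly negative afterwards.

    import numpy as np

    rng = np.random.default_rng(1)
    T, V = 300, 200                                  # time points, voxels
    shared = rng.standard_normal(T)                  # common fluctuation
    data = 0.7 * shared[:, None] + rng.standard_normal((T, V))

    def regress_out(ts, regressor):
        """Remove the best linear fit of `regressor` from each column of `ts`."""
        r = regressor - regressor.mean()
        beta = ts.T @ r / (r @ r)
        return ts - np.outer(r, beta)

    gas = data.mean(axis=1)                          # global average signal
    cleaned = regress_out(data, gas)

    seed = 0
    corr_before = np.corrcoef(data.T)[seed]
    corr_after = np.corrcoef(cleaned.T)[seed]
    print("negative seed correlations before GSR:", int((corr_before < 0).sum()))
    print("negative seed correlations after  GSR:", int((corr_after < 0).sum()))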

  5. Anti-correlated networks, global signal regression, and the effects of caffeine in resting-state functional MRI.

    Science.gov (United States)

    Wong, Chi Wah; Olafsson, Valur; Tal, Omer; Liu, Thomas T

    2012-10-15

    Resting-state functional connectivity magnetic resonance imaging is proving to be an essential tool for the characterization of functional networks in the brain. Two of the major networks that have been identified are the default mode network (DMN) and the task positive network (TPN). Although prior work indicates that these two networks are anti-correlated, the findings are controversial because the anti-correlations are often found only after the application of a pre-processing step, known as global signal regression, that can produce artifactual anti-correlations. In this paper, we show that, for subjects studied in an eyes-closed rest state, caffeine can significantly enhance the detection of anti-correlations between the DMN and TPN without the need for global signal regression. In line with these findings, we find that caffeine also leads to widespread decreases in connectivity and global signal amplitude. Using a recently introduced geometric model of global signal effects, we demonstrate that these decreases are consistent with the removal of an additive global signal confound. In contrast to the effects observed in the eyes-closed rest state, caffeine did not lead to significant changes in global functional connectivity in the eyes-open rest state. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Continuous detection of weak sensory signals in afferent spike trains: the role of anti-correlated interspike intervals in detection performance.

    Science.gov (United States)

    Goense, J B M; Ratnam, R

    2003-10-01

    An important problem in sensory processing is deciding whether fluctuating neural activity encodes a stimulus or is due to variability in baseline activity. Neurons that subserve detection must examine incoming spike trains continuously, and quickly and reliably differentiate signals from baseline activity. Here we demonstrate that a neural integrator can perform continuous signal detection, with performance exceeding that of trial-based procedures, where spike counts in signal and baseline windows are compared. The procedure was applied to data from electrosensory afferents of weakly electric fish (Apteronotus leptorhynchus), where weak perturbations generated by small prey add approximately 1 spike to a baseline of approximately 300 spikes s⁻¹. The hypothetical postsynaptic neuron, modeling an electrosensory lateral line lobe cell, could detect an added spike within 10-15 ms, achieving near ideal detection performance (80-95%) at false alarm rates of 1-2 Hz, while trial-based testing resulted in only 30-35% correct detections at that false alarm rate. The performance improvement was due to anti-correlations in the afferent spike train, which reduced both the amplitude and duration of fluctuations in postsynaptic membrane activity, and so decreased the number of false alarms. Anti-correlations can be exploited to improve detection performance only if there is memory of prior decisions.

  7. Mimicking anti-correlations with classical interference

    International Nuclear Information System (INIS)

    Godoy, S; Seifert, B; Wallentowitz, S

    2013-01-01

    It is shown how classical laser light impinging on a beam splitter with internal reflections may mimic anti-correlations of the detected outputs, similar to those observed for anti-bunched light. The experimentally observed anti-correlation may be interpreted as a classical Hong–Ou–Mandel dip. (paper)

  8. Random Walks with Anti-Correlated Steps

    OpenAIRE

    Wagner, Dirk; Noga, John

    2005-01-01

    We conjecture the expected value of random walks with anti-correlated steps to be exactly 1. We support this conjecture with 2 plausibility arguments and experimental data. The experimental analysis includes the computation of the expected values of random walks for steps up to 22. The result shows the expected value asymptotically converging to 1.

  9. Anti-correlated cortical networks of intrinsic connectivity in the rat brain.

    Science.gov (United States)

    Schwarz, Adam J; Gass, Natalia; Sartorius, Alexander; Risterucci, Celine; Spedding, Michael; Schenker, Esther; Meyer-Lindenberg, Andreas; Weber-Fahr, Wolfgang

    2013-01-01

    In humans, resting-state blood oxygen level-dependent (BOLD) signals in the default mode network (DMN) are temporally anti-correlated with those from a lateral cortical network involving the frontal eye fields, secondary somatosensory and posterior insular cortices. Here, we demonstrate the existence of an analogous lateral cortical network in the rat brain, extending laterally from anterior secondary sensorimotor regions to the insular cortex and exhibiting low-frequency BOLD fluctuations that are temporally anti-correlated with a midline "DMN-like" network comprising posterior/anterior cingulate and prefrontal cortices. The primary nexus for this anti-correlation relationship was the anterior secondary motor cortex, close to regions that have been identified with frontal eye fields in the rat brain. The anti-correlation relationship was corroborated after global signal removal, underscoring this finding as a robust property of the functional connectivity signature in the rat brain. These anti-correlated networks demonstrate strong anatomical homology to networks identified in human and monkey connectivity studies, extend the known preserved functional connectivity relationships between rodent and primates, and support the use of resting-state functional magnetic resonance imaging as a translational imaging method between rat models and humans.

  10. Quantifying signal changes in nano-wire based biosensors

    DEFF Research Database (Denmark)

    De Vico, Luca; Sørensen, Martin Hedegård; Iversen, Lars

    2011-01-01

    In this work, we present a computational methodology for predicting the change in signal (conductance sensitivity) of a nano-BioFET sensor (a sensor based on a biomolecule binding another biomolecule attached to a nano-wire field effect transistor) upon binding its target molecule. The methodolog...

  11. Stress Redistribution Explains Anti-correlated Subglacial Pressure Variations

    Directory of Open Access Journals (Sweden)

    Pierre-Marie Lefeuvre

    2018-01-01

    We used a finite element model to interpret anti-correlated pressure variations at the base of a glacier to demonstrate the importance of stress redistribution in the basal ice. We first investigated two pairs of load cells installed 20 m apart at the base of the 210 m thick Engabreen glacier in Northern Norway. The load cell data for July 2003 showed that pressurisation of a subglacial channel located over one load cell pair led to anti-correlation in pressure between the two pairs. To investigate the cause of this anti-correlation, we used a full Stokes 3D model of a 210 m thick and 25–200 m wide glacier with a pressurised subglacial channel represented as a pressure boundary condition. The model reproduced the anti-correlated pressure response at the glacier bed and variations in pressure of the same order of magnitude as the load cell observations. The anti-correlation pattern was shown to depend on the bed/surface slope. On a flat bed with laterally constrained cross-section, the resulting bridging effect diverted some of the normal forces acting on the bed to the sides. The anti-correlated pressure variations were then reproduced at a distance >10–20 m from the channel. In contrast, when the bed was inclined, the channel support of the overlying ice was vertical only, causing a reduction of the normal stress on the bed. With a bed slope of 5 degrees, the anti-correlation occurred within 10 m of the channel. The model thus showed that the effect of stress redistribution can lead to an opposite response in pressure at the same distance from the channel and that anti-correlation in pressure is reproduced without invoking cavity expansion caused by sliding.

  12. Anti-correlations in the degree distribution increase stimulus detection performance in noisy spiking neural networks.

    Science.gov (United States)

    Martens, Marijn B; Houweling, Arthur R; E Tiesinga, Paul H

    2017-02-01

    Neuronal circuits in the rodent barrel cortex are characterized by stable low firing rates. However, recent experiments show that short spike trains elicited by electrical stimulation in single neurons can induce behavioral responses. Hence, the underlying neural networks provide stability against internal fluctuations in the firing rate, while simultaneously making the circuits sensitive to small external perturbations. Here we studied whether stability and sensitivity are affected by the connectivity structure in recurrently connected spiking networks. We found that anti-correlation between the number of afferent (in-degree) and efferent (out-degree) synaptic connections of neurons increases stability against pathological bursting, relative to networks where the degrees were either positively correlated or uncorrelated. In the stable network state, stimulation of a few cells could lead to a detectable change in the firing rate. To quantify the ability of networks to detect the stimulation, we used a receiver operating characteristic (ROC) analysis. For a given level of background noise, networks with anti-correlated degrees displayed the lowest false positive rates, and consequently had the highest stimulus detection performance. We propose that anti-correlation in the degree distribution may be a computational strategy employed by sensory cortices to increase the detectability of external stimuli. We show that networks with anti-correlated degrees can in principle be formed by applying learning rules that combine spike-timing-dependent plasticity, homeostatic plasticity, and pruning to networks with uncorrelated degrees. To test our prediction we suggest a novel experimental method to estimate correlations in the degree distribution.
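
    A minimal sketch of the ROC-analysis step mentioned above, using two hypothetical firing-rate distributions in place of the simulated network output; the distribution parameters and threshold sweep are placeholders.

    import numpy as np

    rng = np.random.default_rng(2)
    baseline = rng.normal(5.0, 1.0, 5000)     # population rate without stimulation (Hz)
    stimulated = rng.normal(6.0, 1.0, 5000)   # population rate with a few cells stimulated

    # Sweep a rate threshold to trace out the ROC curve.
    thresholds = np.linspace(0, 12, 200)
    tpr = [(stimulated > th).mean() for th in thresholds]
    fpr = [(baseline > th).mean() for th in thresholds]

    # AUC equals the probability that a stimulated-trial rate exceeds a baseline-trial rate.
    auc = (stimulated[:, None] > baseline[None, :]).mean()
    print(f"ROC AUC = {auc:.3f}")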

  13. Diametrical clustering for identifying anti-correlated gene clusters.

    Science.gov (United States)

    Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman

    2003-09-01

    Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar. Biologically, this is not unintuitive: genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i) re-partitioning the genes and (ii) computing the dominant singular vector of each gene cluster, with each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteasome, along with evidence for the inverse transcriptional regulation of a number of cellular systems.
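
    A compact sketch of a diametrical-clustering loop in the spirit of steps (i) and (ii) above: genes are assigned to the prototype with which they have the largest squared projection (so correlated and anti-correlated genes fall in the same cluster), and each prototype is recomputed as the dominant singular vector of its cluster. Normalization, initialization, and the synthetic data are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def diametrical_clustering(X, k, n_iter=50, seed=0):
        """X: genes x conditions matrix; rows are normalized to unit length."""
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        prototypes = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # (i) re-partition: each gene joins the cluster whose prototype it is most
            # correlated OR anti-correlated with (largest squared projection).
            labels = np.argmax((X @ prototypes.T) ** 2, axis=1)
            # (ii) recompute each prototype as the dominant singular vector of its cluster.
            for c in range(k):
                members = X[labels == c]
                if len(members):
                    _, _, vt = np.linalg.svd(members, full_matrices=False)
                    prototypes[c] = vt[0]
        return labels, prototypes

    rng = np.random.default_rng(3)
    base = rng.standard_normal((2, 20))                  # two underlying expression programs
    genes = np.vstack([s * base[i] + 0.3 * rng.standard_normal(20)
                       for i in range(2) for s in (+1, -1) for _ in range(25)])
    labels, _ = diametrical_clustering(genes, k=2)
    print(np.bincount(labels))   # each cluster should mix correlated and anti-correlated genes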

  14. Anti-correlation and subsector structure in financial systems

    Science.gov (United States)

    Jiang, X. F.; Zheng, B.

    2012-02-01

    With the random matrix theory, we study the spatial structure of the Chinese stock market, the American stock market and global market indices. After taking into account the signs of the components in the eigenvectors of the cross-correlation matrix, we detect the subsector structure of the financial systems. The positive and negative subsectors are anti-correlated with respect to each other in the corresponding eigenmode. The subsector structure is strong in the Chinese stock market, while somewhat weaker in the American stock market and global market indices. Characteristics of the subsector structures in different markets are revealed.

  15. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
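
    The IGAFC index defined above (the correlation of the GAS with each voxel's time course times the correlation of the GAS with the seed time course) can be written down directly; the sketch below uses random data, a made-up seed, and an arbitrary threshold purely for illustration.

    import numpy as np

    def igafc(data, seed_ts):
        """data: time x voxels array; seed_ts: time series of the seed region."""
        gas = data.mean(axis=1)                       # global average signal
        def corr(a, b):
            a = a - a.mean(); b = b - b.mean()
            return a @ b / np.sqrt((a @ a) * (b @ b))
        corr_gas_seed = corr(gas, seed_ts)
        corr_gas_voxel = np.array([corr(gas, data[:, v]) for v in range(data.shape[1])])
        return corr_gas_voxel * corr_gas_seed         # voxel-wise impact index

    rng = np.random.default_rng(4)
    data = rng.standard_normal((200, 500))
    seed_ts = data[:, :10].mean(axis=1)               # hypothetical seed: average of 10 voxels
    index = igafc(data, seed_ts)
    print("voxels above an illustrative impact threshold of 0.1:", int((index > 0.1).sum()))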

  16. Quantifying the non-Gaussianity in the EoR 21-cm signal through bispectrum

    Science.gov (United States)

    Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh; Watkinson, Catherine A.; Bharadwaj, Somnath; Mellema, Garrelt

    2018-05-01

    The epoch of reionization (EoR) 21-cm signal is expected to be highly non-Gaussian in nature and this non-Gaussianity is also expected to evolve with the progressing state of reionization. Therefore the signal will be correlated between different Fourier modes (k). The power spectrum will not be able to capture this correlation in the signal. We use a higher order estimator - the bispectrum - to quantify this evolving non-Gaussianity. We study the bispectrum using an ensemble of simulated 21-cm signals and with a large variety of k triangles. We observe two competing sources driving the non-Gaussianity in the signal: fluctuations in the neutral fraction (x_HI) field and fluctuations in the matter density field. We find that the non-Gaussian contribution from these two sources varies, depending on the stage of reionization and on which k modes are being studied. We show that the sign of the bispectrum works as a unique marker to identify which among these two components is driving the non-Gaussianity. We propose that the sign change in the bispectrum, when plotted as a function of triangle configuration cos θ at a certain stage of the EoR, can be used as a confirmative test for the detection of the 21-cm signal. We also propose a new consolidated way to visualize the signal evolution (with the evolving mean neutral fraction or redshift), through the trajectories of the signal in power spectrum versus equilateral bispectrum space, i.e. the P(k)-B(k, k, k) plane.
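
    For readers unfamiliar with bispectrum estimation, the sketch below implements a simplified FFT-based equilateral bispectrum estimator for a gridded field: filter the field in a thin k-shell, average the cube of the filtered field in real space, and normalize by the triangle count obtained the same way from a unit field. Grid size, shell width, and overall volume normalization factors are simplifying assumptions; this is not the estimator code used in the paper.

    import numpy as np

    def equilateral_bispectrum(field, k_mag, dk=0.05):
        n = field.shape[0]
        kf = np.fft.fftfreq(n)                       # grid frequencies (box units)
        kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
        kk = np.sqrt(kx**2 + ky**2 + kz**2)
        shell = (np.abs(kk - k_mag) < dk).astype(float)

        fk = np.fft.fftn(field)
        filtered = np.fft.ifftn(fk * shell).real     # field restricted to the shell
        counts = np.fft.ifftn(shell).real            # same operation on a unit field

        num = np.mean(filtered ** 3)                 # sums over closed k1+k2+k3=0 triangles
        den = np.mean(counts ** 3)                   # number of such triangles
        return num / den

    rng = np.random.default_rng(5)
    field = rng.standard_normal((64, 64, 64))        # Gaussian field: bispectrum ~ 0
    print(f"B_eq ~ {equilateral_bispectrum(field, k_mag=0.2):.3e}")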

  17. Quantifying uncertainties of climate signals related to the 11-year solar cycle

    Science.gov (United States)

    Kruschke, T.; Kunze, M.; Matthes, K. B.; Langematz, U.; Wahl, S.

    2017-12-01

    Although state-of-the-art reconstructions based on proxies and (semi-)empirical models converge in terms of total solar irradiance, they still significantly differ in terms of spectral solar irradiance (SSI) with respect to the mean spectral distribution of energy input and temporal variability. This study aims at quantifying uncertainties for the Earth's climate related to the 11-year solar cycle by forcing two chemistry-climate models (CCMs) - CESM1(WACCM) and EMAC - with five different SSI reconstructions (NRLSSI1, NRLSSI2, SATIRE-T, SATIRE-S, CMIP6-SSI) and the reference spectrum RSSV1-ATLAS3, derived from observations. We conduct a unique set of timeslice experiments. External forcings and boundary conditions are fixed and identical for all experiments, except for the solar forcing. The set of analyzed simulations consists of one solar minimum simulation, employing RSSV1-ATLAS3, and five solar maximum experiments. The latter are a result of adding the amplitude of solar cycle 22 according to the five reconstructions to RSSV1-ATLAS3. Our results show that the climate response to the 11-year solar cycle is generally robust across CCMs and SSI forcings. However, analyzing the variance of the solar maximum ensemble by means of ANOVA-statistics reveals additional information on the uncertainties of the mean climate signals. The annual mean response agrees very well between the two CCMs for most parts of the lower and middle atmosphere. Only the upper mesosphere is subject to significant differences related to the choice of the model. However, the different SSI forcings lead to significant differences in ozone concentrations, shortwave heating rates, and temperature throughout large parts of the mesosphere and upper stratosphere. Regarding the seasonal evolution of the climate signals, our findings for short wave heating rates and temperature are similar to the annual means with respect to the relative importance of the choice of the model or the SSI forcing for the

  18. Frontal Parietal Control Network Regulates the Anti-Correlated Default and Dorsal Attention Networks

    OpenAIRE

    Gao, Wei; Lin, Weili

    2011-01-01

    Recent reports demonstrate the anti-correlated behaviors between the default and the dorsal attention (DA) networks. We aimed to investigate the roles of the frontal parietal control (FPC) network in regulating the two anti-correlated networks through three experimental conditions, including resting, continuous self-paced/attended sequential finger tapping (FT), and natural movie watching (MW), respectively. The two goal-directed tasks were chosen to engage either one of the two competing net...

  19. Decoherence-induced transition from photon correlation to anti-correlation

    International Nuclear Information System (INIS)

    Xu, Q

    2014-01-01

    Decoherence tends to induce the quantum-to-classical transition, which leads to a crucial obstacle in the realization of reliable quantum information processing. Counterintuitively, we propose that the decoherence due to phase decay brings about the switch from photon correlation to anti-correlation. Stronger decoherence also gives rise to an enhancement of the transition from photon correlation to anti-correlation. This breaks the conventional correlation of strong decoherence with fast decorrelation. (letters)

  20. Signatures of Quantum Transport Through Two-Dimensional Structures With Correlated and Anti-Correlated Interfaces

    OpenAIRE

    Low, Tony; Ansari, Davood

    2008-01-01

    Electronic transport through a 2D deca-nanometer length channel with correlated and anti-correlated surfaces morphologies is studied using the Keldysh non-equilibrium Green function technique. Due to the pseudo-periodicity of these structures, the energy-resolved transmission possesses pseudo-bands and pseudo-gaps. Channels with correlated surfaces exhibit wider pseudo-bands than their anti-correlated counterparts. By surveying channels with various combinations of material parameters, we fou...

  1. Reversed stereo depth and motion direction with anti-correlated stimuli.

    Science.gov (United States)

    Read, J C; Eagle, R A

    2000-01-01

    We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1d band-pass noise, we found weak reversed depth and motion. With 2d anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientation. We suggest that a key difference between the solution of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.

  2. Toward quantifying metrics for rail-system resilience : Identification and analysis of performance weak resilience signals

    NARCIS (Netherlands)

    Regt, A. de; Siegel, A.W.; Schraagen, J.M.C.

    2016-01-01

    This paper aims to enhance tangibility of the resilience engineering concept by facilitating understanding and operationalization of weak resilience signals (WRSs) in the rail sector. Within complex socio-technical systems, accidents can be seen as unwanted outcomes emerging from uncontrolled

  3. Effects of Perfectly Correlated and Anti-Correlated Noise in a Logistic Growth Model

    International Nuclear Information System (INIS)

    Zhang Li; Cao Li

    2011-01-01

    The logistic growth model with correlated additive and multiplicative Gaussian white noise is used to analyze tumor cell population. The effects of perfectly correlated and anti-correlated noise on the stationary properties of the tumor cell population are studied. Because in both cases the diffusion coefficient has a zero point in the real number field, the system exhibits some special features. It is found that in both cases the increase of the multiplicative noise intensity causes tumor cell extinction. In the perfectly anti-correlated case, the stationary probability distribution as a function of tumor cell population exhibits two extrema. (general)

  4. Simultaneous stability and sensitivity in model cortical networks is achieved through anti-correlations between the in- and out-degree of connectivity

    Directory of Open Access Journals (Sweden)

    Juan Carlos Vasquez

    2013-11-01

    Neuronal networks in rodent barrel cortex are characterized by stable low baseline firing rates. However, they are sensitive to the action potentials of single neurons as suggested by recent single-cell stimulation experiments that report quantifiable behavioral responses to short spike trains elicited in single neurons. Hence, these networks are stable against internally generated fluctuations in firing rate but at the same time remain sensitive to similarly-sized externally induced perturbations. We investigated stability and sensitivity in a simple recurrent network of stochastic binary neurons and determined numerically the effects of correlation between the number of afferent (‘in-degree’) and efferent (‘out-degree’) connections in neurons. The key advance reported in this work is that anti-correlation between in-/out-degree distributions increased the stability of the network in comparison to networks with no correlation or positive correlations, while being able to achieve the same level of sensitivity. The experimental characterization of degree distributions is difficult because all presynaptic and postsynaptic neurons have to be identified and counted. We explored whether the statistics of network motifs, which requires the characterization of connections between small subsets of neurons, could be used to detect evidence for degree anti-correlations. We find that the sample frequency of the 3-neuron ‘ring’ motif (1→2→3→1) can be used to detect degree anti-correlation for sub-networks of size 30 using about 50 samples, which is of significance because the necessary measurements are achievable experimentally in the near future. Taken together, we hypothesize that barrel cortex networks exhibit degree anti-correlations and specific network motif statistics.
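
    The ring-motif statistic mentioned above can be counted directly from a directed adjacency matrix, since every directed 3-cycle contributes three closed length-3 walks to trace(A³). The sketch below does this for a small random network whose in- and out-degree propensities are anti-correlated; the network generator is a simple stand-in, not the authors' model.

    import numpy as np

    def count_ring_motifs(A):
        """Number of directed 3-cycles; each cycle adds 3 closed walks to trace(A^3)."""
        A = (A > 0).astype(int)
        np.fill_diagonal(A, 0)
        return int(np.trace(A @ A @ A) // 3)

    rng = np.random.default_rng(6)
    n, p = 30, 0.2
    # Anti-correlated degrees (stand-in): neurons with many inputs tend to have few outputs.
    in_pref = rng.uniform(0.5, 1.5, n)
    out_pref = 2.0 - in_pref
    P = np.outer(out_pref, in_pref) * p              # P[i, j]: probability of edge i -> j
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    np.fill_diagonal(A, 0)
    print("ring motifs in a 30-neuron sample:", count_ring_motifs(A))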

  5. Cognitive reserve moderates the association between functional network anti-correlations and memory in MCI.

    Science.gov (United States)

    Franzmeier, Nicolai; Buerger, Katharina; Teipel, Stefan; Stern, Yaakov; Dichgans, Martin; Ewers, Michael

    2017-02-01

    Cognitive reserve (CR) shows protective effects on cognitive function in older adults. Here, we focused on the effects of CR at the functional network level. We assessed in patients with amnestic mild cognitive impairment (aMCI) whether higher CR moderates the association between low internetwork cross-talk and memory performance. In 2 independent aMCI samples (n = 76 and 93) and healthy controls (HC, n = 36), CR was assessed via years of education and intelligence (IQ). We focused on the anti-correlation between the dorsal attention network (DAN) and an anterior and posterior default mode network (DMN), assessed via sliding time window analysis of resting-state functional magnetic resonance imaging (fMRI). The DMN-DAN anti-correlation was numerically but not significantly lower in aMCI compared to HC. However, in aMCI, lower anterior DMN-DAN anti-correlation was associated with lower memory performance. This association was moderated by CR proxies, where the association between the internetwork anti-correlation and memory performance was alleviated at higher levels of education or IQ. In conclusion, lower DAN-DMN cross-talk is associated with lower memory in aMCI, where such effects are buffered by higher CR. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Influence of meditation on anti-correlated networks in the brain

    Directory of Open Access Journals (Sweden)

    Zoran eJosipovic

    2012-01-01

    Human experiences can be broadly divided into those that are external and related to interaction with the environment, and experiences that are internal and self-related. The cerebral cortex appears to be divided into two corresponding systems: an extrinsic system composed of brain areas that respond more to external stimuli and tasks and an intrinsic system composed of brain areas that respond less to external stimuli and tasks. These two broad brain systems seem to compete with each other, such that their activity levels over time are usually anti-correlated, even when subjects are at rest and not performing any task. This study used meditation as an experimental manipulation to test whether this competition (anti-correlation) can be modulated by cognitive strategy. Participants either fixated without meditation (fixation), or engaged in nondual awareness (NDA) or focused attention (FA) meditations. We computed inter-area correlations (functional connectivity) between pairs of brain regions within each system, and between the entire extrinsic and intrinsic systems. Anti-correlation between extrinsic vs. intrinsic systems was stronger during FA meditation and weaker during NDA meditation in comparison to fixation (without meditation). However, correlation between areas within each system did not change across conditions. These results suggest that the anti-correlation found between extrinsic and intrinsic systems is not an immutable property of brain organization and that practicing different forms of meditation can modulate this gross functional organization in profoundly different ways.

  7. Influence of meditation on anti-correlated networks in the brain.

    Science.gov (United States)

    Josipovic, Zoran; Dinstein, Ilan; Weber, Jochen; Heeger, David J

    2011-01-01

    Human experiences can be broadly divided into those that are external and related to interaction with the environment, and experiences that are internal and self-related. The cerebral cortex appears to be divided into two corresponding systems: an "extrinsic" system composed of brain areas that respond more to external stimuli and tasks and an "intrinsic" system composed of brain areas that respond less to external stimuli and tasks. These two broad brain systems seem to compete with each other, such that their activity levels over time are usually anti-correlated, even when subjects are "at rest" and not performing any task. This study used meditation as an experimental manipulation to test whether this competition (anti-correlation) can be modulated by cognitive strategy. Participants either fixated without meditation (fixation), or engaged in non-dual awareness (NDA) or focused attention (FA) meditations. We computed inter-area correlations ("functional connectivity") between pairs of brain regions within each system, and between the entire extrinsic and intrinsic systems. Anti-correlation between extrinsic vs. intrinsic systems was stronger during FA meditation and weaker during NDA meditation in comparison to fixation (without meditation). However, correlation between areas within each system did not change across conditions. These results suggest that the anti-correlation found between extrinsic and intrinsic systems is not an immutable property of brain organization and that practicing different forms of meditation can modulate this gross functional organization in profoundly different ways.

  8. Frontal parietal control network regulates the anti-correlated default and dorsal attention networks.

    Science.gov (United States)

    Gao, Wei; Lin, Weili

    2012-01-01

    Recent reports demonstrate the anti-correlated behaviors between the default (DF) and the dorsal attention (DA) networks. We aimed to investigate the roles of the frontal parietal control (FPC) network in regulating the two anti-correlated networks through three experimental conditions, including resting, continuous self-paced/attended sequential finger tapping (FT), and natural movie watching (MW), respectively. The two goal-directed tasks were chosen to engage either one of the two competing networks: FT for DA and MW for the default network. We hypothesized that FPC will selectively augment/suppress either network depending on how the task targets the specific network; FPC will positively correlate with the target network, but negatively correlate with the network anti-correlated with the target network. We further hypothesized that significant causal links from FPC to both DA and DF are present during all three experimental conditions, supporting the initiative regulating role of FPC over the two opposing systems. Consistent with our hypotheses, FPC exhibited a significantly higher positive correlation with DA (P = 0.0095) whereas significantly more negative correlation with default (P = 0.0025) during FT when compared to resting. Completely opposite to that observed during FT, the FPC was significantly anti-correlated with DA (P = 2.1e-6) whereas positively correlated with default (P = 0.0035) during MW. Furthermore, extensive causal links from FPC to both DA and DF were observed across all three experimental states. Together, our results strongly support the notion that the FPC regulates the anti-correlated default and DA networks. Copyright © 2011 Wiley Periodicals, Inc.

  9. A method for quantifying intervertebral disc signal intensity on T2-weighted imaging

    International Nuclear Information System (INIS)

    Nagashima, Masaki; Abe, Hitoshi; Amaya, Kenji; Matsumoto, Hideo; Yanaihara, Hisashi; Nishiwaki, Yuji; Toyama, Yoshiaki; Matsumoto, Morio

    2012-01-01

    Background: Quantification of intervertebral disc degeneration based on intensity of the nucleus pulposus in magnetic resonance imaging (MRI) often uses the mean intensity of the region of interest (ROI) within the nucleus pulposus. However, the location and size of ROI have varied in different reports, and none of the reported methods can be considered fully objective. Purpose: To develop a more objective method of establishing ROIs for quantitative evaluation of signal intensity in the nucleus pulposus using T2-weighted MRI. Material and Methods: A 1.5-T scanner was used to obtain T2-weighted mid-sagittal images. A total of 288 intervertebral discs from 48 patients (25 men, 23 women) were analyzed. Mean age was 47.4 years (range, 17-69 years). All discs were classified into five grades according to Pfirrmann et al. Discs in grades I and II were defined as bright discs, and discs in grades IV and V were defined as dark discs. Eight candidate methods of ROI determination were devised. The method offering the highest degree of discrimination between bright and dark discs was investigated among these eight methods. Results: The method with the greatest degree of discrimination was as follows. The quadrangle formed by anterior and posterior edges of the upper and lower end plates in contact with the intervertebral disc to be measured was defined as the intervertebral area. A shape similar to the intervertebral area but with one-quarter the area was drawn. The geometrical center of the shape was matched to the center of intensity, and this shape was then used as the ROI. Satisfactory validity and reproducibility were obtained using this method. Conclusion: The present method offers adequate discrimination and could be useful for longitudinal tracking of intervertebral disc degeneration with sufficient reproducibility.
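
    A geometric sketch of the winning ROI rule, under assumed inputs: the end-plate quadrangle is supplied as four corner coordinates, shrunk to one quarter of its area (linear scale 0.5), and centred on the intensity-weighted centroid of the disc before the mean T2 signal inside it is taken. The synthetic image and corner coordinates are placeholders, not data from the study.

    import numpy as np
    from matplotlib.path import Path

    def disc_roi_mean(image, corners):
        """image: 2-D T2 intensity array; corners: 4x2 array of quadrangle vertices (x, y)."""
        yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        pts = np.column_stack([xx.ravel(), yy.ravel()])

        quad = Path(corners)
        inside = quad.contains_points(pts).reshape(image.shape)

        # Intensity-weighted centroid ("center of intensity") within the quadrangle.
        w = image * inside
        cy = (yy * w).sum() / w.sum()
        cx = (xx * w).sum() / w.sum()

        # Similar shape with one quarter of the area: scale the quadrangle by 0.5 about
        # its own centroid, then translate it onto the center of intensity.
        centroid = corners.mean(axis=0)
        small = (corners - centroid) * 0.5 + np.array([cx, cy])
        roi = Path(small).contains_points(pts).reshape(image.shape)
        return image[roi].mean()

    # Toy example: a synthetic 60x60 "disc" region with assumed corner coordinates.
    img = np.random.default_rng(7).uniform(100, 200, (60, 60))
    corners = np.array([[10, 20], [50, 22], [50, 40], [10, 38]], float)
    print(f"mean ROI intensity: {disc_roi_mean(img, corners):.1f}")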

  10. A method for quantifying intervertebral disc signal intensity on T2-weighted imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nagashima, Masaki [Dept. of Orthopaedic Surgery, Keio Univ. School of Medicine, Tokyo (Japan); Dept. of Orthopaedic Surgery, Kitasato Univ. Kitasato Inst. Hospital, Tokyo (Japan); Abe, Hitoshi [Dept. of Orthopaedic Surgery, Kitasato Univ. Kitasato Inst. Hospital, Tokyo (Japan)], E-mail: hit-abe@insti.kitasato-u.ac.jp; Amaya, Kenji [Graduate School of Information Science and Engineering, Tokyo Inst. of Technology, Tokyo (Japan); Matsumoto, Hideo [Inst. for Integrated Sports Medicine, Keio Univ. School of Medicine, Tokyo (Japan); Yanaihara, Hisashi [Dept. of Diagnostic Radiology, Kitasato Univ. Kitasato Inst. Hospital, Tokyo (Japan); Nishiwaki, Yuji [Dept. of Environmental and Occupational Health, Toho Univ. School of Medicine, Tokyo (Japan); Toyama, Yoshiaki; Matsumoto, Morio [Dept. of Orthopaedic Surgery, Keio Univ. School of Medicine, Tokyo (Japan)

    2012-11-15

    Background: Quantification of intervertebral disc degeneration based on intensity of the nucleus pulposus in magnetic resonance imaging (MRI) often uses the mean intensity of the region of interest (ROI) within the nucleus pulposus. However, the location and size of ROI have varied in different reports, and none of the reported methods can be considered fully objective. Purpose: To develop a more objective method of establishing ROIs for quantitative evaluation of signal intensity in the nucleus pulposus using T2-weighted MRI. Material and Methods: A 1.5-T scanner was used to obtain T2-weighted mid-sagittal images. A total of 288 intervertebral discs from 48 patients (25 men, 23 women) were analyzed. Mean age was 47.4 years (range, 17-69 years). All discs were classified into five grades according to Pfirrmann et al. Discs in grades I and II were defined as bright discs, and discs in grades IV and V were defined as dark discs. Eight candidate methods of ROI determination were devised. The method offering the highest degree of discrimination between bright and dark discs was investigated among these eight methods. Results: The method with the greatest degree of discrimination was as follows. The quadrangle formed by anterior and posterior edges of the upper and lower end plates in contact with the intervertebral disc to be measured was defined as the intervertebral area. A shape similar to the intervertebral area but with one-quarter the area was drawn. The geometrical center of the shape was matched to the center of intensity, and this shape was then used as the ROI. Satisfactory validity and reproducibility were obtained using this method. Conclusion: The present method offers adequate discrimination and could be useful for longitudinal tracking of intervertebral disc degeneration with sufficient reproducibility.

  11. Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.

    Science.gov (United States)

    Kahl, Lorenz; Hofmann, Ulrich G

    2016-11-01

    This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was recorded from upper-arm contractions at three different load levels in twelve volunteers. The fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared with respect to the disturbances present in fatiguing situations and to their ability to differentiate the load levels. Furthermore, we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small number of subjects in this study, significant differences could not be found. In terms of disturbances the SMR algorithm showed a slight tendency to outperform the others. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
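
    Two of the spectral indices named above can be computed from a windowed sEMG segment as in the sketch below: the mean frequency (MNF) is the first spectral moment normalized by total power, and a spectral moments ratio is formed from moments of order -1 and 5. The specific moment orders used by the paper's SMR index are an assumption here, as are the sampling rate and the white-noise stand-in segment.

    import numpy as np

    def spectral_moment(freqs, psd, order):
        return np.sum(psd * freqs ** order)

    def fatigue_indices(emg, fs):
        freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(emg - emg.mean())) ** 2
        mnf = spectral_moment(freqs, psd, 1) / spectral_moment(freqs, psd, 0)
        # Skip the zero-frequency bin so the order -1 moment is finite.
        smr = spectral_moment(freqs[1:], psd[1:], -1) / spectral_moment(freqs[1:], psd[1:], 5)
        return mnf, smr

    rng = np.random.default_rng(8)
    fs = 1000.0                                  # sampling rate in Hz (assumed)
    emg = rng.standard_normal(2048)              # stand-in for a ~2-s sEMG window
    mnf, smr = fatigue_indices(emg, fs)
    print(f"MNF = {mnf:.1f} Hz, SMR = {smr:.3e}")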

  12. Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.

    Science.gov (United States)

    Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F

    2018-01-12

    In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.

  13. Anti-correlated spectral motion in bisphthalocyanines: evidence for vibrational modulation of electronic mixing.

    Science.gov (United States)

    Prall, Bradley S; Parkinson, Dilworth Y; Ishikawa, Naoto; Fleming, Graham R

    2005-12-08

    We exploit a coherently excited nuclear wave packet to study nuclear motion modulation of electronic structure in a metal bridged phthalocyanine dimer, lutetium bisphthalocyanine, which displays two visible absorption bands. We find that the nuclear coordinate influences the energies of the underlying exciton and charge resonance states as well as their interaction; the interplay of the various couplings creates unusual anti-correlated spectral motion in the two bands. Excited state relaxation dynamics are the same regardless of which transition is pumped, with decay time constants of 1.5 and 11 ps. The dynamics are analyzed using a three-state kinetic model after relaxation from one or two additional states faster than the experimental time resolution of 50-100 fs.

  14. Exploring anti-correlated radio/X-ray modes in transitional millisecond pulsars

    Science.gov (United States)

    Jaodand, Amruta

    2017-09-01

    Recently, using coordinated VLA+Chandra observations, Bogdanov et al. (2017) have uncovered a stunning anti-correlation in the LMXB state of the tMSP PSR J1023+0038. They see that radio luminosity consistently peaks during the X-ray `low' luminosity modes. Also, we have found a promising candidate tMSP, 3FGL J1544-1125 (J1544) (Bogdanov and Halpern 2015; currently the only tMSP candidate apart from J1023 in a persistent LMXB state). Using VLA and simultaneous Swift observations we see that it lies on the proposed tMSP track in the radio vs. X-ray luminosity (L_R/L_X) diagram. This finding strengthens its classification as a tMSP and provides an excellent opportunity to a) determine the universality of the radio/X-ray brightness anti-correlation and b) understand jet/outflow formation in tMSPs.

  15. Anti-Correlated Cerebrospinal Fluid Biomarker Trajectories in Preclinical Alzheimer's Disease.

    Science.gov (United States)

    Gomar, Jesus J; Conejero-Goldberg, Concepcion; Davies, Peter; Goldberg, Terry E

    2016-01-01

    The earliest stage of preclinical Alzheimer's disease (AD) is defined by low levels of cerebrospinal fluid (CSF) amyloid-β (Aβ42). However, covariance in longitudinal dynamic change of Aβ42 and tau in incipient preclinical AD is poorly understood. Our aim was to examine dynamic interrelationships between Aβ42 and tau in preclinical AD. We followed 47 cognitively intact participants (CI) with available CSF data over four years in ADNI. Based on longitudinal Aβ42 levels in CSF, CI were classified into three groups: 1) Aβ42 stable with normal levels of Aβ42 over time (n = 15); 2) Aβ42 declining with normal Aβ42 levels at baseline but showing decline over time (n = 14); and 3) Aβ42 levels consistently abnormal (n = 18). In the Aβ42 declining group, suggestive of incipient preclinical AD, CSF phosphorylated tau (p-tau) showed a similar longitudinal pattern of increasing abnormality over time (p = 0.0001). Correlation between longitudinal slopes of Aβ42 and p-tau confirmed that both trajectories were anti-correlated (rho = -0.60; p = 0.02). Regression analysis showed that Aβ42 slope (decreasing Aβ42) predicted p-tau slope (increasing p-tau) (R2 = 0.47, p = 0.03). Atrophy in the hippocampus was predicted by the interaction of Aβ42 and p-tau slopes. The two biomarkers thus followed an anti-correlated trajectory, i.e., as Aβ42 declined, p-tau increased, which is suggestive of strong temporal coincidence. Rapid pathogenic cross-talk between Aβ42 and p-tau thus may be evident in very early stages of preclinical AD.

  16. Acupuncture mobilizes the brain's default mode and its anti-correlated network in healthy subjects.

    Science.gov (United States)

    Hui, Kathleen K S; Marina, Ovidiu; Claunch, Joshua D; Nixon, Erika E; Fang, Jiliang; Liu, Jing; Li, Ming; Napadow, Vitaly; Vangel, Mark; Makris, Nikos; Chan, Suk-Tak; Kwong, Kenneth K; Rosen, Bruce R

    2009-09-01

    Previous work has shown that acupuncture stimulation evokes deactivation of a limbic-paralimbic-neocortical network (LPNN) as well as activation of somatosensory brain regions. This study explores the activity and functional connectivity of these regions during acupuncture vs. tactile stimulation and vs. acupuncture associated with inadvertent sharp pain. Acupuncture (201 scans) and, for comparison, tactile stimulation (74 scans) at acupoints LI4, ST36 and LV3 were monitored with fMRI and psychophysical responses in 48 healthy subjects. Clusters of deactivated regions in the medial prefrontal, medial parietal and medial temporal lobes as well as activated regions in the sensorimotor and a few paralimbic structures can be identified during acupuncture by general linear model analysis and seed-based cross correlation analysis. Importantly, these clusters showed virtual identity with the default mode network and the anti-correlated task-positive network in response to stimulation. In addition, the amygdala and hypothalamus, structures not routinely reported in the default mode literature, were frequently involved in acupuncture. When acupuncture induced sharp pain, the deactivation was attenuated or became activated instead. Tactile stimulation induced greater activation of the somatosensory regions but less extensive deactivation of the LPNN. These results indicate that the deactivation of the LPNN during acupuncture cannot be completely explained by the demand of attention that is commonly proposed in the default mode literature. Our results suggest that acupuncture mobilizes the anti-correlated functional networks of the brain to mediate its actions, and that the effect is dependent on the psychophysical response.

  17. Anti-correlated Soft Lags in the Intermediate State of Black Hole Source GX 339-4

    OpenAIRE

    Sriram, K.; Rao, A. R.; Choi, C. S.

    2010-01-01

    We report anti-correlated soft lags of a few hundred seconds between the soft and hard energy bands in the source GX 339-4 using RXTE observations. In one observation, anti-correlated soft lags were observed using the ISGRI/INTEGRAL hard energy band and the PCA/RXTE soft energy band light curves. The lags were observed when the source was in hard and soft intermediate states, i.e., in a steep power-law state. We found that the temporal and spectral properties were changed during the lag timescale. T...

  18. Global and system-specific resting-state fMRI fluctuations are uncorrelated: principal component analysis reveals anti-correlated networks.

    Science.gov (United States)

    Carbonell, Felix; Bellec, Pierre; Shmuel, Amir

    2011-01-01

    The influence of the global average signal (GAS) on functional-magnetic resonance imaging (fMRI)-based resting-state functional connectivity is a matter of ongoing debate. The global average fluctuations increase the correlation between functional systems beyond the correlation that reflects their specific functional connectivity. Hence, removal of the GAS is a common practice for facilitating the observation of network-specific functional connectivity. This strategy relies on the implicit assumption of a linear-additive model according to which global fluctuations, irrespective of their origin, and network-specific fluctuations are super-positioned. However, removal of the GAS introduces spurious negative correlations between functional systems, bringing into question the validity of previous findings of negative correlations between fluctuations in the default-mode and the task-positive networks. Here we present an alternative method for estimating global fluctuations, immune to the complications associated with the GAS. Principal components analysis was applied to resting-state fMRI time-series. A global-signal effect estimator was defined as the principal component (PC) that correlated best with the GAS. The mean correlation coefficient between our proposed PC-based global effect estimator and the GAS was 0.97±0.05, demonstrating that our estimator successfully approximated the GAS. In 66 out of 68 runs, the PC that showed the highest correlation with the GAS was the first PC. Since PCs are orthogonal, our method provides an estimator of the global fluctuations, which is uncorrelated to the remaining, network-specific fluctuations. Moreover, unlike the regression of the GAS, the regression of the PC-based global effect estimator does not introduce spurious anti-correlations beyond the decrease in seed-based correlation values allowed by the assumed additive model. After regressing this PC-based estimator out of the original time-series, we observed robust anti-correlations
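
    A minimal sketch of the PC-based estimator described above: run a PCA (via SVD) on the voxel time series, pick the principal component most correlated with the global average signal, and regress that component out. The simulated data and the simple regression step are illustrative assumptions, not the authors' pipeline.

    import numpy as np

    rng = np.random.default_rng(9)
    T, V = 240, 400
    shared = rng.standard_normal(T)
    data = 0.8 * shared[:, None] + rng.standard_normal((T, V))

    gas = data.mean(axis=1)                       # global average signal
    centered = data - data.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    pcs = U * S                                   # temporal principal components

    corrs = [np.corrcoef(pcs[:, i], gas)[0, 1] for i in range(pcs.shape[1])]
    best = int(np.argmax(np.abs(corrs)))
    print(f"PC {best} correlates with the GAS at r = {corrs[best]:.2f}")

    # Regress the selected component out of every voxel time course.
    g = pcs[:, best] - pcs[:, best].mean()
    beta = centered.T @ g / (g @ g)
    cleaned = centered - np.outer(g, beta)        # network-specific fluctuations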

  19. Musical Imagery Involves Wernicke's Area in Bilateral and Anti-Correlated Network Interactions in Musicians.

    Science.gov (United States)

    Zhang, Yizhen; Chen, Gang; Wen, Haiguang; Lu, Kun-Han; Liu, Zhongming

    2017-12-06

    Musical imagery is the human experience of imagining music without actually hearing it. The neural basis of this mental ability is unclear, especially for musicians capable of engaging in accurate and vivid musical imagery. Here, we created a visualization of an 8-minute symphony as a silent movie and used it as real-time cue for musicians to continuously imagine the music for repeated and synchronized sessions during functional magnetic resonance imaging (fMRI). The activations and networks evoked by musical imagery were compared with those elicited by the subjects directly listening to the same music. Musical imagery and musical perception resulted in overlapping activations at the anterolateral belt and Wernicke's area, where the responses were correlated with the auditory features of the music. Whereas Wernicke's area interacted within the intrinsic auditory network during musical perception, it was involved in much more complex networks during musical imagery, showing positive correlations with the dorsal attention network and the motor-control network and negative correlations with the default-mode network. Our results highlight the important role of Wernicke's area in forming vivid musical imagery through bilateral and anti-correlated network interactions, challenging the conventional view of segregated and lateralized processing of music versus language.

  20. Dynamic brain glucose metabolism identifies anti-correlated cortical-cerebellar networks at rest.

    Science.gov (United States)

    Tomasi, Dardo G; Shokri-Kojori, Ehsan; Wiers, Corinde E; Kim, Sunny W; Demiral, Şukru B; Cabrera, Elizabeth A; Lindgren, Elsa; Miller, Gregg; Wang, Gene-Jack; Volkow, Nora D

    2017-12-01

    It remains unclear whether resting state functional magnetic resonance imaging (rfMRI) networks are associated with underlying synchrony in energy demand, as measured by dynamic 2-deoxy-2-[18F]fluoroglucose (FDG) positron emission tomography (PET). We measured absolute glucose metabolism, temporal metabolic connectivity (t-MC) and rfMRI patterns in 53 healthy participants at rest. Twenty-two rfMRI networks emerged from group independent component analysis (gICA). In contrast, only two anti-correlated t-MC emerged from FDG-PET time series using gICA or seed-voxel correlations; one included frontal, parietal and temporal cortices, the other included the cerebellum and medial temporal regions. Whereas cerebellum, thalamus, globus pallidus and calcarine cortex arose as the strongest t-MC hubs, the precuneus and visual cortex arose as the strongest rfMRI hubs. The strength of the t-MC linearly increased with the metabolic rate of glucose suggesting that t-MC measures are strongly associated with the energy demand of the brain tissue, and could reflect regional differences in glucose metabolism, counterbalanced metabolic network demand, and/or differential time-varying delivery of FDG. The mismatch between metabolic and functional connectivity patterns computed as a function of time could reflect differences in the temporal characteristics of glucose metabolism as measured with PET-FDG and brain activation as measured with rfMRI.

  1. Development and Validation of a Novel Dual Luciferase Reporter Gene Assay to Quantify Ebola Virus VP24 Inhibition of IFN Signaling

    Directory of Open Access Journals (Sweden)

    Elisa Fanunza

    2018-02-01

    Full Text Available The interferon (IFN) system is the first line of defense against viral infections. Evasion of IFN signaling by Ebola viral protein 24 (VP24) is a critical event in the pathogenesis of the infection and, hence, VP24 is a potential target for drug development. Since no drugs target VP24, the identification of molecules able to inhibit VP24, restoring and possibly enhancing the IFN response, is an important goal. Accordingly, we developed a dual signal firefly and Renilla luciferase cell-based drug screening assay able to quantify IFN-mediated induction of Interferon Stimulated Genes (ISGs) and its inhibition by VP24. Human Embryonic Kidney 293T (HEK293T) cells were transiently transfected with a luciferase reporter gene construct driven by the promoter of ISGs, the Interferon-Stimulated Response Element (ISRE). Stimulation of cells with IFN-α activated the IFN cascade, leading to ISRE-driven reporter expression. Cotransfection of cells with a plasmid expressing VP24 cloned from a virus isolated during the 2014 outbreak led to the inhibition of ISRE transcription, quantified by a luminescent signal. To adapt this system to test a large number of compounds, we performed it in 96-well plates; optimized the assay by analyzing different parameters; and validated the system by calculating the Z′- and Z-factor, which showed values of 0.62 and 0.53 for the IFN-α stimulation assay and the VP24 inhibition assay, respectively, indicative of robust assay performance.
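
    The Z′-factor quoted in the record above is a standard assay-robustness statistic (Zhang, Chung & Oldenburg, 1999): Z' = 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|. The sketch below computes it from replicate control-well readings; the luminescence values are illustrative and are not data from the study.

```python
import numpy as np

def z_prime(positive, negative):
    """Z'-factor (Zhang, Chung & Oldenburg, 1999):
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 are conventionally taken to indicate a robust assay."""
    p, n = np.asarray(positive, float), np.asarray(negative, float)
    return 1.0 - 3.0 * (p.std(ddof=1) + n.std(ddof=1)) / abs(p.mean() - n.mean())

# Illustrative luminescence readings (arbitrary units) from control wells
ifn_stimulated = [980, 1010, 995, 1005, 990, 1002]   # ISRE reporter + IFN-alpha
unstimulated = [102, 98, 105, 95, 100, 101]          # ISRE reporter, no IFN
print(round(z_prime(ifn_stimulated, unstimulated), 2))
```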

  2. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    Science.gov (United States)

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion to dissociate neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
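
    The three-step evaluation described above (BSS decomposition, ranking components by ECoG regression, canonical correlation between the top-ranked subset and the ECoG) could be sketched roughly as follows; FastICA stands in for the compared BSS algorithms, and all shapes and parameters are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import CCA

def evaluate_bss(eeg, ecog, n_top=10):
    """eeg: (n_samples, n_eeg_channels); ecog: (n_samples, n_ecog_channels).
    1. Separate the EEG into components with a BSS algorithm (FastICA here).
    2. Rank components by how well ECoG regression explains them (R^2).
    3. Estimate shared information between the top subset and the ECoG with CCA."""
    comps = FastICA(n_components=eeg.shape[1], random_state=0).fit_transform(eeg)
    r2 = [LinearRegression().fit(ecog, c).score(ecog, c) for c in comps.T]
    top = comps[:, np.argsort(r2)[::-1][:n_top]]
    cca = CCA(n_components=min(n_top, ecog.shape[1])).fit(top, ecog)
    u, v = cca.transform(top, ecog)
    return [float(np.corrcoef(u[:, k], v[:, k])[0, 1]) for k in range(u.shape[1])]
```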

  3. Depth perception not found in human observers for static or dynamic anti-correlated random dot stereograms.

    Directory of Open Access Journals (Sweden)

    Paul B Hibbard

    Full Text Available One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS. In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon.

  4. Depth perception not found in human observers for static or dynamic anti-correlated random dot stereograms.

    Science.gov (United States)

    Hibbard, Paul B; Scott-Brown, Kenneth C; Haigh, Emma C; Adrain, Melanie

    2014-01-01

    One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon.

  5. ANTI-CORRELATED SOFT LAGS IN THE INTERMEDIATE STATE OF BLACK HOLE SOURCE GX 339-4

    International Nuclear Information System (INIS)

    Sriram, K.; Choi, C. S.; Rao, A. R.

    2010-01-01

    We report anti-correlated soft lags of a few hundred seconds between the soft and hard energy bands in the source GX 339-4 using RXTE observations. In one observation, anti-correlated soft lags were observed using the ISGRI/INTEGRAL hard energy band and the PCA/RXTE soft energy band light curves. The lags were observed when the source was in the hard and soft intermediate states, i.e., in a steep power-law state. We found that the temporal and spectral properties changed during the lag timescale. The anti-correlated soft lags are associated with spectral variability during which the geometry of the accretion disk is changed. The observed temporal and spectral variations are explained using the framework of truncated disk geometry. We found that during the lag timescale, the centroid frequency of the quasi-periodic oscillation decreases, the soft flux decreases along with an increase in the hard flux, and the power-law index steepens together with a decrease in the disk normalization parameter. We argue that these changes could be explained if we assume that the hot corona condenses and forms a disk in the inner region of the accretion disk. The overall spectral and temporal changes support the truncated geometry of the accretion disk in the steep power-law state or in the intermediate state.

  6. Habitat-induced degradation of sound signals: Quantifying the effects of communication sounds and bird location on blur ratio, excess attenuation, and signal-to-noise ratio in blackbird song

    DEFF Research Database (Denmark)

    Dabelsteen, T.; Larsen, O N; Pedersen, Simon Boel

    1993-01-01

    The habitat-induced degradation of the full song of the blackbird (Turdus merula) was quantified by measuring excess attenuation, reduction of the signal-to-noise ratio, and blur ratio, the latter measure representing the degree of blurring of amplitude and frequency patterns over time. All three measures were calculated from changes of the amplitude functions (i.e., envelopes) of the degraded songs using a new technique which allowed a compensation for the contribution of the background noise to the amplitude values. Representative songs were broadcast in a deciduous forest without leaves...

  7. Anti-correlation between X-ray luminosity and pulsed fraction in the Small Magellanic Cloud pulsar SXP 1323

    Science.gov (United States)

    Yang, Jun; Zezas, Andreas; Coe, Malcolm J.; Drake, Jeremy J.; Hong, JaeSub; Laycock, Silas G. T.; Wik, Daniel R.

    2018-05-01

    We report evidence for an anti-correlation between pulsed fraction (PF) and luminosity of the X-ray pulsar SXP 1323, found for the first time in a luminosity range of 10^35-10^37 erg s^-1 from observations spanning 15 years. The phenomenon of a decrease in X-ray PF when the source flux increases has been observed in our pipeline analysis of other X-ray pulsars in the Small Magellanic Cloud (SMC). Below a certain luminosity, the PF is expected to decrease due to the propeller effect. Above the propeller region, an anti-correlation between the PF and flux might occur either as a result of an increase in the un-pulsed component of the total emission or a decrease of the pulsed component. Additional modes of accretion may also be possible, such as spherical accretion and a change in emission geometry. At higher mass accretion rates, the accretion disk could also extend closer to the neutron star (NS) surface, where a reduced inner radius leads to hotter inner disk emission. These modes of plasma accretion may affect the change in the beam configuration to fan-beam dominant emission.

  8. Quantifying the passive gamma signal from spent nuclear fuel in support of determining the plutonium content in spent nuclear fuel with nondestructive assay

    Energy Technology Data Exchange (ETDEWEB)

    Fensin, Michael L [Los Alamos National Laboratory; Tobin, Steven J [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory

    2009-01-01

    The objective of safeguarding nuclear material is to deter diversions of significant quantities of nuclear materials by timely monitoring and detection. There are a variety of motivations for quantifying plutonium in spent fuel (SF), by means of nondestructive assay (NDA), in order to meet this goal. These motivations include the following: strengthening the International Atomic Energy Agency's ability to safeguard nuclear facilities, shipper/receiver difference, input accountability at reprocessing facilities and burnup credit at repositories. Many NDA techniques exist for measuring signatures from SF; however, no single NDA technique can, in isolation, quantify elemental plutonium in SF. A study has been undertaken to determine the best integrated combination of 13 NDA techniques for characterizing Pu mass in spent fuel. This paper focuses on the development of a passive gamma measurement system in support of the spent fuel assay system. Gamma ray detection for fresh nuclear fuel focuses on gamma ray emissions that directly coincide with the actinides of interest to the assay. For example, the 186-keV gamma ray is generally used for 235U assay and the 384-keV complex is generally used for assaying plutonium. In spent nuclear fuel, these signatures cannot be detected as the Compton continuum created from the fission products dominates the signal in this energy range. For SF, the measured gamma signatures from key fission products (134Cs, 137Cs, 154Eu) are used to ascertain burnup, cooling time, and fissile content information. In this paper the Monte Carlo modeling set-up for a passive gamma spent fuel assay system will be described. The set-up of the system includes a germanium detector and an ion chamber and will be used to gain passive gamma information that will be integrated into a system for determining Pu in SF. The passive gamma signal will be determined from a library of ~100 assemblies that have been

  9. Quantifying Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Quantifying Matter explains how scientists learned to measure matter and quantify some of its most fascinating and useful properties. It presents many of the most important intellectual achievements and technical developments that led to the scientific interpretation of substance. Complete with full-color photographs, this exciting new volume describes the basic characteristics and properties of matter. Chapters include: Exploring the Nature of Matter; The Origin of Matter; The Search for Substance; Quantifying Matter During the Scientific Revolution; Understanding Matter's Electromagnet

  10. Anti-correlated X-ray and Radio Variability in the Transitional Millisecond Pulsar PSR J1023+0038

    Science.gov (United States)

    Bogdanov, Slavko; Deller, Adam; Miller-Jones, James; Archibald, Anne; Hessels, Jason W. T.; Jaodand, Amruta; Patruno, Alessandro; Bassa, Cees; D'Angelo, Caroline

    2018-01-01

    The PSR J1023+0038 binary system hosts a 1.69-ms neutron star and a low-mass, main-sequence-like star. The system underwent a transformation from a rotation-powered to a low-luminosity accreting state in 2013 June, in which it has remained since. We present an unprecedented set of strictly simultaneous Chandra X-ray Observatory and Karl G. Jansky Very Large Array observations, which for the first time reveal a highly reproducible, anti-correlated variability pattern. Rapid declines in X-ray flux are always accompanied by a radio brightening with duration that closely matches the low X-ray flux mode intervals. We discuss these findings in the context of accretion and jet outflow physics and their implications for using the radio/X-ray luminosity plane to distinguish low-luminosity candidate black hole binary systems from accreting transitional millisecond pulsars.

  11. Correlation and anti-correlation of the East Asian summer and winter monsoons during the last 21,000 years.

    Science.gov (United States)

    Wen, Xinyu; Liu, Zhengyu; Wang, Shaowu; Cheng, Jun; Zhu, Jiang

    2016-06-22

    Understanding the past significant changes of the East Asia Summer Monsoon (EASM) and Winter Monsoon (EAWM) is critical for improving the projections of future climate over East Asia. One key issue that has remained outstanding from the paleo-climatic records is whether the evolution of the EASM and EAWM are correlated. Here, using a set of long-term transient simulations of the climate evolution of the last 21,000 years, we show that the EASM and EAWM are positively correlated on the orbital timescale in response to the precessional forcing, but are anti-correlated on millennial timescales in response to North Atlantic melt water forcing. The relation between EASM and EAWM can differ dramatically for different timescales because of the different response mechanisms, highlighting the complex dynamics of the East Asian monsoon system and the challenges for future projection.

  12. Discriminative analysis of early Alzheimer's disease based on two intrinsically anti-correlated networks with resting-state fMRI.

    Science.gov (United States)

    Wang, Kun; Jiang, Tianzi; Liang, Meng; Wang, Liang; Tian, Lixia; Zhang, Xinqing; Li, Kuncheng; Liu, Zhening

    2006-01-01

    In this work, we proposed a discriminative model of Alzheimer's disease (AD) on the basis of multivariate pattern classification and functional magnetic resonance imaging (fMRI). This model used the correlation/anti-correlation coefficients of two intrinsically anti-correlated networks in resting brains, which have been suggested by two recent studies, as the feature of classification. Pseudo-Fisher Linear Discriminative Analysis (pFLDA) was then performed on the feature space and a linear classifier was generated. Using leave-one-out (LOO) cross validation, our results showed a correct classification rate of 83%. We also compared the proposed model with another one based on the whole brain functional connectivity. Our proposed model outperformed the other one significantly, and this implied that the two intrinsically anti-correlated networks may be a more susceptible part of the whole brain network in the early stage of AD.
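
    A rough sketch of the classification scheme described above, with scikit-learn's shrinkage LDA standing in for the pseudo-Fisher variant and leave-one-out cross-validation; the feature matrix (per-subject correlation/anti-correlation coefficients of the two networks) and the labels are assumed inputs, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(features, labels):
    """features: (n_subjects, n_features) correlation/anti-correlation coefficients
    of the two intrinsically anti-correlated networks; labels: 0 = control, 1 = AD.
    Returns the leave-one-out classification accuracy."""
    features, labels = np.asarray(features, float), np.asarray(labels)
    correct = 0
    for train, test in LeaveOneOut().split(features):
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        clf.fit(features[train], labels[train])
        correct += int(clf.predict(features[test])[0] == labels[test][0])
    return correct / len(labels)
```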

  13. Misfit-guided self-organization of anti-correlated Ge quantum dot arrays on Si nanowires

    Science.gov (United States)

    Kwon, Soonshin; Chen, Zack C.Y.; Kim, Ji-Hun; Xiang, Jie

    2012-01-01

    Misfit-strain guided growth of periodic quantum dot (QD) arrays in planar thin film epitaxy has been a popular nanostructure fabrication method. Engineering misfit-guided QD growth on a nanoscale substrate such as the small curvature surface of a nanowire represents a new approach to self-organized nanostructure preparation. Perhaps more profoundly, the periodic stress underlying each QD and the resulting modulation of electro-optical properties inside the nanowire backbone promise to provide a new platform for novel mechano-electronic, thermoelectronic, and optoelectronic devices. Herein, we report the first experimental demonstration of self-organized and self-limited growth of coherent, periodic Ge QDs on a one-dimensional Si nanowire substrate. Systematic characterizations reveal several distinctively different modes of Ge QD ordering on the Si nanowire substrate depending on the core diameter. In particular, Ge QD arrays on Si nanowires of around 20 nm diameter predominantly exhibit an anti-correlated pattern whose wavelength agrees with theoretical predictions. The anti-correlated pattern can be attributed to propagation and correlation of misfit strain across the diameter of the thin nanowire substrate. The QD array growth is self-limited as the wavelength of the QDs remains unchanged even after prolonged Ge deposition. Furthermore, we demonstrate a direct kinetic transformation from a uniform Ge shell layer to discrete QD arrays by a post-growth annealing process. PMID:22889063

  14. ON THE ANTI-CORRELATION BETWEEN SPECTRAL LINE BROADENING AND INTENSITY IN CORONAL STRUCTURES OBSERVED WITH EIS

    International Nuclear Information System (INIS)

    Scott, J. T.; Martens, P. C. H.

    2011-01-01

    The advance in spectral resolution of the Extreme-ultraviolet Imaging Spectrometer (EIS) on board Hinode has allowed for more detailed analysis of coronal spectral lines. Large line broadening and blueshifted velocities have been found in the periphery of active region (AR) cores and near the footpoints of coronal loops. This line broadening is yet to be understood. We study the correlation of intensity and line width for entire ARs and sub-regions selected to include coronal features. The results show that although a slight positive correlation can be found when considering whole images, many sub-regions have a negative correlation between intensity and line width. Sections of a coronal loop display some of the largest anti-correlations found in this study, with the increased line broadening occurring directly adjacent to the footpoint section of the loop structure, not at the footpoint itself. The broadened lines may be due to a second Doppler-shifted component that is separate from the main emitting feature, such as a coronal loop, but related in their excitation. The small size of these features forces consideration of investigator and instrumental effects. Preliminary analyses are shown that indicate the possibility of a point-spread function that is not azimuthally symmetric and may affect velocity and line profile measurements.

  15. Recent ice cap snowmelt in Russian High Arctic and anti-correlation with late summer sea ice extent

    International Nuclear Information System (INIS)

    Zhao, Meng; Ramage, Joan; Semmens, Kathryn; Obleitner, Friedrich

    2014-01-01

    Glacier surface melt dynamics throughout Novaya Zemlya (NovZ) and Severnaya Zemlya (SevZ) serve as a good indicator of ice mass ablation and regional climate change in the Russian High Arctic. Here we report trends of surface melt onset date (MOD) and total melt days (TMD) by combining multiple resolution-enhanced active and passive microwave satellite datasets and analyze the TMD correlations with local temperature and regional sea ice extent. The glacier surface snowpack on SevZ melted significantly earlier (−7.3 days/decade) from 1992 to 2012 and significantly longer (7.7 days/decade) from 1995 to 2011. NovZ experienced large interannual variability in MOD, but its annual mean TMD increased. The snowpack melt on NovZ is more sensitive to temperature fluctuations than SevZ in recent decades. After ruling out the regional temperature influence using partial correlation analysis, the TMD on both archipelagoes is statistically anti-correlated with regional late summer sea ice extent, linking land ice snowmelt dynamics to regional sea ice extent variations. (letter)

  16. Quantifying Transmission.

    Science.gov (United States)

    Woolhouse, Mark

    2017-07-01

    Transmissibility is the defining characteristic of infectious diseases. Quantifying transmission matters for understanding infectious disease epidemiology and designing evidence-based disease control programs. Tracing individual transmission events can be achieved by epidemiological investigation coupled with pathogen typing or genome sequencing. Individual infectiousness can be estimated by measuring pathogen loads, but few studies have directly estimated the ability of infected hosts to transmit to uninfected hosts. Individuals' opportunities to transmit infection are dependent on behavioral and other risk factors relevant given the transmission route of the pathogen concerned. Transmission at the population level can be quantified through knowledge of risk factors in the population or phylogeographic analysis of pathogen sequence data. Mathematical model-based approaches require estimation of the per capita transmission rate and basic reproduction number, obtained by fitting models to case data and/or analysis of pathogen sequence data. Heterogeneities in infectiousness, contact behavior, and susceptibility can have substantial effects on the epidemiology of an infectious disease, so estimates of only mean values may be insufficient. For some pathogens, super-shedders (infected individuals who are highly infectious) and super-spreaders (individuals with more opportunities to transmit infection) may be important. Future work on quantifying transmission should involve integrated analyses of multiple data sources.
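
    As a concrete example of the model-based approach mentioned above, the basic reproduction number can be approximated from the early exponential growth of case counts; the sketch below assumes a simple SIR-type relation, R0 ≈ 1 + r*D, and uses illustrative numbers rather than data from any study.

```python
import numpy as np

def r0_from_growth(daily_cases, infectious_period_days):
    """Estimate R0 from the early exponential growth phase of an epidemic curve.
    A log-linear fit gives the per-day growth rate r; for a simple SIR model with
    mean infectious period D, R0 is approximately 1 + r*D."""
    t = np.arange(len(daily_cases))
    r = np.polyfit(t, np.log(np.asarray(daily_cases, float)), 1)[0]
    return 1.0 + r * infectious_period_days

# Illustrative counts doubling roughly every five days, D assumed to be 7 days
print(round(r0_from_growth([2, 3, 4, 6, 8, 11, 16, 22, 31], 7), 2))
```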

  17. ANTI-CORRELATED TIME LAGS IN THE Z SOURCE GX 5-1: POSSIBLE EVIDENCE FOR A TRUNCATED ACCRETION DISK

    Energy Technology Data Exchange (ETDEWEB)

    Sriram, K.; Choi, C. S. [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Rao, A. R., E-mail: astrosriram@yahoo.co.in [Tata Institute of Fundamental Research, Mumbai 400005 (India)

    2012-06-01

    We investigate the nature of the inner accretion disk in the neutron star source GX 5-1 by making a detailed study of time lags between X-rays of different energies. Using the cross-correlation analysis, we found anti-correlated hard and soft time lags of the order of a few tens to a few hundred seconds and the corresponding intensity states were mostly the horizontal branch (HB) and upper normal branch. The model independent and dependent spectral analysis showed that during these time lags the structure of the accretion disk significantly varied. Both eastern and western approaches were used to unfold the X-ray continuum and systematic changes were observed in soft and hard spectral components. These changes along with a systematic shift in the frequency of quasi-periodic oscillations (QPOs) made it substantially evident that the geometry of the accretion disk is truncated. Simultaneous energy spectral and power density spectral study shows that the production of the horizontal branch oscillations (HBOs) is closely related to the Comptonizing region rather than the disk component in the accretion disk. We found that as the HBO frequency decreases from the hard apex to upper HB, the disk temperature increases along with an increase in the coronal temperature, which is in sharp contrast with the changes found in black hole binaries where the decrease in the QPO frequency is accompanied by a decrease in the disk temperature and a simultaneous increase in the coronal temperature. We discuss the results in the context of re-condensation of coronal material in the inner region of the disk.

  18. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall loudspeakers' perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  19. The default mode network and the working memory network are not anti-correlated during all phases of a working memory task.

    Science.gov (United States)

    Piccoli, Tommaso; Valente, Giancarlo; Linden, David E J; Re, Marta; Esposito, Fabrizio; Sack, Alexander T; Di Salle, Francesco

    2015-01-01

    The default mode network and the working memory network are known to be anti-correlated during sustained cognitive processing, in a load-dependent manner. We hypothesized that functional connectivity among nodes of the two networks could be dynamically modulated by task phases across time. To address the dynamic links between the default mode network and the working memory network, we used a delayed visuo-spatial working memory paradigm, which allowed us to separate three different phases of working memory (encoding, maintenance, and retrieval), and analyzed the functional connectivity during each phase within and between the default mode network and the working memory network. We found that the two networks are anti-correlated only during the maintenance phase of working memory, i.e. when attention is focused on a memorized stimulus in the absence of external input. Conversely, during the encoding and retrieval phases, when the external stimulation is present, the default mode network is positively coupled with the working memory network, suggesting a dynamic switching of functional connectivity between "task-positive" and "task-negative" brain networks. Our results demonstrate that the well-established dichotomy of the human brain (anti-correlated networks during rest and balanced activation-deactivation during cognition) has a more nuanced organization than previously thought and engages in different patterns of correlation and anti-correlation during specific sub-phases of a cognitive task. This nuanced organization reinforces the hypothesis of a direct involvement of the default mode network in cognitive functions, as represented by a dynamic rather than static interaction with specific task-positive networks, such as the working memory network.

  20. Application of lag-k autocorrelation coefficient and the TGA signals approach to detecting and quantifying adulterations of extra virgin olive oil with inferior edible oils

    Energy Technology Data Exchange (ETDEWEB)

    Torrecilla, Jose S., E-mail: jstorre@quim.ucm.es [Department of Chemical Engineering, Faculty of Chemistry, University Complutense of Madrid, 28040 Madrid (Spain); Garcia, Julian; Garcia, Silvia; Rodriguez, Francisco [Department of Chemical Engineering, Faculty of Chemistry, University Complutense of Madrid, 28040 Madrid (Spain)

    2011-03-04

    The combination of lag-k autocorrelation coefficients (LCCs) and thermogravimetric analyzer (TGA) equipment is defined here as a tool to detect and quantify adulterations of extra virgin olive oil (EVOO) with refined olive (ROO), refined olive pomace (ROPO), sunflower (SO) or corn (CO) oils, when the adulterating agent concentration is less than 14%. The LCC is calculated from TGA scans of adulterated EVOO samples. Then, the standardized skewness of this coefficient has been applied to classify pure and adulterated samples of EVOO. In addition, this chaotic parameter has also been used to quantify the concentration of adulterant agents, by using successful linear correlations of the LCCs with ROO, ROPO, SO or CO concentrations in 462 adulterated EVOO samples. In the case of detection, more than 82% of adulterated samples have been correctly classified. In the case of quantification of adulterant concentration, by an external validation process, the LCC/TGA approach estimates the adulterant agent concentration with a mean correlation coefficient (estimated versus real adulterant agent concentration) greater than 0.90 and a mean square error less than 4.9%.

  1. Application of lag-k autocorrelation coefficient and the TGA signals approach to detecting and quantifying adulterations of extra virgin olive oil with inferior edible oils

    International Nuclear Information System (INIS)

    Torrecilla, Jose S.; Garcia, Julian; Garcia, Silvia; Rodriguez, Francisco

    2011-01-01

    The combination of lag-k autocorrelation coefficients (LCCs) and thermogravimetric analyzer (TGA) equipment is defined here as a tool to detect and quantify adulterations of extra virgin olive oil (EVOO) with refined olive (ROO), refined olive pomace (ROPO), sunflower (SO) or corn (CO) oils, when the adulterating agent concentration is less than 14%. The LCC is calculated from TGA scans of adulterated EVOO samples. Then, the standardized skewness of this coefficient has been applied to classify pure and adulterated samples of EVOO. In addition, this chaotic parameter has also been used to quantify the concentration of adulterant agents, by using successful linear correlations of the LCCs with ROO, ROPO, SO or CO concentrations in 462 adulterated EVOO samples. In the case of detection, more than 82% of adulterated samples have been correctly classified. In the case of quantification of adulterant concentration, by an external validation process, the LCC/TGA approach estimates the adulterant agent concentration with a mean correlation coefficient (estimated versus real adulterant agent concentration) greater than 0.90 and a mean square error less than 4.9%.
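
    The LCC feature used in the two records above can be sketched as follows; the lag value, the mean-centring and the skewness-based screening are simplified assumptions rather than the published procedure.

```python
import numpy as np
from scipy.stats import skew

def lag_k_autocorr(tga_scan, k=1):
    """Lag-k autocorrelation coefficient (LCC) of a mean-centred TGA scan."""
    x = np.asarray(tga_scan, float)
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

def standardized_skewness(lcc_values):
    """Skewness of the LCCs divided by its approximate standard error sqrt(6/n);
    large absolute values flag an asymmetric (suspect) sample distribution."""
    v = np.asarray(lcc_values, float)
    return float(skew(v) / np.sqrt(6.0 / len(v)))
```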

  2. A general framework to quantify the effect of restricted diffusion on the NMR signal with applications to double pulsed field gradient NMR experiments.

    Science.gov (United States)

    Ozarslan, Evren; Shemesh, Noam; Basser, Peter J

    2009-03-14

    Based on a description introduced by Robertson, Grebenkov recently introduced a powerful formalism to represent the diffusion-attenuated NMR signal for simple pore geometries such as slabs, cylinders, and spheres analytically. In this work, we extend this multiple correlation function formalism by allowing for possible variations in the direction of the magnetic field gradient waveform. This extension is necessary, for example, to incorporate the effects of imaging gradients in diffusion-weighted NMR imaging scans and in characterizing anisotropy at different length scales via double pulsed field gradient (PFG) experiments. In cylindrical and spherical pores, respectively, two- and three-dimensional vector operators are employed whose form is deduced from Grebenkov's results via elementary operator algebra for the case of cylinders and the Wigner-Eckart theorem for the case of spheres. The theory was validated by comparison with known findings and with experimental double-PFG data obtained from water-filled microcapillaries.

  3. A general framework to quantify the effect of restricted diffusion on the NMR signal with applications to double pulsed field gradient NMR experiments

    Science.gov (United States)

    Özarslan, Evren; Shemesh, Noam; Basser, Peter J.

    2009-03-01

    Based on a description introduced by Robertson, Grebenkov recently introduced a powerful formalism to represent the diffusion-attenuated NMR signal for simple pore geometries such as slabs, cylinders, and spheres analytically. In this work, we extend this multiple correlation function formalism by allowing for possible variations in the direction of the magnetic field gradient waveform. This extension is necessary, for example, to incorporate the effects of imaging gradients in diffusion-weighted NMR imaging scans and in characterizing anisotropy at different length scales via double pulsed field gradient (PFG) experiments. In cylindrical and spherical pores, respectively, two- and three-dimensional vector operators are employed whose form is deduced from Grebenkov's results via elementary operator algebra for the case of cylinders and the Wigner-Eckart theorem for the case of spheres. The theory was validated by comparison with known findings and with experimental double-PFG data obtained from water-filled microcapillaries.

  4. Use of Fourier-transform infrared spectroscopy to quantify immunoglobulin G concentration and an analysis of the effect of signalment on levels in canine serum.

    Science.gov (United States)

    Seigneur, A; Hou, S; Shaw, R A; McClure, Jt; Gelens, H; Riley, C B

    2015-01-15

    Deficiency in immunoglobulin G (IgG) is associated with an increased susceptibility to infections in humans and animals, and changes in IgG levels occur in many disease states. In companion animals, failure of transfer of passive immunity is uncommonly diagnosed but mortality rates in puppies are high and more than 30% of these deaths are secondary to septicemia. Currently, radial immunodiffusion (RID) and enzyme-linked immunosorbent assays are the most commonly used methods for quantitative measurement of IgG in dogs. In this study, a Fourier-transform infrared spectroscopy (FTIR) assay for canine serum IgG was developed and compared to the RID assay as the reference standard. Basic signalment data and health status of the dogs were also analyzed to determine if they correlated with serum IgG concentrations based on RID results. Serum samples were collected from 207 dogs during routine hematological evaluation, and IgG concentrations determined by RID. The FTIR assay was developed using partial least squares regression analysis and its performance evaluated using RID assay as the reference test. The concordance correlation coefficient was 0.91 for the calibration model data set and 0.85 for the prediction set. A Bland-Altman plot showed a mean difference of -89 mg/dL and no systematic bias. The modified mean coefficient of variation (CV) for RID was 6.67%, and for FTIR was 18.76%. The mean serum IgG concentration using RID was 1943 ± 880 mg/dL based on the 193 dogs with complete signalment and health data. When age class, gender, breed size and disease status were analyzed by multivariable ANOVA, dogs < 2 years of age (p = 0.0004) and those classified as diseased (p = 0.03) were found to have significantly lower IgG concentrations than older and healthy dogs, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
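
    A minimal sketch of the calibration approach described above, assuming a matrix of FTIR spectra and matching reference RID concentrations as inputs; partial least squares regression and the Bland-Altman mean difference are computed with illustrative parameters, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def calibrate_ftir(spectra, igg_rid, n_components=10):
    """spectra: (n_samples, n_wavenumbers) FTIR absorbances;
    igg_rid: matching serum IgG concentrations (mg/dL) measured by RID.
    Fits a PLS calibration model on a calibration split and returns the model
    together with the Bland-Altman mean difference (predicted minus reference)
    on the held-out prediction split."""
    X_cal, X_val, y_cal, y_val = train_test_split(
        np.asarray(spectra, float), np.asarray(igg_rid, float),
        test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=n_components).fit(X_cal, y_cal)
    mean_diff = float(np.mean(pls.predict(X_val).ravel() - y_val))
    return pls, mean_diff
```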

  5. On the Nature of the mHz X-Ray QPOs from ULX M82 X-1: Evidence for Timing-Spectral (anti) Correlation

    Science.gov (United States)

    Pasham, Dheeraj R.; Strohmayer, Tod E.

    2013-01-01

    Using all the archival XMM-Newton X-ray (3-10 keV) observations of the ultraluminous X-ray source (ULX) M82 X-1 we searched for a correlation between its variable mHz quasi-periodic oscillation (QPO) frequency and its energy spectral power-law index. These quantities are known to correlate in stellar mass black holes (StMBHs) exhibiting Type-C QPOs (~0.2-15 Hz). The detection of such a correlation would strengthen the identification of its mHz QPOs as Type-C and enable a more reliable mass estimate by scaling its QPO frequencies to those of Type-C QPOs in StMBHs of known mass. We resolved the count rates of M82 X-1 and a nearby bright ULX (source 5/X42.3+59) through surface brightness modeling and identified observations in which M82 X-1 was at least as bright as source 5. Using only those observations, we detect QPOs in the frequency range of 36-210 mHz during which the energy spectral power-law index varied from 1.7 to 2.2. Interestingly, we find evidence for an anti-correlation (Pearson's correlation coefficient = -0.95) between the power-law index and the QPO centroid frequency. While such an anti-correlation is observed in StMBHs at high Type-C QPO frequencies (~5-15 Hz), the frequency range over which it holds in StMBHs is significantly smaller (factor of ~1.5-3) than the QPO range reported here from M82 X-1 (factor of 6). However, it remains possible that contamination from source 5 can bias our result. Joint Chandra/XMM-Newton observations in the future can resolve this problem and confirm the timing-spectral anti-correlation reported here.

  6. Aging-related changes in the default mode network and its anti-correlated networks: a resting-state fMRI study.

    Science.gov (United States)

    Wu, Jing-Tao; Wu, Hui-Zhen; Yan, Chao-Gan; Chen, Wen-Xin; Zhang, Hong-Ying; He, Yong; Yang, Hai-Shan

    2011-10-17

    Intrinsic brain activity in a resting state incorporates components of the task-negative network called the default mode network (DMN) and task-positive networks called attentional networks. In the present study, the reciprocal neuronal networks in the elderly group were compared with those in the young group to investigate differences in intrinsic brain activity, using a method of temporal correlation analysis based on seed regions of the posterior cingulate cortex (PCC) and ventromedial prefrontal cortex (vmPFC). We found significantly decreased positive correlations and negative correlations with the seeds of the PCC and vmPFC in the old group. The decreased coactivations in the DMN network components and their negative networks in the old group may reflect age-related alterations in various brain functions such as attention, motor control and inhibition modulation in cognitive processing. These alterations in the resting state anti-correlative networks could provide neuronal substrates for the aging brain. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Detailed L3 measurements of Bose-Einstein correlations and a region of anti-correlations in hadronic $Z^0$ decays at LEP

    CERN Document Server

    Csorgo, T.; Kittel, W.; Novak, T.

    2010-01-01

    L3 preliminary data on two-particle Bose-Einstein correlations are reported for hadronic Z^0 decays in e+e- annihilation at LEP. The invariant relative momentum Q is identified as the eigenvariable of the measured correlation function. Significant anti-correlations are observed in the Bose-Einstein correlation function in a broad region of 0.5-1.6 GeV with a minimum at Q close to 0.8 GeV. Absence of Bose-Einstein correlations is demonstrated in the region Q >= 1.6 GeV. The effective source size is found to decrease with increasing value of the transverse mass of the pair, similarly to hadron-hadron and heavy ion reactions. These features and our data are described well by the non-thermal tau-model, which is based on strong space-time momentum correlations.

  8. Simultaneous Chandra and VLA Observations of the Transitional Millisecond Pulsar PSR J1023+0038: Anti-correlated X-Ray and Radio Variability

    Science.gov (United States)

    Bogdanov, Slavko; Deller, Adam T.; Miller-Jones, James C. A.; Archibald, Anne M.; Hessels, Jason W. T.; Jaodand, Amruta; Patruno, Alessandro; Bassa, Cees; D’Angelo, Caroline

    2018-03-01

    We present coordinated Chandra X-ray Observatory and Karl G. Jansky Very Large Array observations of the transitional millisecond pulsar PSR J1023+0038 in its low-luminosity accreting state. The unprecedented five hours of strictly simultaneous X-ray and radio continuum coverage for the first time unambiguously show a highly reproducible, anti-correlated variability pattern. The characteristic switches from the X-ray high mode into a low mode are always accompanied by a radio brightening with a duration that closely matches the X-ray low mode interval. This behavior cannot be explained by a canonical inflow/outflow accretion model where the radiated emission and the jet luminosity are powered by, and positively correlated with, the available accretion energy. We interpret this phenomenology as alternating episodes of low-level accretion onto the neutron star during the X-ray high mode that are interrupted by rapid ejections of plasma by the active rotation-powered pulsar, possibly initiated by a reconfiguration of the pulsar magnetosphere, that cause a transition to a less X-ray luminous mode. The observed anti-correlation between radio and X-ray luminosity has an additional consequence: transitional MSPs can make excursions into a region of the radio/X-ray luminosity plane previously thought to be occupied solely by black hole X-ray binary sources. This complicates the use of this luminosity relation for identifying candidate black holes, suggesting the need for additional discriminants when attempting to establish the true nature of the accretor.

  9. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2010-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  10. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2009-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  11. Neuron-Enriched Gene Expression Patterns are Regionally Anti-Correlated with Oligodendrocyte-Enriched Patterns in the Adult Mouse and Human Brain.

    Science.gov (United States)

    Tan, Powell Patrick Cheng; French, Leon; Pavlidis, Paul

    2013-01-01

    An important goal in neuroscience is to understand gene expression patterns in the brain. The recent availability of comprehensive and detailed expression atlases for mouse and human creates opportunities to discover global patterns and perform cross-species comparisons. Recently we reported that the major source of variation in gene transcript expression in the adult normal mouse brain can be parsimoniously explained as reflecting regional variation in glia to neuron ratios, and is correlated with degree of connectivity and location in the brain along the anterior-posterior axis. Here we extend this investigation to two gene expression assays of adult normal human brains that consisted of over 300 brain region samples, and perform comparative analyses of brain-wide expression patterns to the mouse. We performed principal components analysis (PCA) on the regional gene expression of the adult human brain to identify the expression pattern that has the largest variance. As in the mouse, we observed that the first principal component is composed of two anti-correlated patterns enriched in oligodendrocyte and neuron markers respectively. However, we also observed interesting discordant patterns between the two species. For example, a few mouse neuron markers show expression patterns that are more correlated with the human oligodendrocyte-enriched pattern and vice-versa. In conclusion, our work provides insights into human brain function and evolution by probing global relationships between regional cell type marker expression patterns in the human and mouse brain.

  12. Quantifiers for quantum logic

    OpenAIRE

    Heunen, Chris

    2008-01-01

    We consider categorical logic on the category of Hilbert spaces. More generally, in fact, any pre-Hilbert category suffices. We characterise closed subobjects, and prove that they form orthomodular lattices. This shows that quantum logic is just an incarnation of categorical logic, enabling us to establish an existential quantifier for quantum logic, and conclude that there cannot be a universal quantifier.

  13. Connected Car: Quantified Self becomes Quantified Car

    Directory of Open Access Journals (Sweden)

    Melanie Swan

    2015-02-01

    Full Text Available The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals, like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics, like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.

  14. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  15. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by

  16. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of ‘cold’ and ‘warm’ materials are reversed. In this paper, this effect is quantified by

  17. Quantifying requirements volatility effects

    NARCIS (Netherlands)

    Kulk, G.P.; Verhoef, C.

    2008-01-01

    In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects comprising together 16,500 function points, each project varying in size and duration, for which we were able to quantify its requirements volatility. This representative portfolio

  18. The quantified relationship

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.

    2018-01-01

    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this

  19. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  20. Quantifying light pollution

    International Nuclear Information System (INIS)

    Cinzano, P.; Falchi, F.

    2014-01-01

    In this paper we review new available indicators useful to quantify and monitor light pollution, defined as the alteration of the natural quantity of light in the night environment due to introduction of manmade light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of the night sky brightness, which is an observational effect integrated along the line of sight and thus lacking the three-dimensional information. - Highlights: • We review new available indicators useful to quantify and monitor light pollution. • These indicators are a primary step in light pollution quantification. • These indicators allow to improve light pollution mapping from a 2D to a 3D grid. • These indicators allow carrying out a tomography of light pollution. • We show an application of this technique to an Italian region

  1. Quantifying linguistic coordination

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Tylén, Kristian

    task (Bahrami et al 2010, Fusaroli et al. 2012) we extend to linguistic coordination dynamical measures of recurrence employed in the analysis of sensorimotor coordination (such as heart-rate (Konvalinka et al 2011), postural sway (Shockley 2005) and eye-movements (Dale, Richardson and Kirkham 2012)). We employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities

  2. Altered Global Signal Topography in Schizophrenia.

    Science.gov (United States)

    Yang, Genevieve J; Murray, John D; Glasser, Matthew; Pearlson, Godfrey D; Krystal, John H; Schleifer, Charlie; Repovs, Grega; Anticevic, Alan

    2017-11-01

    Schizophrenia (SCZ) is a disabling neuropsychiatric disease associated with disruptions across distributed neural systems. Resting-state functional magnetic resonance imaging has identified extensive abnormalities in the blood-oxygen level-dependent signal in SCZ patients, including alterations in the average signal over the brain, i.e. the "global" signal (GS). It remains unknown, however, if these "global" alterations occur pervasively or follow a spatially preferential pattern. This study presents the first network-by-network quantification of GS topography in healthy subjects and SCZ patients. We observed a nonuniform GS contribution in healthy comparison subjects, whereby sensory areas exhibited the largest GS component. In SCZ patients, we identified preferential GS representation increases across association regions, while sensory regions showed preferential reductions. GS representation in sensory versus association cortices was strongly anti-correlated in healthy subjects. This anti-correlated relationship was markedly reduced in SCZ. Such shifts in GS topography may underlie profound alterations in neural information flow in SCZ, informing development of pharmacotherapies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
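
    One simple way to quantify a network-by-network GS topography along the lines of the record above is to compute, for each parcel, the regression beta of its time series on the global signal and average these betas within networks; the sketch below is an illustrative simplification, not the study's pipeline, and the parcellation and network labels are assumed inputs.

```python
import numpy as np

def gs_topography(parcel_ts, network_labels):
    """parcel_ts: (n_timepoints, n_parcels) mean BOLD time series per parcel;
    network_labels: length-n_parcels sequence naming each parcel's network.
    Returns the mean regression beta of each network's parcels onto the
    (z-scored) global signal, a simple index of GS representation per network."""
    parcel_ts = np.asarray(parcel_ts, float)
    labels = np.asarray(network_labels)
    gs = parcel_ts.mean(axis=1)
    gs = (gs - gs.mean()) / gs.std()
    betas = (parcel_ts - parcel_ts.mean(axis=0)).T @ gs / len(gs)  # per-parcel GS beta
    return {net: float(betas[labels == net].mean()) for net in np.unique(labels)}
```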

  3. Characterizing scaling properties of complex signals with missed data segments using the multifractal analysis

    Science.gov (United States)

    Pavlov, A. N.; Pavlova, O. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Semyachkina-Glushkovskaya, O. V.; Kurths, J.

    2018-01-01

    The scaling properties of complex processes may be highly influenced by the presence of various artifacts in experimental recordings. Their removal produces changes in the singularity spectra and the Hölder exponents as compared with the original artifacts-free data, and these changes are significantly different for positively correlated and anti-correlated signals. While signals with power-law correlations are nearly insensitive to the loss of significant parts of data, the removal of fragments of anti-correlated signals is more crucial for further data analysis. In this work, we study the ability of characterizing scaling features of chaotic and stochastic processes with distinct correlation properties using a wavelet-based multifractal analysis, and discuss differences between the effect of missed data for synchronous and asynchronous oscillatory regimes. We show that even an extreme data loss allows characterizing physiological processes such as the cerebral blood flow dynamics.
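
    The effect of missed data segments described above can be explored with a toy experiment: excise random segments, stitch the remainder together, and re-estimate the scaling exponent. The sketch below uses ordinary first-order detrended fluctuation analysis as a simpler stand-in for the wavelet-based multifractal analysis used in the record, with illustrative gap sizes and scale ranges.

```python
import numpy as np

def stitch_after_loss(x, n_gaps=20, gap_len=50, seed=0):
    """Remove n_gaps random segments of length gap_len and concatenate the rest,
    mimicking the rejection of artifact-contaminated fragments."""
    rng = np.random.default_rng(seed)
    keep = np.ones(len(x), bool)
    for start in rng.integers(0, len(x) - gap_len, n_gaps):
        keep[start:start + gap_len] = False
    return np.asarray(x, float)[keep]

def dfa_exponent(x, n_scales=15):
    """First-order detrended fluctuation analysis; returns the scaling exponent."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                              # integrated profile
    scales = np.unique(np.logspace(1, np.log10(len(x) // 4), n_scales).astype(int))
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segs]
        fluct.append(np.mean(rms))
    return float(np.polyfit(np.log(scales), np.log(fluct), 1)[0])
```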

  4. Quantifying global exergy resources

    International Nuclear Information System (INIS)

    Hermann, Weston A.

    2006-01-01

    Exergy is used as a common currency to assess and compare the reservoirs of theoretically extractable work we call energy resources. Resources consist of matter or energy with properties different from the predominant conditions in the environment. These differences can be classified as physical, chemical, or nuclear exergy. This paper identifies the primary exergy reservoirs that supply exergy to the biosphere and quantifies the intensive and extensive exergy of their derivative secondary reservoirs, or resources. The interconnecting accumulations and flows among these reservoirs are illustrated to show the path of exergy through the terrestrial system from input to its eventual natural or anthropogenic destruction. The results are intended to assist in evaluation of current resource utilization, help guide fundamental research to enable promising new energy technologies, and provide a basis for comparing the resource potential of future energy options that is independent of technology and cost

  5. Quantifying the Adaptive Cycle.

    Directory of Open Access Journals (Sweden)

    David G Angeler

    Full Text Available The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  6. Quantifying Anthropogenic Dust Emissions

    Science.gov (United States)

    Webb, Nicholas P.; Pierre, Caroline

    2018-02-01

    Anthropogenic land use and land cover change, including local environmental disturbances, moderate rates of wind-driven soil erosion and dust emission. These human-dust cycle interactions impact ecosystems and agricultural production, air quality, human health, biogeochemical cycles, and climate. While the impacts of land use activities and land management on aeolian processes can be profound, the interactions are often complex and assessments of anthropogenic dust loads at all scales remain highly uncertain. Here, we critically review the drivers of anthropogenic dust emission and current evaluation approaches. We then identify and describe opportunities to: (1) develop new conceptual frameworks and interdisciplinary approaches that draw on ecological state-and-transition models to improve the accuracy and relevance of assessments of anthropogenic dust emissions; (2) improve model fidelity and capacity for change detection to quantify anthropogenic impacts on aeolian processes; and (3) enhance field research and monitoring networks to support dust model applications to evaluate the impacts of disturbance processes on local to global-scale wind erosion and dust emissions.

  7. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

    Full Text Available Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
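
    The tree metrics mentioned above (asymmetry, Strahler bifurcation ratios) are standard quantities that are easy to compute once a loopy network has been mapped to a binary tree. The sketch below is a hedged illustration of those metrics rather than the authors' hierarchical loop decomposition: it computes the Horton-Strahler order and leaf count for two toy binary trees, a maximally asymmetric "ladder" and a perfectly balanced tree. The Node class and helper constructors are ad hoc.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def strahler(node: Optional[Node]) -> int:
    """Horton-Strahler order: a leaf has order 1; an internal node takes the larger
    child order, incremented by one when both children have the same order."""
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1
    l, r = strahler(node.left), strahler(node.right)
    return max(l, r) + (1 if l == r else 0)

def leaves(node: Optional[Node]) -> int:
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1
    return leaves(node.left) + leaves(node.right)

def ladder(depth: int) -> Node:
    """Maximally asymmetric tree: each level adds one leaf and one deeper branch."""
    root = Node()
    cur = root
    for _ in range(depth):
        cur.left, cur.right = Node(), Node()
        cur = cur.right
    return root

def balanced(depth: int) -> Node:
    """Perfectly balanced binary tree of the given depth."""
    if depth == 0:
        return Node()
    return Node(balanced(depth - 1), balanced(depth - 1))

for name, tree in [("ladder", ladder(6)), ("balanced", balanced(6))]:
    print(f"{name:9s} leaves = {leaves(tree):3d}  Strahler order = {strahler(tree)}")
```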

  8. Cooperative ethylene receptor signaling

    OpenAIRE

    Liu, Qian; Wen, Chi-Kuang

    2012-01-01

    The gaseous plant hormone ethylene is perceived by a family of five ethylene receptor members in the dicotyledonous model plant Arabidopsis. Genetic and biochemical studies suggest that the ethylene response is suppressed by ethylene receptor complexes, but the biochemical nature of the receptor signal is unknown. Without appropriate biochemical measures to trace the ethylene receptor signal and quantify the signal strength, the biological significance of the modulation of ethylene responses ...

  9. The Fallacy of Quantifying Risk

    Science.gov (United States)

    2012-09-01

    Defense AT&L: September–October 2012, 18. The Fallacy of Quantifying Risk. David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of... a key to risk analysis was "choosing the right technique" of quantifying risk. The weakness in this argument stems not from the assertion that one... of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence

  10. Multidominance, ellipsis, and quantifier scope

    NARCIS (Netherlands)

    Temmerman, Tanja Maria Hugo

    2012-01-01

    This dissertation provides a novel perspective on the interaction between quantifier scope and ellipsis. It presents a detailed investigation of the scopal interaction between English negative indefinites, modals, and quantified phrases in ellipsis. One of the crucial observations is that a negative

  11. Quantifying the strength of quorum sensing crosstalk within microbial communities.

    Directory of Open Access Journals (Sweden)

    Kalinga Pavan T Silva

    2017-10-01

    Full Text Available In multispecies microbial communities, the exchange of signals such as acyl-homoserine lactones (AHLs) enables communication within and between species of Gram-negative bacteria. This process, commonly known as quorum sensing, aids in the regulation of genes crucial for the survival of species within heterogeneous populations of microbes. Although signal exchange was studied extensively in well-mixed environments, less is known about the consequences of crosstalk in spatially distributed mixtures of species. Here, signaling dynamics were measured in a spatially distributed system containing multiple strains utilizing homologous signaling systems. Crosstalk between strains containing the lux, las and rhl AHL-receptor circuits was quantified. In a distributed population of microbes, the impact of community composition on spatio-temporal dynamics was characterized and compared to simulation results using a modified reaction-diffusion model. After introducing a single term to account for crosstalk between each pair of signals, the model was able to reproduce the activation patterns observed in experiments. We quantified the robustness of signal propagation in the presence of interacting signals, finding that signaling dynamics are largely robust to interference. The ability of several wild isolates to participate in AHL-mediated signaling was investigated, revealing distinct signatures of crosstalk for each species. Our results present a route to characterize crosstalk between species and predict systems-level signaling dynamics in multispecies communities.
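
    The modified reaction-diffusion model referred to above is not spelled out in the record. The following sketch is only a generic one-dimensional stand-in, in which two diffusing AHL-like signals are produced by separate sender colonies and a receiver responds to its cognate signal plus a crosstalk-weighted contribution of the foreign signal, the "single term" idea in its simplest form. Every parameter value below is an illustrative assumption.

```python
import numpy as np

# 1-D grid and (made-up) model parameters
L, N = 10.0, 200                  # domain length and number of grid points
dx, dt, steps = L / N, 0.01, 5000
D = 0.1                           # diffusion coefficient for both signals
k_prod, k_deg = 1.0, 0.05         # production and degradation rates
crosstalk = 0.3                   # strength with which signal 2 activates receiver 1
threshold = 0.5                   # activation threshold of the receiver

x = np.linspace(0, L, N)
S1 = np.zeros(N)                  # concentration of AHL signal 1
S2 = np.zeros(N)                  # concentration of AHL signal 2
src1 = np.exp(-((x - 3.0) ** 2) / 0.1)    # sender colony producing signal 1
src2 = np.exp(-((x - 7.0) ** 2) / 0.1)    # sender colony producing signal 2

def laplacian(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    return lap

# explicit time stepping of diffusion + production - degradation
for _ in range(steps):
    S1 += dt * (D * laplacian(S1) + k_prod * src1 - k_deg * S1)
    S2 += dt * (D * laplacian(S2) + k_prod * src2 - k_deg * S2)

# the receiver for signal 1 responds to its own signal plus a crosstalk term
effective = S1 + crosstalk * S2
activated = effective > threshold
print(f"fraction of the domain activating the signal-1 receiver: {activated.mean():.2f}")
```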

  12. Stimfit: quantifying electrophysiological data with Python

    Directory of Open Access Journals (Sweden)

    Segundo Jose Guzman

    2014-02-01

    Full Text Available Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals.

  13. Quantifiers in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.; Paperno, D.; Keenan, E.L.

    2017-01-01

    After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,

  14. Quantified Self in de huisartsenpraktijk

    NARCIS (Netherlands)

    de Groot, Martijn; Timmers, Bart; Kooiman, Thea; van Ittersum, Miriam

    2015-01-01

    Quantified Self stands for the self-measuring person. The number of people entering the care process with self-generated health data will grow in the coming years. Various kinds of activity trackers and health applications for the smartphone make it relatively easy to track personal

  15. A compact clinical instrument for quantifying suppression.

    Science.gov (United States)

    Black, Joanne M; Thompson, Benjamin; Maehara, Goro; Hess, Robert F

    2011-02-01

    We describe a compact and convenient clinical apparatus for the measurement of suppression based on a previously reported laboratory-based approach. In addition, we report and validate a novel, rapid psychophysical method for measuring suppression using this apparatus, which makes the technique more applicable to clinical practice. By using a Z800 dual pro head-mounted display driven by a Mac laptop, we provide dichoptic stimulation. Global motion stimuli composed of arrays of moving dots are presented to each eye. One set of dots moves in a coherent direction (termed signal) whereas another set of dots moves in a random direction (termed noise). To quantify performance, we measure the signal/noise ratio corresponding to a direction-discrimination threshold. Suppression is quantified by assessing the extent to which it matters which eye sees the signal and which eye sees the noise. A space-saving, head-mounted display using current video technology offers an ideal solution for clinical practice. In addition, our optimized psychophysical method provided results that were in agreement with those produced using the original technique. We made measures of suppression on a group of nine adult amblyopic participants using this apparatus with both the original and new psychophysical paradigms. All participants had measurable suppression ranging from mild to severe. The two different psychophysical methods gave a strong correlation for the strength of suppression (rho = -0.83, p = 0.006). Combining the new apparatus and new psychophysical method creates a convenient and rapid technique for parametric measurement of interocular suppression. In addition, this apparatus constitutes the ideal platform for suppressors to combine information between their eyes in a similar way to binocularly normal people. This provides a convenient way for clinicians to implement the newly proposed binocular treatment of amblyopia that is based on antisuppression training.
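
    The "novel, rapid psychophysical method" above is not described in enough detail in the abstract to reproduce. As a hedged illustration of how a signal/noise (motion-coherence) threshold can be measured quickly, the sketch below runs a generic 2-down/1-up adaptive staircase against a simulated observer with a made-up psychometric function; none of the parameters are the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_observer(coherence, true_threshold=0.25, slope=10.0):
    """Probability of a correct direction judgement rises with motion coherence
    (logistic psychometric function with made-up parameters)."""
    p_correct = 0.5 + 0.5 / (1.0 + np.exp(-slope * (coherence - true_threshold)))
    return rng.random() < p_correct

def staircase(n_trials=100, start=0.8, step=0.05):
    """Generic 2-down / 1-up staircase; converges near the 70.7%-correct level."""
    coherence, streak = start, 0
    changes = []                                  # (direction, level before the change)
    for _ in range(n_trials):
        if simulated_observer(coherence):
            streak += 1
            if streak == 2:                       # two correct in a row -> make it harder
                streak = 0
                changes.append((-1, coherence))
                coherence = max(0.01, coherence - step)
        else:                                     # any error -> make it easier
            streak = 0
            changes.append((+1, coherence))
            coherence = min(1.0, coherence + step)
    # threshold estimate: mean coherence at the last few reversals of direction
    reversals = [level for (d, level), (d_prev, _) in zip(changes[1:], changes[:-1])
                 if d != d_prev]
    return np.mean(reversals[-6:]) if reversals else coherence

print(f"estimated coherence (signal/noise) threshold: {staircase():.2f}")
```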

  16. Quantifying the uncertainty in heritability.

    Science.gov (United States)

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  17. Quantifying brain microstructure with diffusion MRI

    DEFF Research Database (Denmark)

    Novikov, Dmitry S.; Jespersen, Sune N.; Kiselev, Valerij G.

    2016-01-01

    the potential to quantify the relevant length scales for neuronal tissue, such as the packing correlation length for neuronal fibers, the degree of neuronal beading, and compartment sizes. The second avenue corresponds to the long-time limit, when the observed signal can be approximated as a sum of multiple non......-exchanging anisotropic Gaussian components. Here the challenge lies in parameter estimation and in resolving its hidden degeneracies. The third avenue employs multiple diffusion encoding techniques, able to access information not contained in the conventional diffusion propagator. We conclude with our outlook...... on the future research directions which can open exciting possibilities for developing markers of pathology and development based on methods of studying mesoscopic transport in disordered systems....

  18. A Generalizable Methodology for Quantifying User Satisfaction

    Science.gov (United States)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times, rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
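
    Survival analysis of session times, as used above, can be illustrated with a minimal Kaplan-Meier estimator. The implementation and the toy session data below are assumptions for illustration (no particular survival-analysis library is implied), with censoring standing in for sessions that were still running when measurement stopped.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of the survival function S(t) of session times.
    `observed` is 1 if the session ended normally, 0 if it was censored
    (e.g. measurement stopped while the user was still connected)."""
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    order = np.argsort(durations)
    durations, observed = durations[order], observed[order]
    at_risk = len(durations)
    times, survival, s = [], [], 1.0
    for t in np.unique(durations):
        here = durations == t
        d = observed[here].sum()          # sessions ending exactly at time t
        if d > 0:
            s *= 1.0 - d / at_risk
        at_risk -= here.sum()             # events and censored both leave the risk set
        times.append(t)
        survival.append(s)
    return np.array(times), np.array(survival)

# made-up session lengths in minutes; 0 marks sessions cut off by the observation window
sessions = [3, 5, 5, 8, 12, 15, 15, 20, 30, 45]
ended    = [1, 1, 0, 1,  1,  0,  1,  1,  0,  1]
for t, s in zip(*kaplan_meier(sessions, ended)):
    print(f"t = {t:4.0f} min   S(t) = {s:.2f}")
```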

  19. Quantifying and simulating human sensation

    DEFF Research Database (Denmark)

    Quantifying and simulating human sensation – relating science and technology of indoor climate research. In his doctoral thesis from 1970, civil engineer Povl Ole Fanger proposed that the understanding of indoor climate should focus on the comfort of the individual rather than averaged...... this understanding of human sensation was adjusted to technology. I will look into the construction of the equipment, what it measures and the relationship between theory, equipment and tradition.

  20. Quantifying emissions from spontaneous combustion

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-01

    Spontaneous combustion can be a significant problem in the coal industry, not only due to the obvious safety hazard and the potential loss of valuable assets, but also with respect to the release of gaseous pollutants, especially CO2, from uncontrolled coal fires. This report reviews methodologies for measuring emissions from spontaneous combustion and discusses methods for quantifying, estimating and accounting for the purpose of preparing emission inventories.

  1. Signal Words

    Science.gov (United States)

    SIGNAL WORDS TOPIC FACT SHEET NPIC fact sheets are designed to answer questions that are commonly asked by the ... making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, ...

  2. Quantifying Quantum-Mechanical Processes.

    Science.gov (United States)

    Hsieh, Jen-Hsiang; Chen, Shih-Hsuan; Li, Che-Ming

    2017-10-19

    The act of describing how a physical process changes a system is the basis for understanding observed phenomena. For quantum-mechanical processes in particular, the effect of processes on quantum states profoundly advances our knowledge of the natural world, from understanding counter-intuitive concepts to the development of wholly quantum-mechanical technology. Here, we show that quantum-mechanical processes can be quantified using a generic classical-process model through which any classical strategies of mimicry can be ruled out. We demonstrate the success of this formalism using fundamental processes postulated in quantum mechanics, the dynamics of open quantum systems, quantum-information processing, the fusion of entangled photon pairs, and the energy transfer in a photosynthetic pigment-protein complex. Since our framework does not depend on any specifics of the states being processed, it reveals a new class of correlations in the hierarchy between entanglement and Einstein-Podolsky-Rosen steering and paves the way for the elaboration of a generic method for quantifying physical processes.

  3. Clinical relevance of quantified fundus autofluorescence in diabetic macular oedema.

    Science.gov (United States)

    Yoshitake, S; Murakami, T; Uji, A; Unoki, N; Dodo, Y; Horii, T; Yoshimura, N

    2015-05-01

    To quantify the signal intensity of fundus autofluorescence (FAF) and evaluate its association with visual function and optical coherence tomography (OCT) findings in diabetic macular oedema (DMO). We reviewed 103 eyes of 78 patients with DMO and 30 eyes of 22 patients without DMO. FAF images were acquired using Heidelberg Retina Angiograph 2, and the signal levels of FAF in the individual subfields of the Early Treatment Diabetic Retinopathy Study grid were measured. We evaluated the association between quantified FAF and the logMAR VA and OCT findings. One hundred and three eyes with DMO had lower FAF signal intensity levels in the parafoveal subfields compared with 30 eyes without DMO. The autofluorescence intensity in the parafoveal subfields was associated negatively with logMAR VA and the retinal thickness in the corresponding subfields. The autofluorescence levels in the parafoveal subfield, except the nasal subfield, were lower in eyes with autofluorescent cystoid spaces in the corresponding subfield than in those without autofluorescent cystoid spaces. The autofluorescence level in the central subfield was related to foveal cystoid spaces but not logMAR VA or retinal thickness in the corresponding area. Quantified FAF in the parafovea has diagnostic significance and is clinically relevant in DMO.

  4. Quantifying Evaporation in a Permeable Pavement System

    Science.gov (United States)

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  5. Quantifier Scope in Categorical Compositional Distributional Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2016-08-01

    Full Text Available In previous work with J. Hedges, we formalised a generalised quantifiers theory of natural language in categorical compositional distributional semantics with the help of bialgebras. In this paper, we show how quantifier scope ambiguity can be represented in that setting and how this representation can be generalised to branching quantifiers.

  6. Quantifying the vitamin D economy.

    Science.gov (United States)

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy. © The Author(s) 2014. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Quantify the complexity of turbulence

    Science.gov (United States)

    Tao, Xingtian; Wu, Huixuan

    2017-11-01

    Many researchers have used Reynolds stress, power spectrum and Shannon entropy to characterize a turbulent flow, but few of them have measured the complexity of turbulence. Yet as this study shows, conventional turbulence statistics and Shannon entropy have limits when quantifying the flow complexity. Thus, it is necessary to introduce new complexity measures, such as topology complexity and excess information, to describe turbulence. Our test flow is a classic turbulent cylinder wake at Reynolds number 8100. Along the stream-wise direction, the flow becomes more isotropic and the magnitudes of normal Reynolds stresses decrease monotonically. These seem to indicate that the flow dynamics becomes simpler downstream. However, the Shannon entropy keeps increasing along the flow direction and the dynamics seems to be more complex, because the large-scale vortices cascade to small eddies, the flow is less correlated and more unpredictable. In fact, these two contradictory observations partially describe the complexity of a turbulent wake. Our measurements (up to 40 diameters downstream of the cylinder) show that the flow's degree of complexity actually increases at first and then becomes constant (or drops slightly) along the stream-wise direction. University of Kansas General Research Fund.
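
    As a small illustration of one of the measures discussed above, the sketch below estimates the histogram-based Shannon entropy of two synthetic "velocity" records, a nearly periodic vortex-shedding-like signal and a noisier one. The signals and bin count are assumptions, and this is not the excess-information or topology-based measure proposed in the record.

```python
import numpy as np

def shannon_entropy(signal, n_bins=64):
    """Histogram-based Shannon entropy (in bits) of a one-dimensional signal."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 20000)

# synthetic stand-ins: a nearly periodic vortex-shedding-like signal ("near wake")
# and the same oscillation with broadband fluctuations added ("far wake")
near_wake = np.sin(2 * np.pi * 0.2 * t)
far_wake = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.standard_normal(t.size)

print(f"Shannon entropy, near wake: {shannon_entropy(near_wake):.2f} bits")
print(f"Shannon entropy, far wake : {shannon_entropy(far_wake):.2f} bits")
```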

  8. Quantifying Cancer Risk from Radiation.

    Science.gov (United States)

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  9. Quantifying China's regional economic complexity

    Science.gov (United States)

    Gao, Jian; Zhou, Tao

    2018-02-01

    China has experienced an outstanding economic expansion during the past decades; however, literature on non-monetary metrics that reveal the status of China's regional economic development is still lacking. In this paper, we fill this gap by quantifying the economic complexity of China's provinces through analyzing 25 years' firm data. First, we estimate the regional economic complexity index (ECI), and show that the overall time evolution of provinces' ECI is relatively stable and slow. Then, after linking ECI to the economic development and the income inequality, we find that the explanatory power of ECI is positive for the former but negative for the latter. Next, we compare different measures of economic diversity and explore their relationships with monetary macroeconomic indicators. Results show that the ECI and the non-linear iteration based Fitness index are comparable, and they both have stronger explanatory power than other benchmark measures. Further multivariate regressions suggest the robustness of our results after controlling for other socioeconomic factors. Our work moves a step forward towards better understanding China's regional economic development and non-monetary macroeconomic indicators.
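
    The regional economic complexity index mentioned above is commonly computed from the eigenvector structure of a region-by-product matrix. The sketch below implements that standard construction on a toy binary matrix; it is a hedged illustration of the generic ECI recipe, not the paper's exact data pipeline.

```python
import numpy as np

def economic_complexity_index(M):
    """Eigenvector-based ECI for a binary region-by-product matrix M,
    where M[r, p] = 1 if region r produces product p with comparative advantage."""
    M = np.asarray(M, dtype=float)
    diversity = M.sum(axis=1)              # k_{r,0}
    ubiquity = M.sum(axis=0)               # k_{p,0}
    # region-to-region matrix of the method of reflections (row-stochastic)
    M_tilde = (M / diversity[:, None]) @ (M / ubiquity[None, :]).T
    eigval, eigvec = np.linalg.eig(M_tilde)
    order = np.argsort(-eigval.real)
    v = eigvec[:, order[1]].real           # eigenvector of the second-largest eigenvalue
    eci = (v - v.mean()) / v.std()         # standardise
    if np.corrcoef(eci, diversity)[0, 1] < 0:
        eci = -eci                         # fix the sign: diversified regions score higher
    return eci

# toy matrix: 4 regions x 5 products (illustrative, not real data)
M = np.array([[1, 1, 1, 1, 0],
              [1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1]])
print("ECI per region:", np.round(economic_complexity_index(M), 2))
```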

  10. Quantifying and Reducing Light Pollution

    Science.gov (United States)

    Gokhale, Vayujeet; Caples, David; Goins, Jordan; Herdman, Ashley; Pankey, Steven; Wren, Emily

    2018-06-01

    We describe the current level of light pollution in and around Kirksville, Missouri and around Anderson Mesa near Flagstaff, Arizona. We quantify the amount of light that is projected up towards the sky, instead of the ground, using Unihedron sky quality meters installed at various locations. We also present results from DSLR photometry of several standard stars, and compare the photometric quality of the data collected at locations with varying levels of light pollution. Presently, light fixture shields and ‘warm-colored’ lights are being installed on Truman State University’s campus in order to reduce light pollution. We discuss the experimental procedure we use to test the effectiveness of the different light fixture shields in a controlled setting inside the Del and Norma Robison Planetarium. Apart from negatively affecting the quality of the night sky for astronomers, light pollution adversely affects migratory patterns of some animals and sleep patterns in humans, increases our carbon footprint, and wastes resources and money. This problem threatens to become particularly acute with the increasing use of outdoor LED lamps. We conclude with a call to action to all professional and amateur astronomers to act against the growing nuisance of light pollution.

  11. Quantifying meniscal kinematics in dogs.

    Science.gov (United States)

    Park, Brian H; Banks, Scott A; Pozzi, Antonio

    2017-11-06

    The dog has been used extensively as an experimental model to study meniscal treatments such as meniscectomy, meniscal repair, transplantation, and regeneration. However, there is very little information on meniscal kinematics in the dog. This study used MR imaging to quantify in vitro meniscal kinematics in loaded dog knees in four distinct poses: extension, flexion, internal, and external rotation. A new method was used to track the meniscal poses along the convex and posteriorly tilted tibial plateau. Meniscal displacements were large, displacing 13.5 and 13.7 mm posteriorly on average for the lateral and medial menisci during flexion (p = 0.90). The medial anterior horn and lateral posterior horns were the most mobile structures, showing average translations of 15.9 and 15.1 mm, respectively. Canine menisci are highly mobile and exhibit movements that correlate closely with the relative tibiofemoral positions. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

  12. Quantifying the invasiveness of species

    Directory of Open Access Journals (Sweden)

    Robert Colautti

    2014-04-01

    Full Text Available The success of invasive species has been explained by two contrasting but non-exclusive views: (i) intrinsic factors make some species inherently good invaders; (ii) species become invasive as a result of extrinsic ecological and genetic influences such as release from natural enemies, hybridization or other novel ecological and evolutionary interactions. These viewpoints are rarely distinguished but hinge on distinct mechanisms leading to different management scenarios. To improve tests of these hypotheses of invasion success we introduce a simple mathematical framework to quantify the invasiveness of species along two axes: (i) interspecific differences in performance among native and introduced species within a region, and (ii) intraspecific differences between populations of a species in its native and introduced ranges. Applying these equations to a sample dataset of occurrences of 1,416 plant species across Europe, Argentina, and South Africa, we found that many species are common in their native range but become rare following introduction; only a few introduced species become more common. Biogeographical factors limiting spread (e.g. biotic resistance, time of invasion) therefore appear more common than those promoting invasion (e.g. enemy release). Invasiveness, as measured by occurrence data, is better explained by inter-specific variation in invasion potential than biogeographical changes in performance. We discuss how applying these comparisons to more detailed performance data would improve hypothesis testing in invasion biology and potentially lead to more efficient management strategies.

  13. Integrated cosmological probes: concordance quantified

    Energy Technology Data Exchange (ETDEWEB)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch [Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, CH-8093 Zürich (Switzerland)

    2017-10-01

    Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
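
    The Surprise statistic cited above builds on the relative entropy between parameter posteriors. As a hedged illustration of that ingredient, the sketch below evaluates the Kullback-Leibler divergence between two Gaussian approximations to two-parameter constraints; the means and covariances are made-up numbers, not the constraints analysed in the record.

```python
import numpy as np

def gaussian_relative_entropy(mu1, cov1, mu2, cov2):
    """Kullback-Leibler divergence D(P1 || P2), in nats, between two
    multivariate Gaussian distributions P1 = N(mu1, cov1) and P2 = N(mu2, cov2)."""
    mu1, mu2 = np.atleast_1d(mu1), np.atleast_1d(mu2)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    k = mu1.size
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ cov1) + diff @ inv2 @ diff - k
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))

# illustrative 2-parameter constraints (think Omega_m and sigma_8) from two probes
mu_a, cov_a = np.array([0.31, 0.81]), np.diag([0.012, 0.020]) ** 2
mu_b, cov_b = np.array([0.30, 0.83]), np.diag([0.020, 0.025]) ** 2
print(f"D(A||B) = {gaussian_relative_entropy(mu_a, cov_a, mu_b, cov_b):.3f} nats")
```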

  14. ATP signals

    DEFF Research Database (Denmark)

    Novak, Ivana

    2016-01-01

    The Department of Biology at the University of Copenhagen explains the function of ATP signalling in the pancreas...

  15. Neural basis for generalized quantifier comprehension.

    Science.gov (United States)

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  16. Signaling aggression.

    Science.gov (United States)

    van Staaden, Moira J; Searcy, William A; Hanlon, Roger T

    2011-01-01

    From psychological and sociological standpoints, aggression is regarded as intentional behavior aimed at inflicting pain and manifested by hostility and attacking behaviors. In contrast, biologists define aggression as behavior associated with attack or escalation toward attack, omitting any stipulation about intentions and goals. Certain animal signals are strongly associated with escalation toward attack and have the same function as physical attack in intimidating opponents and winning contests, and ethologists therefore consider them an integral part of aggressive behavior. Aggressive signals have been molded by evolution to make them ever more effective in mediating interactions between the contestants. Early theoretical analyses of aggressive signaling suggested that signals could never be honest about fighting ability or aggressive intentions because weak individuals would exaggerate such signals whenever they were effective in influencing the behavior of opponents. More recent game theory models, however, demonstrate that given the right costs and constraints, aggressive signals are both reliable about strength and intentions and effective in influencing contest outcomes. Here, we review the role of signaling in lieu of physical violence, considering threat displays from an ethological perspective as an adaptive outcome of evolutionary selection pressures. Fighting prowess is conveyed by performance signals whose production is constrained by physical ability and thus limited to just some individuals, whereas aggressive intent is encoded in strategic signals that all signalers are able to produce. We illustrate recent advances in the study of aggressive signaling with case studies of charismatic taxa that employ a range of sensory modalities, viz. visual and chemical signaling in cephalopod behavior, and indicators of aggressive intent in the territorial calls of songbirds. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Removal of artifacts in knee joint vibroarthrographic signals using ensemble empirical mode decomposition and detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Wu, Yunfeng; Yang, Shanshan; Zheng, Fang; Cai, Suxian; Lu, Meng; Wu, Meihong

    2014-01-01

    High-resolution knee joint vibroarthrographic (VAG) signals can help physicians accurately evaluate the pathological condition of a degenerative knee joint, in order to prevent unnecessary exploratory surgery. Artifact cancellation is vital to preserve the quality of VAG signals prior to further computer-aided analysis. This paper describes a novel method that effectively utilizes ensemble empirical mode decomposition (EEMD) and detrended fluctuation analysis (DFA) algorithms for the removal of baseline wander and white noise in VAG signal processing. The EEMD method first successively decomposes the raw VAG signal into a set of intrinsic mode functions (IMFs) with fast and low oscillations, until the monotonic baseline wander remains in the last residue. Then, the DFA algorithm is applied to compute the fractal scaling index parameter for each IMF, in order to identify the anti-correlation and the long-range correlation components. Next, the DFA algorithm can be used to identify the anti-correlated and the long-range correlated IMFs, which assists in reconstructing the artifact-reduced VAG signals. Our experimental results showed that the combination of EEMD and DFA algorithms was able to provide averaged signal-to-noise ratio (SNR) values of 20.52 dB (standard deviation: 1.14 dB) and 20.87 dB (standard deviation: 1.89 dB) for 45 normal signals in healthy subjects and 20 pathological signals in symptomatic patients, respectively. The combination of EEMD and DFA algorithms can ameliorate the quality of VAG signals with great SNR improvements over the raw signal, and the results were also superior to those achieved by wavelet matching pursuit decomposition and time-delay neural filter. (paper)
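
    The EEMD-plus-DFA reconstruction logic described above can be sketched as follows: decompose the signal into intrinsic mode functions (IMFs), compute a DFA scaling exponent for each IMF, and keep only components whose exponents indicate neither anti-correlated, noise-like behaviour nor a near-monotonic baseline. The code assumes the PyEMD package for the EEMD step, uses a synthetic stand-in for a VAG recording, and the selection thresholds are illustrative assumptions rather than the paper's criteria.

```python
import numpy as np
from PyEMD import EEMD          # assumes the PyEMD package is installed

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis scaling exponent of a 1-D signal."""
    profile = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# synthetic stand-in for a VAG recording: transient vibration + baseline wander + noise
rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
vag = (np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - 2000) / 600) ** 2)   # "signal" component
       + 0.5 * np.sin(2 * np.pi * 0.001 * t)                             # baseline wander
       + 0.3 * rng.standard_normal(n))                                   # white noise

imfs = EEMD().eemd(vag)
alphas = np.array([dfa_exponent(imf) for imf in imfs])

# illustrative selection rule (thresholds are assumptions, not the paper's criteria):
# drop anti-correlated, noise-like IMFs (low alpha) and near-monotonic baseline
# components (very high alpha); keep the rest for reconstruction.
keep = (alphas >= 0.5) & (alphas <= 1.5)
denoised = imfs[keep].sum(axis=0)
print("IMF scaling exponents:", np.round(alphas, 2))
print(f"kept {keep.sum()} of {len(imfs)} IMFs for the reconstructed signal")
```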

  18. Quantifying forecast quality of IT business value

    NARCIS (Netherlands)

    Eveleens, J.L.; van der Pas, M.; Verhoef, C.

    2012-01-01

    This article discusses how to quantify the forecasting quality of IT business value. We address a common economic indicator often used to determine the business value of project proposals, the Net Present Value (NPV). To quantify the forecasting quality of IT business value, we develop a generalized

  19. Signal detection

    International Nuclear Information System (INIS)

    Tholomier, M.

    1985-01-01

    In a scanning electron microscope, whatever signal is measured, the same chain is found: incident beam, sample, signal detection, signal amplification. The resulting signal is used to control the spot luminosity of the observer's cathodoscope. This is synchronized with the beam scanning on the sample; on the cathodoscope, the image of the sample surface in secondary electrons, backscattered electrons,... is reconstituted. The best compromise must be found between a recording time short enough to avoid possible variations (under the incident beam) in the nature of the observed phenomenon, a good spatial resolution of the image, and a sufficiently high signal-to-noise ratio. Noise is one of the basic limitations of scanning electron microscope performance, and the whole measurement chain must be optimized to reduce it [fr]

  20. Bare quantifier fronting as contrastive topicalization

    Directory of Open Access Journals (Sweden)

    Ion Giurgea

    2015-11-01

    Full Text Available I argue that indefinites (in particular bare quantifiers such as ‘something’, ‘somebody’, etc.) which are neither existentially presupposed nor in the restriction of a quantifier over situations, can undergo topicalization in a number of Romance languages (Catalan, Italian, Romanian, Spanish), but only if the sentence contains “verum” focus, i.e. focus on a high degree of certainty of the sentence. I analyze these indefinites as contrastive topics, using Büring’s (1999) theory (where the term ‘S-topic’ is used for what I call ‘contrastive topic’). I propose that the topic is evaluated in relation to a scalar set including generalized quantifiers such as {λP ∃x P(x), λP MANYx P(x), λP MOSTx P(x), λP ∀x P(x)} or {λP ∃x P(x), λP P(a), λP P(b), …}, and that the contrastive topic is the weakest generalized quantifier in this set. The verum focus, which is part of the “comment” that co-occurs with the “Topic”, introduces a set of alternatives including degrees of certainty of the assertion. The speaker asserts that his claim is certainly true or highly probable, contrasting it with stronger claims for which the degree of probability is unknown. This explains the observation that in downward entailing contexts, the fronted quantified DPs are headed by ‘all’ or ‘many’, whereas ‘some’, small numbers or ‘at least n’ appear in upward entailing contexts. Unlike other cases of non-specific topics, which are property topics, these are quantifier topics: the topic part is a generalized quantifier, the comment is a property of generalized quantifiers. This explains the narrow scope of the fronted quantified DP.

  1. Saturated excitation of Fluorescence to quantify excitation enhancement in aperture antennas

    KAUST Repository

    Aouani, Heykel

    2012-07-23

    Fluorescence spectroscopy is widely used to probe the electromagnetic intensity amplification on optical antennas, yet measuring the excitation intensity amplification is a challenge, as the detected fluorescence signal is an intricate combination of excitation and emission. Here, we describe a novel approach to quantify the electromagnetic amplification in aperture antennas by taking advantage of the intrinsic non linear properties of the fluorescence process. Experimental measurements of the fundamental f and second harmonic 2f amplitudes of the fluorescence signal upon excitation modulation are used to quantify the electromagnetic intensity amplification with plasmonic aperture antennas. © 2012 Optical Society of America.

  2. Saturated excitation of Fluorescence to quantify excitation enhancement in aperture antennas

    KAUST Repository

    Aouani, Heykel; Hostein, Richard; Mahboub, Oussama; Devaux, Eloïse; Rigneault, Hervé; Ebbesen, Thomas W.; Wenger, Jérôme

    2012-01-01

    Fluorescence spectroscopy is widely used to probe the electromagnetic intensity amplification on optical antennas, yet measuring the excitation intensity amplification is a challenge, as the detected fluorescence signal is an intricate combination of excitation and emission. Here, we describe a novel approach to quantify the electromagnetic amplification in aperture antennas by taking advantage of the intrinsic non linear properties of the fluorescence process. Experimental measurements of the fundamental f and second harmonic 2f amplitudes of the fluorescence signal upon excitation modulation are used to quantify the electromagnetic intensity amplification with plasmonic aperture antennas. © 2012 Optical Society of America.

  3. Quantify Risk to Manage Cost and Schedule

    National Research Council Canada - National Science Library

    Raymond, Fred

    1999-01-01

    Too many projects suffer from unachievable budget and schedule goals, caused by unrealistic estimates and the failure to quantify and communicate the uncertainty of these estimates to managers and sponsoring executives...

  4. Quantifying drug-protein binding in vivo

    International Nuclear Information System (INIS)

    Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

    2004-01-01

    Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ("micro-") dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

  5. New frontiers of quantified self 3

    DEFF Research Database (Denmark)

    Rapp, Amon; Cena, Federica; Kay, Judy

    2017-01-01

    Quantified Self (QS) field needs to start thinking of how situated needs may affect the use of self-tracking technologies. In this workshop we will focus on the idiosyncrasies of specific categories of users....

  6. Quantifying Short-Chain Chlorinated Paraffin Congener Groups.

    Science.gov (United States)

    Yuan, Bo; Bogdal, Christian; Berger, Urs; MacLeod, Matthew; Gebbink, Wouter A; Alsberg, Tomas; de Wit, Cynthia A

    2017-09-19

    Accurate quantification of short-chain chlorinated paraffins (SCCPs) poses an exceptional challenge to analytical chemists. SCCPs are complex mixtures of chlorinated alkanes with variable chain length and chlorination level; congeners with a fixed chain length (n) and number of chlorines (m) are referred to as a "congener group" CnClm. Recently, we resolved individual CnClm by mathematically deconvolving soft ionization high-resolution mass spectra of SCCP mixtures. Here we extend the method to quantifying CnClm by introducing CnClm specific response factors (RFs) that are calculated from 17 SCCP chain-length standards with a single carbon chain length and variable chlorination level. The signal pattern of each standard is measured on APCI-QTOF-MS. RFs of each CnClm are obtained by pairwise optimization of the normal distribution's fit to the signal patterns of the 17 chain-length standards. The method was verified by quantifying SCCP technical mixtures and spiked environmental samples with accuracies of 82-123% and 76-109%, respectively. The absolute differences between calculated and manufacturer-reported chlorination degrees were -0.9 to 1.0%Cl for SCCP mixtures of 49-71%Cl. The quantification method has been replicated with ECNI magnetic sector MS and ECNI-Q-Orbitrap-MS. CnClm concentrations determined with the three instruments were highly correlated (R² > 0.90) with each other.

  7. Determination of signal intensity affected by Gaussian noise

    International Nuclear Information System (INIS)

    Blostein, Jeronimo J.; Bennun, Leonardo

    1999-01-01

    A methodology based on maximum likelihood criteria for identifying and quantifying an arbitrary signal affected by Gaussian noise is presented. To use this methodology it is necessary to know the position in the spectrum where the signal of interest should appear, and the shape of the signal when the background is null or negligible. (author)
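
    With Gaussian noise, a known peak position and a known signal shape, the maximum-likelihood estimate of the signal intensity reduces to ordinary least squares. The sketch below fits the amplitude of a known Gaussian peak plus a flat background to a simulated spectrum; the peak shape, position and noise level are assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# known peak position and shape (assumptions for illustration)
channels = np.arange(200)
peak_pos, peak_width = 100.0, 5.0
shape = np.exp(-0.5 * ((channels - peak_pos) / peak_width) ** 2)

# simulated spectrum: amplitude * shape + flat background + Gaussian noise
true_amplitude, background, sigma = 40.0, 10.0, 3.0
spectrum = true_amplitude * shape + background + sigma * rng.standard_normal(channels.size)

# with Gaussian noise the maximum-likelihood estimate of (amplitude, background)
# is the ordinary least-squares solution of spectrum ~ A * shape + B * 1
design = np.column_stack([shape, np.ones_like(shape)])
coef, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
amp_hat, bkg_hat = coef

# covariance of the estimates, sigma^2 (X^T X)^-1, gives their uncertainties
cov = sigma ** 2 * np.linalg.inv(design.T @ design)
print(f"amplitude  = {amp_hat:5.1f} +/- {np.sqrt(cov[0, 0]):.1f}  (true {true_amplitude})")
print(f"background = {bkg_hat:5.1f} +/- {np.sqrt(cov[1, 1]):.1f}  (true {background})")
```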

  8. Signal Processing

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Signal processing techniques, extensively used nowadays to maximize the performance of audio and video equipment, have been a key part in the design of hardware and software for high energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979

  9. CONFOCAL MICROSCOPY SYSTEM PERFORMANCE: FOUNDATIONS FOR QUANTIFYING CYTOMETRIC APPLICATIONS WITH SPECTROSCOPIC INSTRUMENTS

    Science.gov (United States)

    The confocal laser-scanning microscopy (CLSM) has enormous potential in many biological fields. The goal of a CLSM is to acquire and quantify fluorescence and in some instruments acquire spectral characterization of the emitted signal. The accuracy of these measurements demands t...

  10. Quantifying graininess of glossy food products

    DEFF Research Database (Denmark)

    Møller, Flemming; Carstensen, Jens Michael

    The sensory quality of yoghurt can be altered when changing the milk composition or processing conditions. Part of the sensory quality may be assessed visually. It is described how a non-contact method for quantifying surface gloss and grains in yoghurt can be made. It was found that the standard...

  11. Quantifying antimicrobial resistance at veal calf farms

    NARCIS (Netherlands)

    Bosman, A.B.; Wagenaar, J.A.; Stegeman, A.; Vernooij, H.; Mevius, D.J.

    2012-01-01

    This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From

  12. QS Spiral: Visualizing Periodic Quantified Self Data

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Cuttone, Andrea; Jørgensen, Sune Lehmann

    2013-01-01

    In this paper we propose an interactive visualization technique QS Spiral that aims to capture the periodic properties of quantified self data and let the user explore those recurring patterns. The approach is based on time-series data visualized as a spiral structure. The interactivity includes ...

  13. Quantifying recontamination through factory environments - a review

    NARCIS (Netherlands)

    Asselt-den Aantrekker, van E.D.; Boom, R.M.; Zwietering, M.H.; Schothorst, van M.

    2003-01-01

    Recontamination of food products can be the origin of foodborne illnesses and should therefore be included in quantitative microbial risk assessment (MRA) studies. In order to do this, recontamination should be quantified using predictive models. This paper gives an overview of the relevant

  14. Quantifying quantum coherence with quantum Fisher information.

    Science.gov (United States)

    Feng, X N; Wei, L F

    2017-11-14

    Quantum coherence is one of the old but always important concepts in quantum mechanics, and now it has been regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has only recently received attention (see, e.g., Baumgratz et al., Phys. Rev. Lett. 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under the typical incoherent operations and convexity under the mixing of quantum states. Differing from most of the purely axiomatic methods, quantifying quantum coherence by QFI could be experimentally testable, as the bound of the QFI is practically measurable. The validity of our proposal is specifically demonstrated with the typical phase-damping and depolarizing evolution processes of a generic single-qubit state, and also by comparing it with the other quantifying methods proposed previously.
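
    A common closed form for the quantum Fisher information of a state rho with respect to a generator H is F_Q = 2 * sum_{i,j} (l_i - l_j)^2 / (l_i + l_j) |<i|H|j>|^2 over the eigendecomposition of rho. The sketch below evaluates it for a single-qubit state undergoing phase damping, illustrating that the QFI shrinks as coherence is lost; it is a hedged illustration, not the specific coherence quantifier construction of the record.

```python
import numpy as np

def quantum_fisher_information(rho, H, tol=1e-12):
    """F_Q(rho, H) = 2 * sum_{i,j} (l_i - l_j)^2 / (l_i + l_j) * |<i|H|j>|^2,
    summing only over eigenvalue pairs with l_i + l_j > tol."""
    vals, vecs = np.linalg.eigh(rho)
    H_eig = vecs.conj().T @ H @ vecs        # H expressed in the eigenbasis of rho
    fq = 0.0
    for i in range(len(vals)):
        for j in range(len(vals)):
            denom = vals[i] + vals[j]
            if denom > tol:
                fq += 2.0 * (vals[i] - vals[j]) ** 2 / denom * abs(H_eig[i, j]) ** 2
    return fq

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|, maximally coherent

for p in (0.0, 0.3, 0.6, 0.9):
    rho = plus.copy()
    rho[0, 1] *= (1 - p)        # phase damping shrinks the off-diagonal coherences
    rho[1, 0] *= (1 - p)
    print(f"p = {p:.1f}   F_Q(rho, sigma_z/2) = {quantum_fisher_information(rho, sigma_z / 2):.3f}")
```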

  15. Interbank exposures: quantifying the risk of contagion

    OpenAIRE

    C. H. Furfine

    1999-01-01

    This paper examines the likelihood that failure of one bank would cause the subsequent collapse of a large number of other banks. Using unique data on interbank payment flows, the magnitude of bilateral federal funds exposures is quantified. These exposures are used to simulate the impact of various failure scenarios, and the risk of contagion is found to be economically small.

  16. Quantifying Productivity Gains from Foreign Investment

    NARCIS (Netherlands)

    C. Fons-Rosen (Christian); S. Kalemli-Ozcan (Sebnem); B.E. Sorensen (Bent); C. Villegas-Sanchez (Carolina)

    2013-01-01

    We quantify the causal effect of foreign investment on total factor productivity (TFP) using a new global firm-level database. Our identification strategy relies on exploiting the difference in the amount of foreign investment by financial and industrial investors and simultaneously

  17. Power Curve Measurements, quantify the production increase

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    The purpose of this report is to quantify the production increase on a given turbine with respect to another given turbine. The used methodology is the “side by side” comparison method, provided by the client. This method involves the use of two neighboring turbines and it is based...

  18. Quantifying capital goods for waste landfilling

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Stentsøe, Steen; Willumsen, Hans Christian

    2013-01-01

    Materials and energy used for construction of a hill-type landfill of 4 million m3 were quantified in detail. The landfill is engineered with a liner and leachate collections system, as well as a gas collection and control system. Gravel and clay were the most common materials used, amounting...

  19. Quantifying interspecific coagulation efficiency of phytoplankton

    DEFF Research Database (Denmark)

    Hansen, J.L.S.; Kiørboe, Thomas

    1997-01-01

    . nordenskjoeldii. Mutual coagulation between Skeletonema costatum and the non-sticky cells of Ditylum brightwellii also proceeded with half the efficiency of S. costatum alone. The latex beads were suitable to be used as 'standard particles' to quantify the ability of phytoplankton to prime aggregation...

  20. New frontiers of quantified self 2

    DEFF Research Database (Denmark)

    Rapp, Amon; Cena, Federica; Kay, Judy

    2016-01-01

    While the Quantified Self (QS) community is described in terms of "self-knowledge through numbers" people are increasingly demanding value and meaning. In this workshop we aim at refocusing the QS debate on the value of data for providing new services....

  1. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  2. Reliability-How to Quantify and Improve?

    Indian Academy of Sciences (India)

    Reliability – How to Quantify and Improve? – Improving the Health of Products. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 55–63.

  3. Integrin Signalling

    OpenAIRE

    Schelfaut, Roselien

    2005-01-01

    Integrins are receptors presented on most cells. By binding ligand they can generate signalling pathways inside the cell. Those pathways are a linkage to proteins in the cytosol. It is known that tumour cells can survive and proliferate in the absence of a solid support, while normal cells need to be bound to ligand. To understand why tumour cells act that way, we first have to know how ligand-binding to integrins affects the cell. This research field includes studies on activation of proteins b...

  4. Quantifying Stock Return Distributions in Financial Markets.

    Science.gov (United States)

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales.
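
    A minimal Python sketch of the kind of computation described, applied to a synthetic price series (the DJIA tick data are not reproduced here): log returns are formed at several aggregation scales and a crude tail exponent is read off the empirical complementary CDF of their absolute values. All data and the fitting window are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic second-by-second "prices": geometric random walk with heavy-ish tailed shocks (illustrative only)
        shocks = rng.standard_t(df=3, size=500_000) * 1e-4
        prices = 100.0 * np.exp(np.cumsum(shocks))

        def abs_log_returns(p, dt):
            """Absolute log returns over a horizon of dt samples."""
            logp = np.log(p[::dt])
            return np.abs(np.diff(logp))

        for dt in (60, 300, 3600):
            r = np.sort(abs_log_returns(prices, dt))[::-1]
            ccdf = np.arange(1, r.size + 1) / r.size          # empirical P(|return| >= r)
            k = max(r.size // 100, 10)                        # fit over the largest ~1% of returns
            slope, _ = np.polyfit(np.log(r[:k]), np.log(ccdf[:k]), 1)
            print(f"dt={dt:5d}s  n={r.size:6d}  tail slope ~ {slope:.2f}")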

  5. A masking index for quantifying hidden glitches

    OpenAIRE

    Berti-Equille, Laure; Loh, J. M.; Dasu, T.

    2015-01-01

    Data glitches are errors in a dataset. They are complex entities that often span multiple attributes and records. When they co-occur in data, the presence of one type of glitch can hinder the detection of another type of glitch. This phenomenon is called masking. In this paper, we define two important types of masking and propose a novel, statistically rigorous indicator called masking index for quantifying the hidden glitches. We outline four cases of masking: outliers masked by missing valu...

  6. How are the catastrophical risks quantifiable

    International Nuclear Information System (INIS)

    Chakraborty, S.

    1985-01-01

    For the assessment and evaluation of industrial risks, the question must be asked of how catastrophic risks can be quantified. Typical real catastrophic risks and risk assessments based on modelling assumptions have been placed against each other in order to put the risks into proper perspective. However, society is risk-averse when a large-scale industrial facility has the potential for severe catastrophic accidents, even though the probability of occurrence is extremely low. (orig.) [de

  7. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...

  8. Quantifying Anthropogenic Stress on Groundwater Resources

    OpenAIRE

    Ashraf, Batool; AghaKouchak, Amir; Alizadeh, Amin; Mousavi Baygi, Mohammad; R. Moftakhari, Hamed; Mirchi, Ali; Anjileli, Hassan; Madani, Kaveh

    2017-01-01

    This study explores a general framework for quantifying anthropogenic influences on groundwater budget based on normalized human outflow (hout) and inflow (hin). The framework is useful for sustainability assessment of groundwater systems and allows investigating the effects of different human water abstraction scenarios on the overall aquifer regime (e.g., depleted, natural flow-dominated, and human flow-dominated). We apply this approach to selected regions in the USA, Germany and Iran to e...

  9. Quantifying commuter exposures to volatile organic compounds

    Science.gov (United States)

    Kayne, Ashleigh

    Motor-vehicles can be a predominant source of air pollution in cities. Traffic-related air pollution is often unavoidable for people who live in populous areas. Commuters may have high exposures to traffic-related air pollution as they are close to vehicle tailpipes. Volatile organic compounds (VOCs) are one class of air pollutants of concern because exposure to VOCs carries risk for adverse health effects. Specific VOCs of interest for this work include benzene, toluene, ethylbenzene, and xylenes (BTEX), which are often found in gasoline and combustion products. Although methods exist to measure time-integrated personal exposures to BTEX, there are few practical methods to measure a commuter's time-resolved BTEX exposure which could identify peak exposures that could be concealed with a time-integrated measurement. This study evaluated the ability of a photoionization detector (PID) to measure commuters' exposure to BTEX using Tenax TA samples as a reference and quantified the difference in BTEX exposure between cyclists and drivers with windows open and closed. To determine the suitability of two measurement methods (PID and Tenax TA) for use in this study, the precision, linearity, and limits of detection (LODs) for both the PID and Tenax TA measurement methods were determined in the laboratory with standard BTEX calibration gases. Volunteers commuted from their homes to their work places by cycling or driving while wearing a personal exposure backpack containing a collocated PID and Tenax TA sampler. Volunteers completed a survey and indicated if the windows in their vehicle were open or closed. Comparing pairs of exposure data from the Tenax TA and PID sampling methods determined the suitability of the PID to measure the BTEX exposures of commuters. The difference between BTEX exposures of cyclists and drivers with windows open and closed in Fort Collins was determined. Both the PID and Tenax TA measurement methods were precise and linear when evaluated in the

  10. Quantifying camouflage: how to predict detectability from appearance.

    Science.gov (United States)

    Troscianko, Jolyon; Skelhorn, John; Stevens, Martin

    2017-01-06

    advance in our understanding of the measurement, mechanism and definition of disruptive camouflage. Our study also provides the first test of the efficacy of many established methods for quantifying how conspicuous animals are against particular backgrounds. The validation of these methods opens up new lines of investigation surrounding the form and function of different types of camouflage, and may apply more broadly to the evolution of any visual signal.

  11. Quantifying anisotropy and fiber orientation in human brain histological sections

    Directory of Open Access Journals (Sweden)

    Matthew D Budde

    2013-02-01

    Full Text Available Diffusion weighted imaging (DWI) has provided unparalleled insight into the microscopic structure and organization of the central nervous system. Diffusion tensor imaging (DTI) and other models of the diffusion MRI signal extract microstructural properties of tissues with relevance to the normal and injured brain. Despite the prevalence of such techniques and applications, accurate and large-scale validation has proven difficult, particularly in the human brain. In this report, human brain sections obtained from a digital public brain bank were employed to quantify anisotropy and fiber orientation using structure tensor analysis. The derived maps depict the intricate complexity of white matter fibers at a resolution not attainable with current DWI experiments. Moreover, the effects of multiple fiber bundles (i.e. crossing fibers) and intravoxel fiber dispersion were demonstrated. Examination of the cortex and hippocampal regions validated specific features of previous in vivo and ex vivo DTI studies of the human brain. Despite the limitation to two dimensions, the resulting images provide a unique depiction of white matter organization at resolutions currently unattainable with DWI. The method of analysis may be used to validate tissue properties derived from DTI and alternative models of the diffusion signal.
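
    The following is a hedged sketch of 2D structure tensor analysis on a grayscale image, the general technique named in the abstract: smoothed gradient products form a local tensor whose eigen-structure yields a dominant orientation and a coherence (anisotropy) value per pixel. The test image, smoothing scales and coherence definition are standard choices, not necessarily those of the cited study.

        import numpy as np
        from scipy import ndimage

        def structure_tensor_orientation(img, sigma_grad=1.0, sigma_win=4.0):
            """Per-pixel dominant orientation of intensity variation (fibers run roughly
            perpendicular to it) and coherence in [0, 1] from the 2D structure tensor."""
            Ix = ndimage.gaussian_filter(img, sigma_grad, order=(0, 1))
            Iy = ndimage.gaussian_filter(img, sigma_grad, order=(1, 0))
            Jxx = ndimage.gaussian_filter(Ix * Ix, sigma_win)
            Jxy = ndimage.gaussian_filter(Ix * Iy, sigma_win)
            Jyy = ndimage.gaussian_filter(Iy * Iy, sigma_win)
            orientation = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
            # eigenvalue gap over eigenvalue sum of the 2x2 tensor
            gap = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
            coherence = gap / (Jxx + Jyy + 1e-12)
            return orientation, coherence

        # illustrative test image: oriented stripes, so coherence should be close to 1
        y, x = np.mgrid[0:256, 0:256]
        img = np.sin(0.3 * (x * np.cos(0.5) + y * np.sin(0.5)))
        ori, coh = structure_tensor_orientation(img)
        print(f"median orientation: {np.degrees(np.median(ori)):.1f} deg, median coherence: {np.median(coh):.2f}")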

  12. Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.

    Science.gov (United States)

    Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew

    2018-01-01

    Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  13. Quantifier spreading: children misled by ostensive cues

    Directory of Open Access Journals (Sweden)

    Katalin É. Kiss

    2017-04-01

    Full Text Available This paper calls attention to a methodological problem of acquisition experiments. It shows that the economy of the stimulus employed in child language experiments may lend an increased ostensive effect to the message communicated to the child. Thus, when the visual stimulus in a sentence-picture matching task is a minimal model abstracting away from the details of the situation, children often regard all the elements of the stimulus as ostensive clues to be represented in the corresponding sentence. The use of such minimal stimuli is mistaken when the experiment aims to test whether or not a certain element of the stimulus is relevant for the linguistic representation or interpretation. The paper illustrates this point by an experiment involving quantifier spreading. It is claimed that children find a universally quantified sentence like 'Every girl is riding a bicycle' to be a false description of a picture showing three girls riding bicycles and a solo bicycle because they are misled to believe that all the elements in the visual stimulus are relevant, hence all of them are to be represented by the corresponding linguistic description. When the iconic drawings were replaced by photos taken in a natural environment rich in accidental details, the occurrence of quantifier spreading was radically reduced. It is shown that an extra object in the visual stimulus can lead to the rejection of the sentence also in the case of sentences involving no quantification, which gives further support to the claim that the source of the problem is not (or not only) the grammatical or cognitive difficulty of quantification but the unintended ostensive effect of the extra object. This article is part of the special collection: Acquisition of Quantification

  14. Quantifying information leakage of randomized protocols

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Malacaria, Pasquale

    2015-01-01

    The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach we model refined attackers...... capable of observing the internal behavior of the system, and quantify the information leakage of such systems. We also use our method to obtain an algorithm for the computation of channel capacity from our Markovian models. Finally, we show how to use the method to analyze timed and non-timed attacks...
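
    The Markovian machinery of the record is not reproduced here; as a minimal sketch of the underlying quantity, the following Python snippet computes Shannon leakage as the mutual information between a secret input and an observable output for a small, made-up channel matrix.

        import numpy as np

        def shannon_leakage(prior, channel):
            """Leakage I(S;O) = H(S) - H(S|O) for secret prior p(s) and channel p(o|s)."""
            prior = np.asarray(prior, dtype=float)
            channel = np.asarray(channel, dtype=float)
            joint = prior[:, None] * channel                  # p(s, o)
            p_o = joint.sum(axis=0)                           # p(o)

            def H(p):
                p = p[p > 0]
                return -(p * np.log2(p)).sum()

            H_S = H(prior)
            H_S_given_O = sum(H(joint[:, o] / p_o[o]) * p_o[o]
                              for o in range(len(p_o)) if p_o[o] > 0)
            return H_S - H_S_given_O

        # illustrative 2-secret, 2-observation randomized protocol (values assumed for the example)
        prior = [0.5, 0.5]
        channel = [[0.9, 0.1],    # p(o | s=0)
                   [0.3, 0.7]]    # p(o | s=1)
        print(f"leakage = {shannon_leakage(prior, channel):.3f} bits out of 1.000 bit of secret")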

  15. Characterization of autoregressive processes using entropic quantifiers

    Science.gov (United States)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
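
    A minimal sketch of the Bandt and Pompe permutation entropy on which such causal planes are built, assuming an illustrative embedding order and delay; the comparison between white noise and an AR(1) process simply shows that stronger autocorrelation lowers the ordinal-pattern entropy.

        import numpy as np
        from collections import Counter
        from math import factorial, log

        def permutation_entropy(x, order=4, delay=1, normalize=True):
            """Bandt-Pompe permutation entropy: Shannon entropy of ordinal-pattern frequencies."""
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            patterns = Counter(
                tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)
            )
            probs = np.array(list(patterns.values()), dtype=float) / n
            H = -(probs * np.log(probs)).sum()
            return H / log(factorial(order)) if normalize else H

        rng = np.random.default_rng(1)
        white = rng.standard_normal(10_000)
        # an AR(1) process has more predictable ordinal structure, hence lower permutation entropy
        ar1 = np.zeros(10_000)
        for t in range(1, 10_000):
            ar1[t] = 0.9 * ar1[t - 1] + rng.standard_normal()
        print(f"PE(white noise)    = {permutation_entropy(white):.3f}")
        print(f"PE(AR(1), phi=0.9) = {permutation_entropy(ar1):.3f}")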

  16. Quantifying Heuristic Bias: Anchoring, Availability, and Representativeness.

    Science.gov (United States)

    Richie, Megan; Josephson, S Andrew

    2018-01-01

    Construct: Authors examined whether a new vignette-based instrument could isolate and quantify heuristic bias. Heuristics are cognitive shortcuts that may introduce bias and contribute to error. There is no standardized instrument available to quantify heuristic bias in clinical decision making, limiting future study of educational interventions designed to improve calibration of medical decisions. This study presents validity data to support a vignette-based instrument quantifying bias due to the anchoring, availability, and representativeness heuristics. Participants completed questionnaires requiring assignment of probabilities to potential outcomes of medical and nonmedical scenarios. The instrument randomly presented scenarios in one of two versions: Version A, encouraging heuristic bias, and Version B, worded neutrally. The primary outcome was the difference in probability judgments for Version A versus Version B scenario options. Of 167 participants recruited, 139 enrolled. Participants assigned significantly higher mean probability values to Version A scenario options (M = 9.56, SD = 3.75) than Version B (M = 8.98, SD = 3.76), t(1801) = 3.27, p = .001. This result remained significant analyzing medical scenarios alone (Version A, M = 9.41, SD = 3.92; Version B, M = 8.86, SD = 4.09), t(1204) = 2.36, p = .02. Analyzing medical scenarios by heuristic revealed a significant difference between Version A and B for availability (Version A, M = 6.52, SD = 3.32; Version B, M = 5.52, SD = 3.05), t(404) = 3.04, p = .003, and representativeness (Version A, M = 11.45, SD = 3.12; Version B, M = 10.67, SD = 3.71), t(396) = 2.28, p = .02, but not anchoring. Stratifying by training level, students maintained a significant difference between Version A and B medical scenarios (Version A, M = 9.83, SD = 3.75; Version B, M = 9.00, SD = 3.98), t(465) = 2.29, p = .02, but not residents or attendings. Stratifying by heuristic and training level, availability maintained

  17. An index for quantifying flocking behavior.

    Science.gov (United States)

    Quera, Vicenç; Herrando, Salvador; Beltran, Francesc S; Salas, Laura; Miñano, Meritxell

    2007-12-01

    One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals, and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. An index of the aggregation of moving individuals in a flock was developed, and an example was provided of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock.
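
    The paper's specific index is not reproduced here; as a hedged illustration of the idea, the sketch below computes a Clark-Evans-style nearest-neighbour ratio that could be tracked over time to flag when moving individuals aggregate into a flock. The point sets, arena area and interpretation thresholds are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def nearest_neighbour_ratio(positions, area):
            """Observed mean nearest-neighbour distance over the value expected for a
            uniformly random arrangement (R << 1: clustered/flocked, R ~ 1: random)."""
            tree = cKDTree(positions)
            d, _ = tree.query(positions, k=2)        # column 1: nearest neighbour other than self
            observed = d[:, 1].mean()
            expected = 0.5 / np.sqrt(len(positions) / area)
            return observed / expected

        rng = np.random.default_rng(2)
        area = 100.0 * 100.0
        scattered = rng.uniform(0, 100, size=(200, 2))             # no flock
        flocked = rng.normal(loc=50.0, scale=3.0, size=(200, 2))   # tight cluster
        print(f"R scattered = {nearest_neighbour_ratio(scattered, area):.2f}")
        print(f"R flocked   = {nearest_neighbour_ratio(flocked, area):.2f}")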

  18. Quantifying the efficiency of river regulation

    Directory of Open Access Journals (Sweden)

    R. Rödel

    2005-01-01

    Full Text Available Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff times series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using the model of linear single storage. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for regulated long-term series of the Luleälven in northern Sweden.
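
    A hedged sketch of the linear single-storage idea mentioned above: a linear reservoir S = kQ implies exponential recessions Q(t) = Q0 exp(-t/k), so k can be fitted to a recession limb and a storage change between two flow regimes approximated as ΔS ≈ k(Q1 - Q2). The synthetic discharge series and the estimation steps are assumptions, not the procedure of the cited study.

        import numpy as np

        def recession_constant(q):
            """Storage constant k (days) of a linear reservoir Q = S/k, fitted to a
            recession limb by regressing ln Q against time."""
            t = np.arange(len(q), dtype=float)
            slope, _ = np.polyfit(t, np.log(q), 1)
            return -1.0 / slope

        # synthetic recession limb (daily discharge in m^3/s), true k = 25 days plus noise
        rng = np.random.default_rng(3)
        k_true = 25.0
        q = 80.0 * np.exp(-np.arange(60) / k_true) * np.exp(rng.normal(0, 0.02, 60))
        k_hat = recession_constant(q)

        # storage change implied by a shift between two flow regimes (illustrative values)
        q_before, q_after = 60.0, 45.0                      # m^3/s
        delta_S = k_hat * 86400.0 * (q_before - q_after)    # days -> seconds, result in m^3
        print(f"estimated k = {k_hat:.1f} days, implied storage change = {delta_S / 1e6:.1f} million m^3")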

  19. Quantifying meta-correlations in financial markets

    Science.gov (United States)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications to risk management, portfolio optimization, and to the increased stability of financial markets.
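
    A minimal Python sketch of the meta-correlation computation described, using a synthetic one-factor return panel in place of the DJIA constituents: windowed mean pairwise correlations are correlated with the windowed index return. The window length and data are illustrative, and this toy example is not expected to reproduce the sign or size of the empirical effect reported in the study.

        import numpy as np

        rng = np.random.default_rng(4)
        T, N, window = 2000, 30, 22          # days, stocks, roughly one trading month

        # synthetic one-factor returns so that stocks share a common component (illustrative)
        market = rng.normal(0, 0.01, T)
        returns = 0.8 * market[:, None] + rng.normal(0, 0.01, (T, N))
        index_ret = returns.mean(axis=1)

        mean_corr, idx_ret_w = [], []
        for start in range(0, T - window, window):
            block = returns[start:start + window]
            C = np.corrcoef(block, rowvar=False)
            mean_corr.append(C[np.triu_indices(N, k=1)].mean())
            idx_ret_w.append(index_ret[start:start + window].sum())

        meta_corr = np.corrcoef(mean_corr, idx_ret_w)[0, 1]
        print(f"meta-correlation (mean pairwise corr vs index return): {meta_corr:.2f}")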

  20. Electromyographic permutation entropy quantifies diaphragmatic denervation and reinnervation.

    Directory of Open Access Journals (Sweden)

    Christopher Kramer

    Full Text Available Spontaneous reinnervation after diaphragmatic paralysis due to trauma, surgery, tumors and spinal cord injuries is frequently observed. A possible explanation could be collateral reinnervation, since the diaphragm is commonly double-innervated by the (accessory) phrenic nerve. Permutation entropy (PeEn), a complexity measure for time series, may reflect a functional state of neuromuscular transmission by quantifying the complexity of interactions across neural and muscular networks. In an established rat model, electromyographic signals of the diaphragm after phrenicotomy were analyzed using PeEn, quantifying denervation and reinnervation. Thirty-three anesthetized rats were unilaterally phrenicotomized. After 1, 3, 9, 27 and 81 days, diaphragmatic electromyographic PeEn was analyzed in vivo from sternal, mid-costal and crural areas of both hemidiaphragms. After euthanasia of the animals, both hemidiaphragms were dissected for fiber type evaluation. The electromyographic incidence of an accessory phrenic nerve was 76%. At day 1 after phrenicotomy, PeEn (normalized values) was significantly diminished in the sternal (median: 0.69; interquartile range: 0.66-0.75) and mid-costal area (0.68; 0.66-0.72) compared to the non-denervated side (0.84; 0.78-0.90) at threshold p<0.05. In the crural area, innervated by the accessory phrenic nerve, PeEn remained unchanged (0.79; 0.72-0.86). During reinnervation over 81 days, PeEn normalized in the mid-costal area (0.84; 0.77-0.86), whereas it remained reduced in the sternal area (0.77; 0.70-0.81). Fiber type grouping, a histological sign for reinnervation, was found in the mid-costal area in 20% after 27 days and in 80% after 81 days. Collateral reinnervation can restore diaphragm activity after phrenicotomy. Electromyographic PeEn represents a new, distinctive assessment characterizing intramuscular function following denervation and reinnervation.

  1. Quantifying climate risk - the starting point

    International Nuclear Information System (INIS)

    Fairweather, Helen; Luo, Qunying; Liu, De Li; Wiles, Perry

    2007-01-01

    Full text: All natural systems have evolved to their current state as a result inter alia of the climate in which they developed. Similarly, man-made systems (such as agricultural production) have developed to suit the climate experienced over the last 100 or so years. The capacity of different systems to adapt to changes in climate that are outside those that have been experienced previously is largely unknown. This results in considerable uncertainty when predicting climate change impacts. However, it is possible to quantify the relative probabilities of a range of potential impacts of climate change. Quantifying current climate risks is an effective starting point for analysing the probable impacts of future climate change and guiding the selection of appropriate adaptation strategies. For a farming system to be viable within the current climate, its profitability must be sustained and, therefore, possible adaptation strategies need to be tested for continued viability in a changed climate. The methodology outlined in this paper examines historical patterns of key climate variables (rainfall and temperature) across the season and their influence on the productivity of wheat growing in NSW. This analysis is used to identify the time of year that the system is most vulnerable to climate variation, within the constraints of the current climate. Wheat yield is used as a measure of productivity, which is also assumed to be a surrogate for profitability. A time series of wheat yields is sorted into ascending order and categorised into five percentile groupings (i.e. 20th, 40th, 60th and 80th percentiles) for each shire across NSW (-100 years). Five time series of climate data (which are aggregated daily data from the years in each percentile) are analysed to determine the period that provides the greatest climate risk to the production system. Once this period has been determined, this risk is quantified in terms of the degree of separation of the time series

  2. Quantifying microbe-mineral interactions leading to remotely detectable induced polarization signals

    Energy Technology Data Exchange (ETDEWEB)

    Ntarlagiannis, Dimitrios; Moysey, Stephen; Dean, Delphine

    2013-11-14

    The objective of this project was to investigate controls on induced polarization responses in porous media. The approach taken in the project was to compare electrical measurements made on mineral surfaces with atomic force microscopy (AFM) techniques to observations made at the column-scale using traditional spectral induced polarization measurements. In the project we evaluated a number of techniques for investigating the surface properties of materials, including the development of a new AFM measurement protocol that utilizes an external electric field to induce grain-scale polarizations that can be probed using a charged AFM tip. The experiments we performed focused on idealized systems (i.e., glass beads and silica gel) where we could obtain the high degree of control needed to understand how changes in the pore environment, which are determined by biogeochemical controls in the subsurface, affect mechanisms contributing to complex electrical conductivity, i.e., conduction and polarization, responses. The studies we performed can be classified into those affecting the chemical versus physical properties of the grain surface and pore space. Chemical alterations of the surface focused on evaluating how changes in pore fluid pH and ionic composition control surface conduction. These were performed as column flow through experiments where the pore fluid was exchanged in a column of silica gel. Given that silica gel has a high surface area due to internal grain porosity, high-quality data could be obtained where the chemical influences on the surface are clearly apparent and qualitatively consistent with theories of grain (i.e., Stern layer) polarization controlled by electrostatic surface sorption processes (i.e., triple layer theory). Quantitative fitting of the results by existing process-based polarization models (e.g., Leroy et al., 2008) has been less successful, however, due to what we have attributed to differences between existing models developed for spherical grains versus the actual geometry associated with the nano-pores in the silica gel, though other polarization processes, e.g., proton hopping along the surface (Skold et al., 2013), may also be a contributing factor. As an alternative model-independent approach to confirming the link between surface sorption and SIP we initiated a study that will continue (unfunded) beyond the completion of this project to independently measure the accumulation of gamma emitting isotopes on the silica gel during the SIP monitoring experiments. Though our analyses of the project data are ongoing, our preliminary analyses are generally supportive of the grain (Stern layer) polarization theory of SIP. Experiments focused on evaluating the impact of physical modifications of the medium on polarization included etching and biotic and abiotic facilitated precipitation of carbonate and iron oxides to alter the roughness and electrical conductivity of the surfaces. These experiments were performed for both silica gel and glass beads, the latter of which lacked the interior porosity and high surface area of the silica gel. The results appear to be more nuanced than the chemical modifications of the system. In general, however, it was found that deposition of iron oxides and etching had relatively minimal or negative impacts on the polarization response of the medium, whereas carbonate coatings increased the polarization response. These results were generally consistent with changes in surface charge observed via AFM.
Abiotic and biotic column flow through experiments demonstrated that precipitation of carbonate within the medium significantly impacted the real and imaginary conductivity over time in a manner generally consistent with the carbonate precipitation as observed from the batch grain coating experiments. Biotic effects were not observed to provide distinctly different signatures, but may have contributed to differences in the rate of changes observed with SIP. AFM was used in a variety of different ways to investigate the grain surfaces throughout the course of the project. Standard imaging methods were used to evaluate surface roughness and charge density, which showed that these data could provide qualitative insights about consistency between surface trends and the electrical behavior at the column scale (for the case of glass beads). Polarization and conductive force microscopy (PCFM) measurements were developed by the original project PI (Treavor Kendall), which illustrated the importance of the initial few monolayers of water on the mineral surface for producing surface conductivity. The technique allowed for initial local estimates of complex electrical conductivity on mineral surfaces, but could not be pursued after Kendall left the project due to phase locking limitations with the AFM instrument at Clemson and an inability to perform measurements in solution, which limited their value for linking the measurements to column-scale SIP responses. As a result, co-PI Dean developed a new methodology for making AFM measurements within an externally applied electric field. In this method, the charged tip of an AFM probe is brought within the proximity of a polarization domain while an external electric field is applied to the sample. The premise of the approach is that the tip will be attracted to or rebound from charge accumulations on the surface, which allow for detection of the local polarization response. Initial experiments showed promise in terms of the general trends of responses observed, though we have not yet been able to develop a quantitative interpretation technique that can be applied to predicting column scale responses.

  3. Quantifying Microbe-Mineral Interactions Leading to Remotely Detectable Induced Polarization Signals (Final Project Report)

    Energy Technology Data Exchange (ETDEWEB)

    Moysey, Stephen [Clemson University; Dean, Delphine [Clemson University; Dimitrios, Ntarlagiannis [Rutgers University

    2013-11-13

    The objective of this project was to investigate controls on induced polarization responses in porous media. The approach taken in the project was to compare electrical measurements made on mineral surfaces with atomic force microscopy (AFM) techniques to observations made at the column-scale using traditional spectral induced polarization measurements. In the project we evaluated a number of techniques for investigating the surface properties of materials, including the development of a new AFM measurement protocol that utilizes an external electric field to induce grain-scale polarizations that can be probed using a charged AFM tip. The experiments we performed focused on idealized systems (i.e., glass beads and silica gel) where we could obtain the high degree of control needed to understand how changes in the pore environment, which are determined by biogeochemical controls in the subsurface, affect mechanisms contributing to complex electrical conductivity, i.e., conduction and polarization, responses. The studies we performed can be classified into those affecting the chemical versus physical properties of the grain surface and pore space. Chemical alterations of the surface focused on evaluating how changes in pore fluid pH and ionic composition control surface conduction. These were performed as column flow through experiments where the pore fluid was exchanged in a column of silica gel. Given that silica gel has a high surface area due to internal grain porosity, high-quality data could be obtained where the chemical influences on the surface are clearly apparent and qualitatively consistent with theories of grain (i.e., Stern layer) polarization controlled by electrostatic surface sorption processes (i.e., triple layer theory). Quantitative fitting of the results by existing process-based polarization models (e.g., Leroy et al., 2008) has been less successful, however, due to what we have attributed to differences between existing models developed for spherical grains versus the actual geometry associated with the nano-pores in the silica gel, though other polarization processes, e.g., proton hopping along the surface (Skold et al., 2013), may also be a contributing factor. As an alternative model-independent approach to confirming the link between surface sorption and SIP we initiated a study that will continue (unfunded) beyond the completion of this project to independently measure the accumulation of gamma emitting isotopes on the silica gel during the SIP monitoring experiments. Though our analyses of the project data are ongoing, our preliminary analyses are generally supportive of the grain (Stern layer) polarization theory of SIP. Experiments focused on evaluating the impact of physical modifications of the medium on polarization included etching and biotic and abiotic facilitated precipitation of carbonate and iron oxides to alter the roughness and electrical conductivity of the surfaces. These experiments were performed for both silica gel and glass beads, the latter of which lacked the interior porosity and high surface area of the silica gel. The results appear to be more nuanced than the chemical modifications of the system. In general, however, it was found that deposition of iron oxides and etching had relatively minimal or negative impacts on the polarization response of the medium, whereas carbonate coatings increased the polarization response. These results were generally consistent with changes in surface charge observed via AFM.
Abiotic and biotic column flow through experiments demonstrated that precipitation of carbonate within the medium significantly impacted the real and imaginary conductivity over time in a manner generally consistent with the carbonate precipitation as observed from the batch grain coating experiments. Biotic effects were not observed to provide distinctly different signatures, but may have contributed to differences in the rate of changes observed with SIP. AFM was used in a variety of different ways to investigate the grain surfaces throughout the course of the project. Standard imaging methods were used to evaluate surface roughness and charge density, which showed that these data could provide qualitative insights about consistency between surface trends and the electrical behavior at the column scale (for the case of glass beads). Polarization and conductive force microscopy (PCFM) measurements were developed by the original project PI (Treavor Kendall), which illustrated the importance of the initial few monolayers of water on the mineral surface for producing surface conductivity. The technique allowed for initial local estimates of complex electrical conductivity on mineral surfaces, but could not be pursued after Kendall left the project due to phase locking limitations with the AFM instrument at Clemson and an inability to perform measurements in solution, which limited their value for linking the measurements to column-scale SIP responses. As a result, co-PI Dean developed a new methodology for making AFM measurements within an externally applied electric field. In this method, the charged tip of an AFM probe is brought within the proximity of a polarization domain while an external electric field is applied to the sample. The premise of the approach is that the tip will be attracted to or rebound from charge accumulations on the surface, which allow for detection of the local polarization response. Initial experiments showed promise in terms of the general trends of responses observed, though we have not yet been able to develop a quantitative interpretation technique that can be applied to predicting column scale responses.

  4. Shirtless and Dangerous: Quantifying Linguistic Signals of Gender Bias in an Online Fiction Writing Community

    OpenAIRE

    Fast, Ethan; Vachovsky, Tina; Bernstein, Michael S.

    2016-01-01

    Imagine a princess asleep in a castle, waiting for her prince to slay the dragon and rescue her. Tales like the famous Sleeping Beauty clearly divide up gender roles. But what about more modern stories, borne of a generation increasingly aware of social constructs like sexism and racism? Do these stories tend to reinforce gender stereotypes, or counter them? In this paper, we present a technique that combines natural language processing with a crowdsourced lexicon of stereotypes to capture ge...

  5. Quantifying the Flexibility of Residential Electricity Demand in 2050: a Bottom-Up Approach

    OpenAIRE

    van Stiphout, Arne; Engels, Jonas; Guldentops, Dries; Deconinck, Geert

    2015-01-01

    This work presents a new method to quantify the flexibility of automatic demand response applied to residential electricity demand using price elasticities. A stochastic bottom-up model of flexible electricity demand in 2050 is presented. Three types of flexible devices are implemented: electrical heating, electric vehicles and wet appliances. Each house schedules its flexible demand w.r.t. a varying price signal, in order to minimize electricity cost. Own- and cross-price elasticities are ob...

  6. How to quantify conduits in wood?

    Science.gov (United States)

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology.

  7. Towards Quantifying a Wider Reality: Shannon Exonerata

    Directory of Open Access Journals (Sweden)

    Robert E. Ulanowicz

    2011-10-01

    Full Text Available In 1872 Ludwig von Boltzmann derived a statistical formula to represent the entropy (an apophasis) of a highly simplistic system. In 1948 Claude Shannon independently formulated the same expression to capture the positivist essence of information. Such contradictory thrusts engendered decades of ambiguity concerning exactly what is conveyed by the expression. Resolution of widespread confusion is possible by invoking the third law of thermodynamics, which requires that entropy be treated in a relativistic fashion. Doing so parses the Boltzmann expression into separate terms that segregate apophatic entropy from positivist information. Possibly more importantly, the decomposition itself portrays a dialectic-like agonism between constraint and disorder that may provide a more appropriate description of the behavior of living systems than is possible using conventional dynamics. By quantifying the apophatic side of evolution, the Shannon approach to information achieves what no other treatment of the subject affords: It opens the window on a more encompassing perception of reality.

  8. Message passing for quantified Boolean formulas

    International Nuclear Information System (INIS)

    Zhang, Pan; Ramezanpour, Abolfazl; Zecchina, Riccardo; Zdeborová, Lenka

    2012-01-01

    We introduce two types of message passing algorithms for quantified Boolean formulas (QBF). The first type is a message passing based heuristics that can prove unsatisfiability of the QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second type, we use message passing to guide branching heuristics of a Davis–Putnam–Logemann–Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristics give robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers

  9. Quantifying decoherence in continuous variable systems

    Energy Technology Data Exchange (ETDEWEB)

    Serafini, A [Dipartimento di Fisica ' ER Caianiello' , Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy); Paris, M G A [Dipartimento di Fisica and INFM, Universita di Milano, Milan (Italy); Illuminati, F [Dipartimento di Fisica ' ER Caianiello' , Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy); De Siena, S [Dipartimento di Fisica ' ER Caianiello' , Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)

    2005-04-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)

  10. Quantifying decoherence in continuous variable systems

    International Nuclear Information System (INIS)

    Serafini, A; Paris, M G A; Illuminati, F; De Siena, S

    2005-01-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)

  11. Crowdsourcing for quantifying transcripts: An exploratory study.

    Science.gov (United States)

    Azzam, Tarek; Harman, Elena

    2016-02-01

    This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or a set of tasks. In this study multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts, and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed method tool to offer a supplemental perspective on transcribed interviews. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Animal biometrics: quantifying and detecting phenotypic appearance.

    Science.gov (United States)

    Kühl, Hjalmar S; Burghardt, Tilo

    2013-07-01

    Animal biometrics is an emerging field that develops quantified approaches for representing and detecting the phenotypic appearance of species, individuals, behaviors, and morphological traits. It operates at the intersection between pattern recognition, ecology, and information sciences, producing computerized systems for phenotypic measurement and interpretation. Animal biometrics can benefit a wide range of disciplines, including biogeography, population ecology, and behavioral research. Currently, real-world applications are gaining momentum, augmenting the quantity and quality of ecological data collection and processing. However, to advance animal biometrics will require integration of methodologies among the scientific disciplines involved. Such efforts will be worthwhile because the great potential of this approach rests with the formal abstraction of phenomics, to create tractable interfaces between different organizational levels of life. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Quantifying capital goods for waste incineration

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Riber, C.; Christensen, Thomas Højlund

    2013-01-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden, the assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  14. Pendulum Underwater - An Approach for Quantifying Viscosity

    Science.gov (United States)

    Leme, José Costa; Oliveira, Agostinho

    2017-12-01

    The purpose of the experiment presented in this paper is to quantify the viscosity of a liquid. Viscous effects are important in the flow of fluids in pipes, in the bloodstream, in the lubrication of engine parts, and in many other situations. In the present paper, the authors explore the oscillations of a physical pendulum in the form of a long and lightweight wire that carries a ball at its lower end, totally immersed in water, so as to determine the water viscosity. The system represents a viscously damped pendulum, and we tried different theoretical models to describe it. The experimental part of the present paper is based on a very simple and low-cost image capturing apparatus that can easily be replicated in a physics classroom. Data on the pendulum's amplitude as a function of time were acquired using digital video analysis with the open source software Tracker.
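
    A hedged sketch of one way such data could be reduced: fit an exponential envelope to tracked oscillation amplitudes (as produced by Tracker) and convert the damping rate to a viscosity under a Stokes-drag assumption, b = 6πηR with γ = b/(2m). The synthetic amplitudes, ball mass and radius are assumptions, and the linear-drag model is only one of several theoretical descriptions one might try.

        import numpy as np
        from scipy.optimize import curve_fit

        def amplitude(t, A0, gamma):
            """Exponential amplitude decay of a lightly, linearly damped pendulum."""
            return A0 * np.exp(-gamma * t)

        # stand-in for amplitudes extracted from video analysis (synthetic, true gamma = 0.04 1/s)
        rng = np.random.default_rng(5)
        t = np.arange(0, 120, 3.0)                                     # s
        A = 0.15 * np.exp(-0.04 * t) + rng.normal(0, 0.002, t.size)    # m

        (A0_hat, gamma_hat), _ = curve_fit(amplitude, t, A, p0=(0.1, 0.1))

        # Stokes-drag assumption: b = 6*pi*eta*R and gamma = b / (2*m)  ->  eta = 2*m*gamma / (6*pi*R)
        m, R = 0.005, 0.02                                # kg, m (assumed ball mass and radius)
        eta = 2.0 * m * gamma_hat / (6.0 * np.pi * R)
        print(f"gamma = {gamma_hat:.3f} 1/s, implied viscosity = {eta * 1e3:.2f} mPa*s")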

  15. Quantifying gait patterns in Parkinson's disease

    Science.gov (United States)

    Romero, Mónica; Atehortúa, Angélica; Romero, Eduardo

    2017-11-01

    Parkinson's disease (PD) is constituted by a set of motor symptoms, namely tremor, rigidity, and bradykinesia, which are usually described but not quantified. This work proposes an objective characterization of PD gait patterns by approximating the single stance phase as a single grounded pendulum. This model estimates the force generated by the gait during the single support from gait data. This force describes the motion pattern for different stages of the disease. The model was validated using recorded videos of 8 young control subjects, 10 old control subjects and 10 subjects with Parkinson's disease in different stages. The estimated force showed differences among stages of Parkinson's disease, with a decrease in the estimated force observed for the advanced stages of the illness.

  16. Quantifying Temporal Genomic Erosion in Endangered Species.

    Science.gov (United States)

    Díez-Del-Molino, David; Sánchez-Barreiro, Fatima; Barnes, Ian; Gilbert, M Thomas P; Dalén, Love

    2018-03-01

    Many species have undergone dramatic population size declines over the past centuries. Although stochastic genetic processes during and after such declines are thought to elevate the risk of extinction, comparative analyses of genomic data from several endangered species suggest little concordance between genome-wide diversity and current population sizes. This is likely because species-specific life-history traits and ancient bottlenecks overshadow the genetic effect of recent demographic declines. Therefore, we advocate that temporal sampling of genomic data provides a more accurate approach to quantify genetic threats in endangered species. Specifically, genomic data from predecline museum specimens will provide valuable baseline data that enable accurate estimation of recent decreases in genome-wide diversity, increases in inbreeding levels, and accumulation of deleterious genetic variation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the  Deutsche Forschungsgemeinschaft (DFG) approved the  Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program.   Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance.  Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges.   Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  18. Quantifying the evolution of individual scientific impact.

    Science.gov (United States)

    Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabási, Albert-László

    2016-11-04

    Despite the frequent use of numerous quantitative indicators to gauge the professional impact of a scientist, little is known about how scientific impact emerges and evolves in time. Here, we quantify the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist's sequence of publications. This random-impact rule allows us to formulate a stochastic model that uncouples the effects of productivity, individual ability, and luck and unveils the existence of universal patterns governing the emergence of scientific success. The model assigns a unique individual parameter Q to each scientist, which is stable during a career, and it accurately predicts the evolution of a scientist's impact, from the h-index to cumulative citations, and independent recognitions, such as prizes. Copyright © 2016, American Association for the Advancement of Science.
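
    A toy simulation, under assumed distributions, of the random-impact rule described above: if each paper's impact is the product of a stable individual parameter Q and an independently drawn luck factor, the career position of the highest-impact paper is uniformly distributed across the publication sequence.

        import numpy as np

        rng = np.random.default_rng(6)
        n_scientists, n_papers = 20_000, 40

        Q = rng.lognormal(mean=0.0, sigma=0.5, size=n_scientists)             # stable ability parameter
        luck = rng.lognormal(mean=0.0, sigma=1.0, size=(n_scientists, n_papers))
        impact = Q[:, None] * luck                                            # c_{i,alpha} = Q_i * p_alpha

        best_position = impact.argmax(axis=1)     # index of the highest-impact paper in each career
        hist, _ = np.histogram(best_position, bins=np.arange(n_papers + 1))
        print("fraction of careers whose best paper falls in each tenth of the sequence:")
        print(np.round(hist.reshape(10, -1).sum(axis=1) / n_scientists, 3))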

  19. Quantifying creativity: can measures span the spectrum?

    Science.gov (United States)

    Simonton, Dean Keith

    2012-03-01

    Because cognitive neuroscientists have become increasingly interested in the phenomenon of creativity, the issue arises of how creativity is to be optimally measured. Unlike intelligence, which can be assessed across the full range of intellectual ability, creativity measures tend to concentrate on different sections of the overall spectrum. After first defining creativity in terms of the three criteria of novelty, usefulness, and surprise, this article provides an overview of the available measures. Not only do these instruments vary according to whether they focus on the creative process, person, or product, but they differ regarding whether they tap into "little-c" versus "Big-C" creativity; only productivity and eminence measures reach into genius-level manifestations of the phenomenon. The article closes by discussing whether various alternative assessment techniques can be integrated into a single measure that quantifies creativity across the full spectrum.

  20. Quantifying capital goods for waste incineration

    International Nuclear Information System (INIS)

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-01-01

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted
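
    A back-of-the-envelope check of how the reported figures fit together, with all inputs assumed for illustration (the paper's own lifetime and total-burden values are not quoted here): dividing an assumed construction burden by an assumed lifetime throughput lands in the reported 7–14 kg CO2 per tonne range and at roughly 2–3% of typical direct emissions.

        # Rough consistency check of the per-tonne capital-goods burden (all inputs are assumed, illustrative values).
        throughput_tpy = 160_000          # tonnes of waste per year (mid-range of the plants studied)
        lifetime_years = 25               # assumed plant lifetime
        construction_co2_tonnes = 40_000  # assumed total CO2 burden of materials, machinery and construction

        per_tonne = construction_co2_tonnes * 1000.0 / (throughput_tpy * lifetime_years)
        print(f"capital goods burden ~ {per_tonne:.1f} kg CO2 per tonne of waste combusted")

        direct_emissions = 400.0          # assumed direct fossil CO2, kg per tonne of waste combusted
        print(f"share relative to direct emissions ~ {100.0 * per_tonne / direct_emissions:.1f} %")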

  1. Quantifying structural states of soft mudrocks

    Science.gov (United States)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to such transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach and the critical intervals with low strength shale formations are identified.

  2. Quantifying uncertainty in nuclear analytical measurements

    International Nuclear Information System (INIS)

    2004-07-01

    The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general publication, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from the actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration

  3. Quantifying Urban Groundwater in Environmental Field Observatories

    Science.gov (United States)

    Welty, C.; Miller, A. J.; Belt, K.; Smith, J. A.; Band, L. E.; Groffman, P.; Scanlon, T.; Warner, J.; Ryan, R. J.; Yeskis, D.; McGuire, M. P.

    2006-12-01

    Despite the growing footprint of urban landscapes and their impacts on hydrologic and biogeochemical cycles, comprehensive field studies of urban water budgets are few. The cumulative effects of urban infrastructure (buildings, roads, culverts, storm drains, detention ponds, leaking water supply and wastewater pipe networks) on temporal and spatial patterns of groundwater stores, fluxes, and flowpaths are poorly understood. The goal of this project is to develop expertise and analytical tools for urban groundwater systems that will inform future environmental observatory planning and that can be shared with research teams working in urban environments elsewhere. The work plan for this project draws on a robust set of information resources in Maryland provided by ongoing monitoring efforts of the Baltimore Ecosystem Study (BES), USGS, and the U.S. Forest Service working together with university scientists and engineers from multiple institutions. A key concern is to bridge the gap between small-scale intensive field studies and larger-scale and longer-term hydrologic patterns using synoptic field surveys, remote sensing, numerical modeling, data mining and visualization tools. Using the urban water budget as a unifying theme, we are working toward estimating the various elements of the budget in order to quantify the influence of urban infrastructure on groundwater. Efforts include: (1) comparison of base flow behavior from stream gauges in a nested set of watersheds at four different spatial scales from 0.8 to 171 km2, with diverse patterns of impervious cover and urban infrastructure; (2) synoptic survey of well water levels to characterize the regional water table; (3) use of airborne thermal infrared imagery to identify locations of groundwater seepage into streams across a range of urban development patterns; (4) use of seepage transects and tracer tests to quantify the spatial pattern of groundwater fluxes to the drainage network in selected subwatersheds; (5

  4. Quantifying mechanical force in axonal growth and guidance

    Directory of Open Access Journals (Sweden)

    Ahmad Ibrahim Mahmoud Athamneh

    2015-09-01

    Full Text Available Mechanical force plays a fundamental role in neuronal development, physiology, and regeneration. In particular, research has shown that force is involved in growth cone-mediated axonal growth and guidance as well as stretch-induced elongation when an organism increases in size after forming initial synaptic connections. However, much of the details about the exact role of force in these fundamental processes remain unknown. In this review, we highlight (1 standing questions concerning the role of mechanical force in axonal growth and guidance and (2 different experimental techniques used to quantify forces in axons and growth cones. We believe that satisfying answers to these questions will require quantitative information about the relationship between elongation, forces, cytoskeletal dynamics, axonal transport, signaling, substrate adhesion, and stiffness contributing to directional growth advance. Furthermore, we address why a wide range of force values have been reported in the literature, and what these values mean in the context of neuronal mechanics. We hope that this review will provide a guide for those interested in studying the role of force in development and regeneration of neuronal networks.

  5. QUANTIFYING LIFE STYLE IMPACT ON LIFESPAN

    Directory of Open Access Journals (Sweden)

    Antonello Lorenzini

    2012-12-01

    Full Text Available A healthy diet, physical activity and avoiding dangerous habits such as smoking are effective ways of increasing health and lifespan. Although a significant portion of the world's population still suffers from malnutrition, especially children, the most common cause of death in the world today is non-communicable diseases. Overweight and obesity significantly increase the relative risk for the most relevant non-communicable diseases: cardiovascular disease, type II diabetes and some cancers. Childhood overweight also seems to increase the likelihood of disease in adulthood through epigenetic mechanisms. This worrisome trend, now termed "globesity", will deeply impact society unless preventive strategies are put into effect. Researchers of the basic biology of aging have clearly established that animals with short lifespans live longer when their diet is calorie restricted. Although similar experiments carried out on rhesus monkeys, a longer-lived species more closely related to humans, yielded mixed results, overall the available scientific data suggest that keeping the body mass index in the "normal" range increases the chances of living a longer, healthier life. This can be successfully achieved both by maintaining a healthy diet and by engaging in physical activity. In this review we will try to quantify the relative impact of life style choices on lifespan.

  6. Quantifying and Mapping Global Data Poverty.

    Science.gov (United States)

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.

  7. Quantifying capital goods for waste incineration.

    Science.gov (United States)

    Brogaard, L K; Riber, C; Christensen, T H

    2013-06-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Fluorescence imaging to quantify crop residue cover

    Science.gov (United States)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
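
    The core of the classification step is a simple threshold on pixel gray levels: count the fraction of pixels brighter than the threshold. A minimal sketch of that calculation follows; the synthetic scene statistics and the threshold value of 90 are illustrative assumptions, not values taken from the study.

    ```python
    import numpy as np

    def residue_cover_fraction(image, threshold):
        """Fraction of pixels whose fluorescence exceeds a gray-level threshold."""
        image = np.asarray(image, dtype=float)
        return float(np.mean(image > threshold))

    # Hypothetical 8-bit fluorescence image: dark soil background, brighter residue.
    rng = np.random.default_rng(0)
    soil = rng.normal(30, 5, size=(100, 100))        # low gray levels
    residue = rng.normal(180, 40, size=(100, 100))   # spans much of the 8-bit range
    mask = rng.random((100, 100)) < 0.35             # ~35% residue cover
    scene = np.clip(np.where(mask, residue, soil), 0, 255)

    print(f"Estimated cover: {residue_cover_fraction(scene, threshold=90):.2%}")
    ```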

  9. Quantifying Potential Groundwater Recharge In South Texas

    Science.gov (United States)

    Basant, S.; Zhou, Y.; Leite, P. A.; Wilcox, B. P.

    2015-12-01

    Groundwater in South Texas is heavily relied on for human consumption and irrigation for food crops. Like most of the south west US, woody encroachment has altered the grassland ecosystems here too. While brush removal has been widely implemented in Texas with the objective of increasing groundwater recharge, the linkage between vegetation and groundwater recharge in South Texas is still unclear. Studies have been conducted to understand plant-root-water dynamics at the scale of plants. However, little work has been done to quantify the changes in soil water and deep percolation at the landscape scale. Modeling water flow through soil profiles can provide an estimate of the total water flowing into deep percolation. These models are especially powerful when parameterized and calibrated with long-term soil water data. In this study we parameterize the HYDRUS soil water model using long-term soil water data collected in Jim Wells County in South Texas. Soil water was measured at 20 cm intervals to a depth of 200 cm. The parameterized model will be used to simulate soil water dynamics under a variety of precipitation regimes ranging from well above normal to severe drought conditions. The results from the model will be compared with the changes in soil moisture profile observed in response to vegetation cover and treatments from a study in a similar area. Comparative studies like this can be used to build new and strengthen existing hypotheses regarding deep percolation and the role of soil texture and vegetation in groundwater recharge.

  10. Quantifying Anthropogenic Stress on Groundwater Resources.

    Science.gov (United States)

    Ashraf, Batool; AghaKouchak, Amir; Alizadeh, Amin; Mousavi Baygi, Mohammad; R Moftakhari, Hamed; Mirchi, Ali; Anjileli, Hassan; Madani, Kaveh

    2017-10-10

    This study explores a general framework for quantifying anthropogenic influences on groundwater budget based on normalized human outflow (h_out) and inflow (h_in). The framework is useful for sustainability assessment of groundwater systems and allows investigating the effects of different human water abstraction scenarios on the overall aquifer regime (e.g., depleted, natural flow-dominated, and human flow-dominated). We apply this approach to selected regions in the USA, Germany and Iran to evaluate the current aquifer regime. We subsequently present two scenarios of changes in human water withdrawals and return flow to the system (individually and combined). Results show that approximately one-third of the selected aquifers in the USA, and half of the selected aquifers in Iran are dominated by human activities, while the selected aquifers in Germany are natural flow-dominated. The scenario analysis results also show that reduced human withdrawals could help with regime change in some aquifers. For instance, in two of the selected USA aquifers, a decrease in anthropogenic influences by ~20% may change the condition of depleted regime to natural flow-dominated regime. We specifically highlight a trending threat to the sustainability of groundwater in northwest Iran and California, and the need for more careful assessment and monitoring practices as well as strict regulations to mitigate the negative impacts of groundwater overexploitation.
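
    As a rough illustration of how normalized human outflow and inflow can be turned into a regime label, the sketch below normalizes withdrawal and return flow by natural recharge and compares the result to a threshold of 1. Both the normalization and the threshold are assumptions for illustration; the study's exact definitions of h_out and h_in may differ.

    ```python
    def classify_aquifer_regime(withdrawal, return_flow, natural_recharge, threshold=1.0):
        """Toy regime label from normalized human outflow/inflow.

        h_out = withdrawal / natural recharge, h_in = return flow / natural recharge
        (the exact normalization and threshold used by the study are assumptions here).
        """
        h_out = withdrawal / natural_recharge
        h_in = return_flow / natural_recharge
        if h_out - h_in > threshold:
            return "human flow-dominated (depleting)"
        if max(h_out, h_in) > threshold:
            return "human flow-dominated"
        return "natural flow-dominated"

    # Illustrative annual volumes (same units, e.g. km^3/yr) for one aquifer.
    print(classify_aquifer_regime(withdrawal=8.0, return_flow=2.0, natural_recharge=5.0))
    ```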

  11. Quantifying Supply Risk at a Cellulosic Biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jason K [Idaho National Laboratory; Jacobson, Jacob Jordan [Idaho National Laboratory; Cafferty, Kara Grace [Idaho National Laboratory; Lamers, Patrick [Idaho National Laboratory; Roni, MD S [Idaho National Laboratory

    2015-03-01

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., which is already at its maximum. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk that faces a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships of underlying, uncertain, supply driving variables. Using the model, the paper quantifies supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing desired growth.

  12. Quantifying Supply Risk at a Cellulosic Biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jason K.; Jacobson, Jacob J.; Cafferty, Kara G.; Lamers, Patrick; Roni, Mohammad S.

    2015-07-01

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., which is already at its maximum. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk that faces a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships of underlying, uncertain, supply driving variables. Using the model, the paper quantifies supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing desired growth.

  13. Quantifying and Mapping Global Data Poverty.

    Directory of Open Access Journals (Sweden)

    Mathias Leidig

    Full Text Available Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.

  14. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. The means to find and identify reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have available data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment is often the first and only place checked. A common philosophy is "I have a number - that is good enough". But, is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  15. A Tensor Statistical Model for Quantifying Dynamic Functional Connectivity.

    Science.gov (United States)

    Zhu, Yingying; Zhu, Xiaofeng; Kim, Minjeong; Yan, Jin; Wu, Guorong

    2017-06-01

    Functional connectivity (FC) has been widely investigated in many imaging-based neuroscience and clinical studies. Since the functional Magnetic Resonance Imaging (MRI) signal is just an indirect reflection of brain activity, it is difficult to accurately quantify the FC strength only based on signal correlation. To address this limitation, we propose a learning-based tensor model to derive high sensitivity and specificity connectome biomarkers at the individual level from resting-state fMRI images. First, we propose a learning-based approach to estimate the intrinsic functional connectivity. In addition to the low level region-to-region signal correlation, latent module-to-module connection is also estimated and used to provide high level heuristics for measuring connectivity strength. Furthermore, a sparsity constraint is employed to automatically remove the spurious connections, thus alleviating the issue of searching for an optimal threshold. Second, we integrate our learning-based approach with the sliding-window technique to further reveal the dynamics of functional connectivity. Specifically, we stack the functional connectivity matrix within each sliding window and form a 3D tensor where the third dimension denotes time. Then we obtain dynamic functional connectivity (dFC) for each individual subject by simultaneously estimating the within-sliding-window functional connectivity and characterizing the across-sliding-window temporal dynamics. Third, in order to enhance the robustness of the connectome patterns extracted from dFC, we extend the individual-based 3D tensors to a population-based 4D tensor (with the fourth dimension standing for the training subjects) and learn the statistics of connectome patterns via 4D tensor analysis. Since our 4D tensor model jointly (1) optimizes dFC for each training subject and (2) captures the principal connectome patterns, our statistical model gains more statistical power of representing a new subject than current state
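
    The sliding-window construction described above, stacking a region-by-region correlation matrix for each window along a third (time) axis, can be sketched in a few lines. The window length, step size and random data below are illustrative only and not the study's settings.

    ```python
    import numpy as np

    def dynamic_fc_tensor(bold, window, step):
        """Stack within-window region-by-region correlation matrices into a 3D tensor.

        bold: array of shape (T, R), T time points by R brain regions.
        Returns an array of shape (R, R, n_windows); the third axis is time.
        """
        T, R = bold.shape
        starts = range(0, T - window + 1, step)
        fc = [np.corrcoef(bold[s:s + window].T) for s in starts]
        return np.stack(fc, axis=-1)

    rng = np.random.default_rng(1)
    bold = rng.standard_normal((200, 10))          # toy resting-state time series
    tensor = dynamic_fc_tensor(bold, window=40, step=10)
    print(tensor.shape)                            # (10, 10, 17)
    ```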

  16. Quantifying Riverscape Connectivity with Graph Theory

    Science.gov (United States)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably, in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries, dams are actually being removed. However, this is not the case in the developing world where hydropower is seen as a key element to low-emissions power-security. For example, several dam projects are envisaged in Himalayan catchments in the next 2 decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the
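
    A minimal example of borrowing a landscape-ecology style, graph-theoretic connectivity index for a river network: model reaches as graph edges and measure the fraction of node pairs that remain mutually reachable before and after a barrier (e.g., a dam) is inserted. The equal weighting of junctions and the specific index are simplifying assumptions, not the metrics adapted in the paper.

    ```python
    import networkx as nx

    def connected_pair_fraction(G):
        """Fraction of node pairs that can still reach each other.

        A simple, area-free analogue of landscape-ecology connectivity indices
        (assumption: each junction is weighted equally).
        """
        n = G.number_of_nodes()
        pairs = n * (n - 1) / 2
        reachable = sum(len(c) * (len(c) - 1) / 2 for c in nx.connected_components(G))
        return reachable / pairs

    # Toy dendritic network: edges are river reaches between junctions.
    reaches = [(1, 2), (2, 3), (3, 4), (2, 5), (5, 6), (3, 7)]
    river = nx.Graph(reaches)
    print("before dam:", connected_pair_fraction(river))

    dammed = river.copy()
    dammed.remove_edge(2, 3)        # hypothetical dam blocks passage on this reach
    print("after dam: ", connected_pair_fraction(dammed))
    ```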

  17. Quantifying human vitamin kinetics using AMS

    Energy Technology Data Exchange (ETDEWEB)

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population are available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  18. Quantifying Sentiment and Influence in Blogspaces

    Energy Technology Data Exchange (ETDEWEB)

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

    The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as a sort of informal collection of mass sentiment and opinion. An obvious topic of interest might be to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics poses some serious challenges when any meaningful analysis is attempted. Namely, the fact that largely anyone with access to the Internet can author their own blog, raises the serious issue of credibility— should some blogs be considered to be more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on almost a constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific endgoal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.
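
    The end product described above, a per-topic, influence-weighted sentiment measure, reduces in its simplest form to a weighted average of post sentiments. The sketch below illustrates that reduction with hypothetical posts and weights; the paper's formal model (topic hierarchy, efficient incremental updates) is richer than this.

    ```python
    def topic_sentiment(posts):
        """Influence-weighted mean sentiment per topic.

        posts: iterable of (topic, sentiment in [-1, 1], blog influence weight).
        The weighting scheme is a simplified stand-in for the formal model in the paper.
        """
        totals, weights = {}, {}
        for topic, sentiment, influence in posts:
            totals[topic] = totals.get(topic, 0.0) + influence * sentiment
            weights[topic] = weights.get(topic, 0.0) + influence
        return {t: totals[t] / weights[t] for t in totals}

    posts = [
        ("energy", +0.6, 3.0),   # influential blog, positive post
        ("energy", -0.2, 1.0),   # low-influence blog, mildly negative
        ("health", +0.1, 2.0),
    ]
    print(topic_sentiment(posts))   # {'energy': 0.4, 'health': 0.1}
    ```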

  19. Quantifying antimicrobial resistance at veal calf farms.

    Directory of Open Access Journals (Sweden)

    Angela B Bosman

    Full Text Available This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From each individual sample and one pooled faecal sample per farm, 90 selected Escherichia coli isolates were tested for their resistance against 25 mg/L amoxicillin, 25 mg/L tetracycline, 0.5 mg/L cefotaxime, 0.125 mg/L ciprofloxacin and 8/152 mg/L trimethoprim/sulfamethoxazole (tmp/s) by replica plating. From each faecal sample another 10 selected E. coli isolates were tested for their resistance by broth microdilution as a reference. Logistic regression analysis was performed to compare the odds of testing an isolate resistant between both test methods (replica plating vs. broth microdilution) and to evaluate the effect of pooling faecal samples. Bootstrap analysis was used to investigate the precision of the estimated prevalence of resistance to each antimicrobial obtained by several simulated sampling strategies. Replica plating showed similar odds of E. coli isolates tested resistant compared to broth microdilution, except for ciprofloxacin (OR 0.29, p ≤ 0.05). Pooled samples showed in general lower odds of an isolate being resistant compared to individual samples, although these differences were not significant. Bootstrap analysis showed that within each antimicrobial the various compositions of a pooled sample provided consistent estimates for the mean proportion of resistant isolates. Sampling strategies should be based on the variation in resistance among isolates within faecal samples and between faecal samples, which may vary by antimicrobial. In our study, the optimal sampling strategy from the perspective of precision of the estimated levels of resistance and practicality consists of a pooled faecal sample from 20 individual animals, of which
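
    The bootstrap step, estimating the precision of a resistance prevalence by repeatedly resampling tested isolates, can be illustrated as follows. The isolate counts are hypothetical, and resampling is done at the isolate level only, which is a simplification of the study's sampling-strategy simulations.

    ```python
    import numpy as np

    def bootstrap_prevalence_ci(is_resistant, n_boot=5000, seed=0):
        """Percentile bootstrap CI for the proportion of resistant isolates.

        is_resistant: 0/1 array, one entry per tested E. coli isolate.
        """
        rng = np.random.default_rng(seed)
        data = np.asarray(is_resistant)
        boots = rng.choice(data, size=(n_boot, data.size), replace=True).mean(axis=1)
        return data.mean(), np.percentile(boots, [2.5, 97.5])

    # 90 isolates from a hypothetical pooled sample, 36 resistant to tetracycline.
    isolates = np.array([1] * 36 + [0] * 54)
    mean, (lo, hi) = bootstrap_prevalence_ci(isolates)
    print(f"prevalence = {mean:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
    ```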

  20. Quantifying the Clinical Significance of Cannabis Withdrawal

    Science.gov (United States)

    Allsop, David J.; Copeland, Jan; Norberg, Melissa M.; Fu, Shanlin; Molnar, Anna; Lewis, John; Budney, Alan J.

    2012-01-01

    Background and Aims Questions over the clinical significance of cannabis withdrawal have hindered its inclusion as a discrete cannabis induced psychiatric condition in the Diagnostic and Statistical Manual of Mental Disorders (DSM IV). This study aims to quantify functional impairment to normal daily activities from cannabis withdrawal, and looks at the factors predicting functional impairment. In addition the study tests the influence of functional impairment from cannabis withdrawal on cannabis use during and after an abstinence attempt. Methods and Results A volunteer sample of 49 non-treatment seeking cannabis users who met DSM-IV criteria for dependence provided daily withdrawal-related functional impairment scores during a one-week baseline phase and two weeks of monitored abstinence from cannabis with a one month follow up. Functional impairment from withdrawal symptoms was strongly associated with symptom severity (p = 0.0001). Participants with more severe cannabis dependence before the abstinence attempt reported greater functional impairment from cannabis withdrawal (p = 0.03). Relapse to cannabis use during the abstinence period was associated with greater functional impairment from a subset of withdrawal symptoms in high dependence users. Higher levels of functional impairment during the abstinence attempt predicted higher levels of cannabis use at one month follow up (p = 0.001). Conclusions Cannabis withdrawal is clinically significant because it is associated with functional impairment to normal daily activities, as well as relapse to cannabis use. Sample size in the relapse group was small and the use of a non-treatment seeking population requires findings to be replicated in clinical samples. Tailoring treatments to target withdrawal symptoms contributing to functional impairment during a quit attempt may improve treatment outcomes. PMID:23049760

  1. Quantifying seasonal velocity at Khumbu Glacier, Nepal

    Science.gov (United States)

    Miles, E.; Quincey, D. J.; Miles, K.; Hubbard, B. P.; Rowan, A. V.

    2017-12-01

    While the low-gradient debris-covered tongues of many Himalayan glaciers exhibit low surface velocities, quantifying ice flow and its variation through time remains a key challenge for studies aimed at determining the long-term evolution of these glaciers. Recent work has suggested that glaciers in the Everest region of Nepal may show seasonal variability in surface velocity, with ice flow peaking during the summer as monsoon precipitation provides hydrological inputs and thus drives changes in subglacial drainage efficiency. However, satellite and aerial observations of glacier velocity during the monsoon are greatly limited due to cloud cover. Those that do exist do not span the period over which the most dynamic changes occur, and consequently short-term (i.e. daily) changes in flow, as well as the evolution of ice dynamics through the monsoon period, remain poorly understood. In this study, we combine field and remote (satellite image) observations to create a multi-temporal, 3D synthesis of ice deformation rates at Khumbu Glacier, Nepal, focused on the 2017 monsoon period. We first determine net annual and seasonal surface displacements for the whole glacier based on Landsat-8 (OLI) panchromatic data (15m) processed with ImGRAFT. We integrate inclinometer observations from three boreholes drilled by the EverDrill project to determine cumulative deformation at depth, providing a 3D perspective and enabling us to assess the role of basal sliding at each site. We additionally analyze high-frequency on-glacier L1 GNSS data from three sites to characterize variability within surface deformation at sub-seasonal timescales. Finally, each dataset is validated against repeat-dGPS observations at gridded points in the vicinity of the boreholes and GNSS dataloggers. These datasets complement one another to infer thermal regime across the debris-covered ablation area of the glacier, and emphasize the seasonal and spatial variability of ice deformation for glaciers in High

  2. Quantifying collective attention from tweet stream.

    Directory of Open Access Journals (Sweden)

    Kazutoshi Sasahara

    Full Text Available Online social media are increasingly facilitating our social interactions, thereby making available a massive "digital fossil" of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of "collective attention" on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or "tweets." Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attentions, including one related to the Tohoku-oki earthquake. "Retweet" networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era.
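
    The central quantity, the Jensen-Shannon divergence between a regular (circadian) tweet-rate distribution and the currently observed one, can be computed directly from hourly counts. The hourly histograms below are made up for illustration; the paper's exact binning and baseline definition may differ.

    ```python
    import numpy as np

    def jensen_shannon(p, q, eps=1e-12):
        """Jensen-Shannon divergence (base 2) between two count histograms."""
        p = np.asarray(p, float) + eps
        q = np.asarray(q, float) + eps
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log2(a / b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Hourly tweet counts: a regular circadian day vs. a day with a burst at hour 14.
    regular = np.array([2, 1, 1, 1, 1, 2, 4, 6, 8, 9, 9, 9, 9, 9, 9, 9, 9, 10,
                        11, 12, 12, 10, 6, 3])
    burst = regular.copy()
    burst[14] += 80
    print(f"JSD (burst day vs. baseline) = {jensen_shannon(regular, burst):.3f}")
    ```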

  3. Quantifying the clinical significance of cannabis withdrawal.

    Directory of Open Access Journals (Sweden)

    David J Allsop

    Full Text Available Questions over the clinical significance of cannabis withdrawal have hindered its inclusion as a discrete cannabis induced psychiatric condition in the Diagnostic and Statistical Manual of Mental Disorders (DSM IV). This study aims to quantify functional impairment to normal daily activities from cannabis withdrawal, and looks at the factors predicting functional impairment. In addition the study tests the influence of functional impairment from cannabis withdrawal on cannabis use during and after an abstinence attempt. A volunteer sample of 49 non-treatment seeking cannabis users who met DSM-IV criteria for dependence provided daily withdrawal-related functional impairment scores during a one-week baseline phase and two weeks of monitored abstinence from cannabis with a one month follow up. Functional impairment from withdrawal symptoms was strongly associated with symptom severity (p = 0.0001). Participants with more severe cannabis dependence before the abstinence attempt reported greater functional impairment from cannabis withdrawal (p = 0.03). Relapse to cannabis use during the abstinence period was associated with greater functional impairment from a subset of withdrawal symptoms in high dependence users. Higher levels of functional impairment during the abstinence attempt predicted higher levels of cannabis use at one month follow up (p = 0.001). Cannabis withdrawal is clinically significant because it is associated with functional impairment to normal daily activities, as well as relapse to cannabis use. Sample size in the relapse group was small and the use of a non-treatment seeking population requires findings to be replicated in clinical samples. Tailoring treatments to target withdrawal symptoms contributing to functional impairment during a quit attempt may improve treatment outcomes.

  4. Quantifying motion for pancreatic radiotherapy margin calculation

    International Nuclear Information System (INIS)

    Whitfield, Gillian; Jain, Pooja; Green, Melanie; Watkins, Gillian; Henry, Ann; Stratford, Julie; Amer, Ali; Marchant, Thomas; Moore, Christopher; Price, Patricia

    2012-01-01

    Background and purpose: Pancreatic radiotherapy (RT) is limited by uncertain target motion. We quantified 3D patient/organ motion during pancreatic RT and calculated required treatment margins. Materials and methods: Cone-beam computed tomography (CBCT) and orthogonal fluoroscopy images were acquired post-RT delivery from 13 patients with locally advanced pancreatic cancer. Bony setup errors were calculated from CBCT. Inter- and intra-fraction fiducial (clip/seed/stent) motion was determined from CBCT projections and orthogonal fluoroscopy. Results: Using an off-line CBCT correction protocol, systematic (random) setup errors were 2.4 (3.2), 2.0 (1.7) and 3.2 (3.6) mm laterally (left–right), vertically (anterior–posterior) and longitudinally (cranio-caudal), respectively. Fiducial motion varied substantially. Random inter-fractional changes in mean fiducial position were 2.0, 1.6 and 2.6 mm; 95% of intra-fractional peak-to-peak fiducial motion was up to 6.7, 10.1 and 20.6 mm, respectively. Calculated clinical to planning target volume (CTV–PTV) margins were 1.4 cm laterally, 1.4 cm vertically and 3.0 cm longitudinally for 3D conformal RT, reduced to 0.9, 1.0 and 1.8 cm, respectively, if using 4D planning and online setup correction. Conclusions: Commonly used CTV–PTV margins may inadequately account for target motion during pancreatic RT. Our results indicate better immobilisation, individualised allowance for respiratory motion, online setup error correction and 4D planning would improve targeting.
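
    For readers who want a feel for such a margin calculation, the widely cited van Herk recipe (CTV–PTV margin ≈ 2.5Σ + 0.7σ) applied to the bony setup errors quoted above gives the values below. This recipe is an assumption here, it covers setup error only, and the study's own margins additionally account for fiducial and respiratory motion, which is why they are larger.

    ```python
    def ctv_ptv_margin(systematic_mm, random_mm):
        """Population margin via the van Herk recipe, M = 2.5*Sigma + 0.7*sigma.

        Assumption for illustration; the study's own margin calculation may differ,
        e.g. by adding target-motion terms on top of setup error.
        """
        return 2.5 * systematic_mm + 0.7 * random_mm

    # Bony setup errors from the abstract: systematic (random), in mm, per axis.
    for axis, sigma_sys, sigma_rand in [("LR", 2.4, 3.2), ("AP", 2.0, 1.7), ("CC", 3.2, 3.6)]:
        print(f"{axis}: setup-error-only margin = {ctv_ptv_margin(sigma_sys, sigma_rand):.1f} mm")
    ```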

  5. Basic digital signal processing

    CERN Document Server

    Lockhart, Gordon B

    1985-01-01

    Basic Digital Signal Processing describes the principles of digital signal processing and experiments with BASIC programs involving the fast Fourier transform (FFT). The book reviews the fundamentals of the BASIC program, continuous and discrete time signals including analog signals, Fourier analysis, the discrete Fourier transform, signal energy, and power. The text also explains digital signal processing involving digital filters, linear time-variant systems, discrete time unit impulse, discrete-time convolution, and the alternative structure for second order infinite impulse response (IIR) sections.
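
    The book's experiments are written in BASIC; as a modern stand-in, the same idea of resolving a sampled signal into its frequency components with the FFT looks like this in Python (the signal frequencies and sampling rate are arbitrary examples).

    ```python
    import numpy as np

    fs = 1000                                   # sampling rate, Hz
    t = np.arange(0, 1, 1 / fs)                 # one second of samples
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.fft.rfft(x)                   # FFT of the real-valued signal
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    magnitude = np.abs(spectrum) * 2 / x.size   # single-sided amplitude estimate

    peaks = sorted(freqs[np.argsort(magnitude)[-2:]])
    print(f"dominant components near {peaks} Hz")   # ~[50.0, 120.0]
    ```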

  6. Quantifying forest mortality with the remote sensing of snow

    Science.gov (United States)

    Baker, Emily Hewitt

    Greenhouse gas emissions have altered global climate significantly, increasing the frequency of drought, fire, and pest-related mortality in forests across the western United States, with increasing area affected each year. Associated changes in forests are of great concern for the public, land managers, and the broader scientific community. These increased stresses have resulted in a widespread, spatially heterogeneous decline of forest canopies, which in turn exerts strong controls on the accumulation and melt of the snowpack, and changes forest-atmosphere exchanges of carbon, water, and energy. Most satellite-based retrievals of summer-season forest data are insufficient to quantify canopy, as opposed to the combination of canopy and undergrowth, since the signals of the two types of vegetation greenness have proven persistently difficult to distinguish. To overcome this issue, this research develops a method to quantify forest canopy cover using winter-season fractional snow covered area (FSCA) data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow covered area and grain size (MODSCAG) algorithm. In areas where the ground surface and undergrowth are completely snow-covered, a pixel comprises only forest canopy and snow. Following a snowfall event, FSCA initially rises, as snow is intercepted in the canopy, and then falls, as snow unloads. A select set of local minima in a winter FSCA timeseries form a threshold where canopy is snow-free, but forest understory is snow-covered. This serves as a spatially-explicit measurement of forest canopy, and viewable gap fraction (VGF) on a yearly basis. Using this method, we determine that MODIS-observed VGF is significantly correlated with an independent product of yearly crown mortality derived from spectral analysis of Landsat imagery at 25 high-mortality sites in northern Colorado (r = 0.96 ± 0.03, p = 0.03). Additionally, we determine the lag timing between green-stage tree mortality and
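
    The key step, selecting local minima of a winter FSCA time series as moments when the canopy has unloaded its snow but the understory is still snow-covered, can be sketched as below. The toy series, the minimum-selection rule and the use of a median are illustrative simplifications, not the paper's exact MODSCAG-based procedure.

    ```python
    import numpy as np
    from scipy.signal import argrelmin

    def viewable_gap_fraction(fsca, order=3):
        """Estimate viewable gap fraction from a winter FSCA time series.

        Take local minima (canopy snow has unloaded, understory still snow-covered)
        and report their median as the canopy-free fraction of the pixel.
        """
        fsca = np.asarray(fsca, float)
        minima = argrelmin(fsca, order=order)[0]
        return float(np.median(fsca[minima])) if minima.size else np.nan

    # Toy pixel: FSCA jumps after each snowfall (canopy interception), then decays.
    fsca = np.array([0.55, 0.70, 0.62, 0.56, 0.54, 0.80, 0.68, 0.57, 0.55, 0.75,
                     0.66, 0.58, 0.54, 0.55])
    print(f"viewable gap fraction ~ {viewable_gap_fraction(fsca):.2f}")
    ```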

  7. A Methodological Approach to Quantifying Plyometric Intensity.

    Science.gov (United States)

    Jarvis, Mark M; Graham-Smith, Phil; Comfort, Paul

    2016-09-01

    Jarvis, MM, Graham-Smith, P, and Comfort, P. A Methodological approach to quantifying plyometric intensity. J Strength Cond Res 30(9): 2522-2532, 2016-In contrast to other methods of training, the quantification of plyometric exercise intensity is poorly defined. The purpose of this study was to evaluate the suitability of a range of neuromuscular and mechanical variables to describe the intensity of plyometric exercises. Seven male recreationally active subjects performed a series of 7 plyometric exercises. Neuromuscular activity was measured using surface electromyography (SEMG) at vastus lateralis (VL) and biceps femoris (BF). Surface electromyography data were divided into concentric (CON) and eccentric (ECC) phases of movement. Mechanical output was measured by ground reaction forces and processed to provide peak impact ground reaction force (PF), peak eccentric power (PEP), and impulse (IMP). Statistical analysis was conducted to assess the reliability (intraclass correlation coefficient) and sensitivity (smallest detectable difference) of all variables. Mean values of SEMG demonstrate high reliability (r ≥ 0.82), excluding ECC VL during a 40-cm drop jump (r = 0.74). PF, PEP, and IMP demonstrated high reliability (r ≥ 0.85). Statistical power for force variables was excellent (power = 1.0), and good for SEMG (power ≥ 0.86) excluding CON BF (power = 0.57). There was no significant difference (p > 0.05) in CON SEMG between exercises. Eccentric phase SEMG only distinguished between exercises involving a landing and those that did not (percentage of maximal voluntary isometric contraction [%MVIC] = no landing -65 ± 5, landing -140 ± 8). Peak eccentric power, PF, and IMP all distinguished between exercises. In conclusion, CON neuromuscular activity does not appear to vary when intent is maximal, whereas ECC activity is dependent on the presence of a landing. Force characteristics provide a reliable and sensitive measure enabling precise description of intensity

  8. Quantifying Permafrost Characteristics with DCR-ERT

    Science.gov (United States)

    Schnabel, W.; Trochim, E.; Munk, J.; Kanevskiy, M. Z.; Shur, Y.; Fortier, R.

    2012-12-01

    Geophysical methods are an efficient means of quantifying permafrost characteristics for Arctic road design and engineering. In the Alaskan Arctic, construction and maintenance of roads requires integration of permafrost: ground that is below 0 degrees C for two or more years. Features such as ice content and temperature are critical for understanding current and future ground conditions for planning, design and evaluation of engineering applications. This study focused on the proposed Foothills West Transportation Access project corridor where the purpose is to construct a new all-season road connecting the Dalton Highway to Umiat. Four major areas were chosen that represented a range of conditions including gravel bars, alluvial plains, tussock tundra (both unburned and burned conditions), high and low centered ice-wedge polygons and an active thermokarst feature. Direct-current resistivity using galvanic contact (DCR-ERT) was applied over transects. In conjunction, complementary site data including boreholes, active layer depths, vegetation descriptions and site photographs was obtained. The boreholes provided information on soil morphology, ice texture and gravimetric moisture content. Horizontal and vertical resolutions in the DCR-ERT were varied to determine the presence or absence of ground ice; subsurface heterogeneity; and the depth to groundwater (if present). The four main DCR-ERT methods used were: 84 electrodes with 2 m spacing; 42 electrodes with 0.5 m spacing; 42 electrodes with 2 m spacing; and 84 electrodes with 1 m spacing. In terms of identifying the ground ice characteristics, the higher horizontal resolution DCR-ERT transects with either 42 or 84 electrodes and 0.5 or 1 m spacing were best able to differentiate wedge-ice. This evaluation is based on a combination of both borehole stratigraphy and surface characteristics. Simulated apparent resistivity values for permafrost areas varied from a low of 4582 Ω m to a high of 10034 Ω m. Previous

  9. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
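
    The displacement simulation described above, drawing a random distance from the observed location-error distribution and a uniform random angle and then checking whether the displaced point falls in a different areal unit, can be re-created schematically as follows. The error distribution, the 1 km grid standing in for census tracts, and the planar coordinates are all assumptions for illustration.

    ```python
    import numpy as np

    def displace(x, y, error_distances_m, rng):
        """Displace points by a random angle and a distance drawn from the
        observed geocode location-error distribution (planar coords, metres)."""
        d = rng.choice(error_distances_m, size=x.size, replace=True)
        theta = rng.uniform(0, 2 * np.pi, size=x.size)
        return x + d * np.cos(theta), y + d * np.sin(theta)

    rng = np.random.default_rng(7)
    observed_errors = rng.lognormal(np.log(80), 0.8, size=599)   # hypothetical, median ~80 m
    x = rng.uniform(0, 20_000, 5000)            # 5000 simulated geocodes in a 20 km square
    y = rng.uniform(0, 20_000, 5000)
    xd, yd = displace(x, y, observed_errors, rng)

    # Stand-in for census tracts: a 1 km grid; count geocodes pushed into another cell.
    cell = lambda a, b: (np.floor(a / 1000), np.floor(b / 1000))
    moved = np.mean((cell(x, y)[0] != cell(xd, yd)[0]) | (cell(x, y)[1] != cell(xd, yd)[1]))
    print(f"{moved:.1%} of simulated geocodes resolved into a different grid cell")
    ```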

  10. Entropy generation method to quantify thermal comfort

    Science.gov (United States)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

    This paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations to examine the human thermal responses to different environmental conditions. The output from the simulation, which includes human thermal responses, and the input data consisting of environmental conditions are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression based Predicted Mean Vote (PMV) values. The PMV values are generated by applying the corresponding air temperatures and vapor pressures used in the computer simulation to the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study
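
    The entropy generation term at the heart of the approach follows from the second law applied to heat leaving the body at skin temperature and arriving in the environment at air temperature. The sketch below is a minimal, assumed form of that bookkeeping, ignoring the evaporative, radiative and respiratory terms that the full KSU-based model includes.

    ```python
    def entropy_generation(q_skin_w, t_skin_c, t_air_c):
        """Entropy generated by sensible heat exchange between body and surroundings,
        S_gen = Q * (1/T_env - 1/T_body), in W/K (illustrative simplification)."""
        t_skin = t_skin_c + 273.15
        t_air = t_air_c + 273.15
        return q_skin_w * (1.0 / t_air - 1.0 / t_skin)

    # ~100 W of metabolic heat rejected from 34 C skin to 22 C vs. 10 C air:
    # a larger S_gen corresponds to a less comfortable, more strongly driven exchange.
    for t_air in (22.0, 10.0):
        s_gen = entropy_generation(100.0, 34.0, t_air)
        print(f"T_air = {t_air:>4} C  ->  S_gen = {s_gen * 1000:.1f} mW/K")
    ```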

  11. Quantifying emission reduction contributions by emerging economies

    Energy Technology Data Exchange (ETDEWEB)

    Moltmann, Sara; Hagemann, Markus; Eisbrenner, Katja; Hoehne, Niklas [Ecofys GmbH, Koeln (Germany); Sterk, Wolfgang; Mersmann, Florian; Ott, Hermann E.; Watanabe, Rie [Wuppertal Institut (Germany)

    2011-04-15

    Further action is needed that goes far beyond what has been agreed so far under the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol to 'prevent dangerous anthropogenic interference with the climate system', the ultimate objective of the UNFCCC. It is out of question that developed countries (Annex I countries) will have to take a leading role. They will have to commit to substantial emission reductions and financing commitments due to their historical responsibility and their financial capability. However, the stabilisation of the climate system will require global emissions to peak within the next decade and decline well below half of current levels by the middle of the century. It is hence a global issue and, thus, depends on the participation of as many countries as possible. This report provides a comparative analysis of greenhouse gas (GHG) emissions, including their national climate plans, of the major emitting developing countries Brazil, China, India, Mexico, South Africa and South Korea. It includes an overview of emissions and economic development, existing national climate change strategies, uses a consistent methodology for estimating emission reduction potential, costs of mitigation options, provides an estimate of the reductions to be achieved through the national climate plans and finally provides a comparison of the results to the allocation of emission rights according to different global effort-sharing approaches. In addition, the report discusses possible nationally appropriate mitigation actions (NAMAs) the six countries could take based on the analysis of mitigation options. This report is an output of the project 'Proposals for quantifying emission reduction contributions by emerging economies' by Ecofys and the Wuppertal Institute for the Federal Environment Agency in Dessau. It builds upon earlier joint work ''Proposals for contributions of emerging economies to the climate

  12. Quantifying the impacts of global disasters

    Science.gov (United States)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, large but not the

  13. Quantifier spreading in child eye movements: A case of the Russian quantifier kazhdyj ‘every'

    Directory of Open Access Journals (Sweden)

    Irina A. Sekerina

    2017-07-01

    Full Text Available Extensive cross-linguistic work has documented that children up to the age of 9–10 make errors when performing a sentence-picture verification task that pairs spoken sentences with the universal quantifier 'every' and pictures with entities in partial one-to-one correspondence. These errors stem from children’s difficulties in restricting the domain of a universal quantifier to the appropriate noun phrase and are referred to in the literature as 'quantifier-spreading' ('q'-spreading). We adapted the task to be performed in conjunction with eye-movement recordings using the Visual World Paradigm. Russian-speaking 5-to-6-year-old children ('N' = 31) listened to sentences like 'Kazhdyj alligator lezhit v vanne' ‘Every alligator is lying in a bathtub’ and viewed pictures with three alligators, each in a bathtub, and two extra empty bathtubs. Non-spreader children ('N' = 12) were adult-like in their accuracy whereas 'q'-spreading ones ('N' = 19) were only 43% correct in interpreting such sentences compared to the control sentences. Eye movements of 'q'-spreading children revealed that more looks to the extra containers (the two empty bathtubs) correlated with higher error rates, reflecting the processing pattern of 'q'-spreading. In contrast, more looks to the distractors in control sentences did not lead to errors in interpretation. We argue that 'q'-spreading errors are caused by interference from the extra entities in the visual context, and our results support the processing difficulty account of the acquisition of quantification. Interference results in cognitive overload as children have to integrate multiple sources of information, i.e., the visual context with salient extra entities and the spoken sentence in which these entities are mentioned, in real-time processing. This article is part of the special collection: Acquisition of Quantification

  14. Phase synchronization of instrumental music signals

    Science.gov (United States)

    Mukherjee, Sayan; Palit, Sanjay Kumar; Banerjee, Santo; Ariffin, M. R. K.; Bhattacharya, D. K.

    2014-06-01

    Signal analysis is one of the finest scientific techniques in communication theory. A number of quantitative and qualitative measures describe the pattern of a music signal and vary from one signal to another. The same musical recital, when played by different instrumentalists, generates different music patterns. The reason is that the psycho-acoustic measures - Dynamics, Timbre, Tonality and Rhythm - vary in each case. However, a psycho-acoustic study of the music signals alone reveals little about the similarity between the signals. For such cases, a study of the synchronization of their long-term nonlinear dynamics may provide effective results. In this context, phase synchronization (PS) is one measure of synchronization between two non-identical signals. In fact, it is difficult to investigate other kinds of synchronization under experimental conditions, because the signals are completely non-identical. Also, there exists an equivalence between the phases and the distances of the diagonal lines in the recurrence plot (RP) of the signals, which is quantifiable by the recurrence quantification measure τ-recurrence rate. This paper considers two nonlinear music signals based on the same raga played by two eminent sitar instrumentalists as two non-identical sources. The psycho-acoustic study shows how the Dynamics, Timbre, Tonality and Rhythm vary for the two music signals. Then, long-term analysis in the form of phase space reconstruction is performed, which reveals chaotic phase spaces for both signals. From the RPs of both phase spaces, the τ-recurrence rate is calculated. Finally, the correlation between the normalized τ-recurrence rates of their 3D phase spaces and the PS of the two music signals is established. The numerical results well support the analysis.
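
    To make the recurrence quantity used above concrete, the following minimal Python sketch (embedding, threshold and data are hypothetical, not the authors' settings) computes a recurrence plot from an already reconstructed phase-space trajectory and the τ-recurrence rate along its displaced diagonals.

        import numpy as np

        def recurrence_matrix(traj, eps):
            """Binary recurrence plot: R[i, j] = 1 if ||x_i - x_j|| < eps."""
            d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
            return (d < eps).astype(int)

        def tau_recurrence_rate(R, tau):
            """Fraction of recurrent points on the diagonal displaced by lag tau."""
            n = R.shape[0]
            return R[np.arange(n - tau), np.arange(tau, n)].mean()

        # Illustrative use with two reconstructed 3D trajectories x and y (hypothetical arrays):
        # rr_x = [tau_recurrence_rate(recurrence_matrix(x, eps=0.1), t) for t in range(1, 200)]
        # rr_y = [tau_recurrence_rate(recurrence_matrix(y, eps=0.1), t) for t in range(1, 200)]
        # np.corrcoef(rr_x, rr_y)[0, 1]   # correlate the normalized tau-recurrence rate profiles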

  15. Retroactive signaling in short signaling pathways.

    Directory of Open Access Journals (Sweden)

    Jacques-Alexandre Sepulchre

    Full Text Available In biochemical signaling pathways without explicit feedback connections, the core signal transduction is usually described as a one-way communication, going from upstream to downstream in a feedforward chain or network of covalent modification cycles. In this paper we explore the possibility of a new type of signaling called retroactive signaling, offered by the recently demonstrated property of retroactivity in signaling cascades. The possibility of retroactive signaling is analysed in the simplest case of the stationary states of a bicyclic cascade of signaling cycles. In this case, we work out the conditions for which variables of the upstream cycle are affected by a change of the total amount of protein in the downstream cycle, or by a variation of the phosphatase deactivating the same protein. Particularly, we predict the characteristic ranges of the downstream protein, or of the downstream phosphatase, for which a retroactive effect can be observed on the upstream cycle variables. Next, we extend the possibility of retroactive signaling in short but nonlinear signaling pathways involving a few covalent modification cycles.

  16. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    International Nuclear Information System (INIS)

    Van Woesik, R

    2013-01-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change. (letter)
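
    As a rough illustration of the kind of recovery model described above (a sketch only, not the study's implementation), the following Python snippet fits a logistic recovery curve for coral cover, with intrinsic rate of increase r and steady-state cover K, using a simple grid approximation to the posterior; the data, priors and error model are hypothetical.

        import numpy as np

        # Hypothetical post-disturbance coral cover observations (% cover vs. years).
        years = np.array([0, 1, 2, 3, 4, 5, 6])
        cover = np.array([5.0, 8.0, 13.0, 19.0, 25.0, 29.0, 31.0])

        def logistic(t, r, K, c0=cover[0]):
            """Logistic recovery from initial cover c0 with intrinsic rate r and equilibrium K."""
            return K * c0 * np.exp(r * t) / (K + c0 * (np.exp(r * t) - 1.0))

        # Grid approximation of the posterior over (r, K): flat priors, Gaussian errors.
        r_grid = np.linspace(0.05, 2.0, 200)
        K_grid = np.linspace(10.0, 80.0, 200)
        sigma = 2.0   # assumed observation error (% cover)
        log_post = np.zeros((r_grid.size, K_grid.size))
        for i, r in enumerate(r_grid):
            for j, K in enumerate(K_grid):
                resid = cover - logistic(years, r, K)
                log_post[i, j] = -0.5 * np.sum((resid / sigma) ** 2)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Posterior means of r and K serve as crude proxies of resilience.
        r_mean = (post.sum(axis=1) * r_grid).sum()
        K_mean = (post.sum(axis=0) * K_grid).sum()
        print(f"posterior mean r ~ {r_mean:.2f} per yr, K ~ {K_mean:.1f} % cover")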

  17. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    Science.gov (United States)

    van Woesik, R.

    2013-12-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change.

  18. Quantifying climate changes of the Common Era for Finland

    Science.gov (United States)

    Luoto, Tomi P.; Nevalainen, Liisa

    2017-10-01

    In this study, we aim to quantify summer air temperatures from sediment records from Southern, Central and Northern Finland over the past 2000 years. We use lake sediment archives to estimate paleotemperatures, applying fossil Chironomidae assemblages and the transfer function approach. The enhanced Chironomidae-based temperature calibration set used here was validated in a 70-year high-resolution sediment record against instrumentally measured temperatures. Since the inferred and observed temperatures showed close correlation, we deduced that the new calibration model is reliable for reconstructions beyond the monitoring records. The 700-year temperature reconstructions from three sites at multi-decadal temporal resolution showed similar trends, although they differed in the timing of the cold Little Ice Age (LIA) and the initiation of recent warming. The 2000-year multi-centennial reconstructions from three different sites resembled each other, with clear signals of the Medieval Climate Anomaly (MCA) and the LIA, but with differences in their timing. The influence of external forcing on the climate of the southern and central sites appeared to be complex at the decadal scale, but the North Atlantic Oscillation (NAO) was closely linked to the temperature development of the northern site. Solar activity appears to be synchronous with the temperature fluctuations at the multi-centennial scale at all sites. The present study provides new insights into centennial and decadal variability in air temperature dynamics in Northern Europe and into the external forcing behind these trends. These results are particularly useful for comparing regional responses and lags of temperature trends between different parts of Scandinavia.

  19. Using infrared thermography for understanding and quantifying soil surface processes

    Science.gov (United States)

    de Lima, João L. M. P.

    2017-04-01

    At present, our understanding of the soil hydrologic response is restricted by measurement limitations. In the literature, there have been repeated calls for interdisciplinary approaches to expand our knowledge in this field and eventually overcome the limitations that are inherent to the conventional measuring techniques used, for example, for tracing water at the basin, hillslope and even field or plot scales. Infrared thermography is a versatile, accurate and fast technique for monitoring surface temperature and has been used in a variety of fields, such as military surveillance, medical diagnosis, industrial process optimisation, building inspections and agriculture. However, many applications are still to be fully explored. In surface hydrology, it has been successfully employed as a high spatial and temporal resolution, non-invasive and non-destructive imaging tool to e.g. assess groundwater discharges into waterbodies or quantify thermal heterogeneities of streams. It is believed that thermal infrared imagery can capture the spatial and temporal variability of many processes at the soil surface. Thermography interprets heat signals and can provide an attractive means of identifying areas where water is flowing, has infiltrated more, or has accumulated temporarily in depressions or macropores. Therefore, we hope to demonstrate the potential of thermal infrared imagery for making indirect quantitative estimates of several hydrologic processes. Applications include e.g. mapping infiltration, microrelief and macropores; estimating flow velocities; defining sampling strategies; and identifying water sources, accumulation of water, or even connectivity. Protocols for the assessment of several hydrologic processes with the help of IR thermography will be briefly explained, presenting some examples from laboratory soil flumes and the field.

  20. Quantifying adaptive evolution in the Drosophila immune system.

    Directory of Open Access Journals (Sweden)

    Darren J Obbard

    2009-10-01

    Full Text Available It is estimated that a large proportion of amino acid substitutions in Drosophila have been fixed by natural selection, and as organisms are faced with an ever-changing array of pathogens and parasites to which they must adapt, we have investigated the role of parasite-mediated selection as a likely cause. To quantify the effect, and to identify which genes and pathways are most likely to be involved in the host-parasite arms race, we have re-sequenced population samples of 136 immunity and 287 position-matched non-immunity genes in two species of Drosophila. Using these data, and a new extension of the McDonald-Kreitman approach, we estimate that natural selection fixes advantageous amino acid changes in immunity genes at nearly double the rate of other genes. We find the rate of adaptive evolution in immunity genes is also more variable than other genes, with a small subset of immune genes evolving under intense selection. These genes, which are likely to represent hotspots of host-parasite coevolution, tend to share similar functions or belong to the same pathways, such as the antiviral RNAi pathway and the IMD signalling pathway. These patterns appear to be general features of immune system evolution in both species, as rates of adaptive evolution are correlated between the D. melanogaster and D. simulans lineages. In summary, our data provide quantitative estimates of the elevated rate of adaptive evolution in immune system genes relative to the rest of the genome, and they suggest that adaptation to parasites is an important force driving molecular evolution.
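
    For readers unfamiliar with the McDonald-Kreitman logic underlying the analysis, the following minimal sketch shows the classic estimator of the proportion of adaptive nonsynonymous substitutions (alpha); the counts are invented and the paper's own extension of the approach is more elaborate.

        def mk_alpha(Dn, Ds, Pn, Ps):
            """Classic McDonald-Kreitman estimate of the proportion of adaptive
            nonsynonymous substitutions: alpha = 1 - (Ds * Pn) / (Dn * Ps)."""
            return 1.0 - (Ds * Pn) / (Dn * Ps)

        # Hypothetical counts for one gene: divergence (D) and polymorphism (P),
        # nonsynonymous (n) and synonymous (s) sites.
        print(mk_alpha(Dn=40, Ds=60, Pn=15, Ps=45))   # 0.5, i.e. half of the substitutions adaptive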

  1. On the contrast between Germanic and Romance negated quantifiers

    OpenAIRE

    Robert Cirillo

    2009-01-01

    Universal quantifiers can be stranded in the manner described by Sportiche (1988), Giusti (1990) and Shlonsky (1991) in both the Romance and Germanic languages, but a negated universal quantifier can only be stranded in the Germanic languages. The goal of this paper is to show that this contrast between the Romance and the Germanic languages can be explained if one adapts the theory of sentential negation in Zeijlstra (2004) to constituent (quantifier) negation. According to Zeijlstra’s theor...

  2. Certain Verbs Are Syntactically Explicit Quantifiers

    Directory of Open Access Journals (Sweden)

    Anna Szabolcsi

    2010-12-01

    Full Text Available Quantification over individuals, times, and worlds can in principle be made explicit in the syntax of the object language, or left to the semantics and spelled out in the meta-language. The traditional view is that quantification over individuals is syntactically explicit, whereas quantification over times and worlds is not. But a growing body of literature proposes a uniform treatment. This paper examines the scopal interaction of aspectual raising verbs (begin), modals (can), and intensional raising verbs (threaten) with quantificational subjects in Shupamem, Dutch, and English. It appears that aspectual raising verbs and at least modals may undergo the same kind of overt or covert scope-changing operations as nominal quantifiers; the case of intensional raising verbs is less clear. Scope interaction is thus shown to be a new potential diagnostic of object-linguistic quantification, and the similarity in the scope behavior of nominal and verbal quantifiers supports the grammatical plausibility of ontological symmetry, explored in Schlenker (2006).

  3. Quantifying radionuclide signatures from a γ–γ coincidence system

    International Nuclear Information System (INIS)

    Britton, Richard; Jackson, Mark J.; Davies, Ashley V.

    2015-01-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ–γ system utilises fully digital electronics and list-mode acquisition to time–stamp each event, allowing coincidence matrices to be easily produced alongside typical ‘singles’ spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ–γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method, (it only relies on evaluated nuclear data, and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. - Highlights: • Monte-Carlo based software developed to easily create/update a coincidence signal library for environmental radionuclides. • Coincidence library utilised to accurately quantify gamma coincidence signatures. • All coincidence signature probabilities are corrected for cascade summing, conversion electron emission and pair production. • Key CTBTO relevant radionuclides have been tested to verify the calculated correction factors. • Accurately quantifying coincidence signals during routine analysis will allow dramatically improved detection
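
    The following toy calculation is not the software described above (which handles full cascades, conversion electrons and pair production automatically), but it illustrates the basic quantity being tabulated: the probability of registering a particular two-gamma coincidence, reduced by true-coincidence summing with other cascade members. All numbers are invented.

        def coincidence_probability(p_cascade, eff_peak_1, eff_peak_2, eff_total_others=0.0):
            """Probability of detecting a given gamma-gamma coincidence pair:
            cascade emission probability times the two full-energy peak efficiencies,
            corrected for the chance that another cascade photon is also detected
            (true-coincidence summing loss)."""
            summing_loss = 1.0 - eff_total_others
            return p_cascade * eff_peak_1 * eff_peak_2 * summing_loss

        # Hypothetical cascade: 90% emission probability, 5% and 4% peak efficiencies,
        # and a 10% total efficiency for a third, summing gamma.
        print(coincidence_probability(0.90, 0.05, 0.04, eff_total_others=0.10))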

  4. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference) with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets which demonstrates that some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show in east Africa that the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations and also that where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1 Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Diédhiou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834 Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K. Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. DOI:10

  5. Quantifying biodiversity and asymptotics for a sequence of random strings.

    Science.gov (United States)

    Koyano, Hitoshi; Kishino, Hirohisa

    2010-06-01

    We present a methodology for quantifying biodiversity at the sequence level by developing the probability theory on a set of strings. Further, we apply our methodology to the problem of quantifying the population diversity of microorganisms in several extreme environments and digestive organs and reveal the relation between microbial diversity and various environmental parameters.

  6. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    Science.gov (United States)

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  7. Gender Differences in Knee Joint Congruity Quantified from MRI

    DEFF Research Database (Denmark)

    Tummala, Sudhakar; Schiphof, Dieuwke; Byrjalsen, Inger

    2018-01-01

    was located and quantified using Euclidean distance transform. Furthermore, the CI was quantified over the contact area by assessing agreement of the first- and second-order general surface features. Then, the gender differences between CA and CI values were evaluated at different stages of radiographic OA...

  8. Anatomy of Alternating Quantifier Satisfiability (Work in progress)

    DEFF Research Database (Denmark)

    Dung, Phan Anh; Bjørner, Nikolaj; Monniaux, David

    We report on work in progress to generalize an algorithm recently introduced in [10] for checking satisfiability of formulas with quantifier alternation. The algorithm uses two auxiliary procedures: a procedure for producing a candidate formula for quantifier elimination and a procedure for elimi...

  9. The Role of Quantifier Alternations in Cut Elimination

    DEFF Research Database (Denmark)

    Gerhardy, Philipp

    2005-01-01

    Extending previous results on the complexity of cut elimination for the sequent calculus LK, we discuss the role of quantifier alternations and develop a measure to describe the complexity of cut elimination in terms of quantifier alternations in cut formulas and contractions on such formulas...

  10. QUANTIFYING THE SHORT LIFETIME WITH TCSPC-FLIM: FIRST MOMENT VERSUS FITTING METHODS

    Directory of Open Access Journals (Sweden)

    LINGLING XU

    2013-10-01

    Full Text Available Combining time-correlated single photon counting (TCSPC) with fluorescence lifetime imaging microscopy (FLIM) provides promising opportunities for revealing important information on the microenvironment of cells and tissues, but the applications are thus far mainly limited by the accuracy and precision of the TCSPC-FLIM technique. Here we present a comprehensive investigation of the performance of two data analysis methods, the first moment (M1) method and the conventional least squares (Fitting) method, in quantifying fluorescence lifetime. We found that the M1 method is superior to the Fitting method when the lifetime is short (70–400 ps) or the signal intensity is weak (<10^3 photons).
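
    A minimal sketch of the first-moment (M1) estimator discussed above is given below for a background-free, mono-exponential decay histogram; real TCSPC analysis must also handle the instrument response function and the finite measurement window, which this toy example ignores.

        import numpy as np

        def m1_lifetime(t, counts, t0=0.0):
            """First-moment (M1) lifetime estimate: intensity-weighted mean photon
            arrival time relative to the excitation time t0."""
            return np.sum(t * counts) / np.sum(counts) - t0

        # Simulated mono-exponential decay with tau = 200 ps on a 4 ps TCSPC grid.
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 4000.0, 4.0)                 # ps
        counts = rng.poisson(1e4 * np.exp(-t / 200.0))
        print(m1_lifetime(t, counts))                   # close to 200 ps (binning shifts it slightly)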

  11. Quantitative phosphoproteomics to characterize signaling networks

    DEFF Research Database (Denmark)

    Rigbolt, Kristoffer T G; Blagoev, Blagoy

    2012-01-01

    Reversible protein phosphorylation is involved in the regulation of most, if not all, major cellular processes via dynamic signal transduction pathways. During the last decade quantitative phosphoproteomics have evolved from a highly specialized area to a powerful and versatile platform for analyzing protein phosphorylation at a system-wide scale and has become the intuitive strategy for comprehensive characterization of signaling networks. Contemporary phosphoproteomics use highly optimized procedures for sample preparation, mass spectrometry and data analysis algorithms to identify and quantify thousands of phosphorylations, thus providing extensive overviews of the cellular signaling networks. As a result of these developments quantitative phosphoproteomics have been applied to study processes as diverse as immunology, stem cell biology and DNA damage. Here we review the developments...

  12. Expected Signal Observability at Future Experiments

    CERN Document Server

    Bartsch, Valeria

    2005-01-01

    Several methods to quantify the 'significance' of an expected signal at future experiments have been used or suggested in the literature. In this note, comparisons are presented with a method based on the likelihood ratio of the 'background hypothesis' and the 'signal-plus-background hypothesis'. A large number of Monte Carlo experiments are performed to investigate the properties of the various methods and to check whether the probability of a background fluctuation having produced the claimed significance of the discovery is properly described. In addition, the best possible separation between the two hypotheses should be provided; in other words, the discovery potential of a future experiment should be maximal. Finally, a practical method to apply a likelihood-based definition of the significance is suggested in this note. Signal and background contributions are determined from a likelihood fit based on shapes only, and the probability density distributions of the significance thus determined are found to be o...
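
    As an illustration of a likelihood-ratio-based significance, the widely used asymptotic ('Asimov') expression for a simple counting experiment is sketched below; the note itself works with shape-based likelihood fits, so this formula is a simplified stand-in rather than its exact procedure.

        import numpy as np

        def expected_significance(s, b):
            """Median expected significance for expected signal s on top of expected
            background b, from the profile likelihood ratio in the asymptotic limit."""
            return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

        # Compare with the naive s / sqrt(b) estimate, which overestimates for small backgrounds.
        s, b = 10.0, 5.0
        print(expected_significance(s, b), s / np.sqrt(b))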

  13. Identifying and quantifying main components of physiological noise in functional near infrared spectroscopy on prefrontal cortex

    Directory of Open Access Journals (Sweden)

    Evgeniya eKirilina

    2013-12-01

    Full Text Available Functional Near-Infrared Spectroscopy (fNIRS) is a promising method to study the functional organization of the prefrontal cortex. However, in order to realize the high potential of fNIRS, effective discrimination between physiological noise originating from forehead skin haemodynamics and cerebral signals is required. The main sources of physiological noise are global and local blood flow regulation processes on multiple time scales. The goal of the present study was to identify the main physiological noise contributions in fNIRS forehead signals and to develop a method for physiological de-noising of fNIRS data. To achieve this goal we combined concurrent time-domain fNIRS and peripheral physiology recordings with wavelet coherence analysis. Depth selectivity was achieved by analyzing moments of photon time-of-flight distributions provided by time-domain fNIRS. Simultaneously, mean arterial blood pressure (MAP), heart rate (HR), and skin blood flow (SBF) on the forehead were recorded. Wavelet coherence analysis was employed to quantify the impact of physiological processes on fNIRS signals separately for different time scales. We identified three main processes contributing to physiological noise in fNIRS signals on the forehead. The first process, with a period of about 3 s, is induced by respiration. The second process is highly correlated with time-lagged MAP and HR fluctuations with a period of about 10 s, often referred to as Mayer waves. The third process is local regulation of the facial skin blood flow, time-locked to the task-evoked fNIRS signals. All processes affect oxygenated haemoglobin concentration more strongly than that of deoxygenated haemoglobin. Based on these results we developed a set of physiological regressors, which were used for physiological de-noising of fNIRS signals. Our results demonstrate that the proposed de-noising method can significantly improve the sensitivity of fNIRS to cerebral signals.
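
    A minimal sketch of the regression step behind such physiological de-noising is shown below, assuming the peripheral recordings have already been resampled to the fNIRS time base; the study itself derives its regressors from wavelet coherence and time-lagged signals, which this example omits.

        import numpy as np

        def regress_out(fnirs, regressors):
            """Remove the part of an fNIRS channel explained by physiological
            regressors via ordinary least squares; returns the residual signal."""
            X = np.column_stack([np.ones(len(fnirs))] + list(regressors))   # design matrix with intercept
            beta, *_ = np.linalg.lstsq(X, fnirs, rcond=None)
            return fnirs - X @ beta

        # Hypothetical usage: hbo is an oxy-haemoglobin time course; map_, hr and sbf are
        # co-registered mean arterial pressure, heart rate and skin blood flow traces.
        # cleaned = regress_out(hbo, [map_, hr, sbf])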

  14. Signal sciences workshop proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.

    1997-05-01

    This meeting is aimed primarily at signal processing and controls. The technical program for the 1997 Workshop includes a variety of efforts in the Signal Sciences with applications in the Microtechnology Area, a new program at LLNL and a future area of application for both Signal/Image Sciences. Special sessions organized by various individuals in Seismic and Optical Signal Processing as well as Micro-Impulse Radar Processing highlight the program, while the speakers at the Signal Processing Applications session discuss various applications of signal processing/control to real world problems. For the more theoretical, a session on Signal Processing Algorithms was organized, while for the more pragmatic a session on Real-Time Signal Processing was featured.

  15. Signal sciences workshop. Proceedings

    International Nuclear Information System (INIS)

    Candy, J.V.

    1997-01-01

    This meeting is aimed primarily at signal processing and controls. The technical program for the 1997 Workshop includes a variety of efforts in the Signal Sciences with applications in the Microtechnology Area, a new program at LLNL and a future area of application for both Signal/Image Sciences. Special sessions organized by various individuals in Seismic and Optical Signal Processing as well as Micro-Impulse Radar Processing highlight the program, while the speakers at the Signal Processing Applications session discuss various applications of signal processing/control to real world problems. For the more theoretical, a session on Signal Processing Algorithms was organized, while for the more pragmatic a session on Real-Time Signal Processing was featured.

  16. Quantifying pelagic-benthic coupling in the North Sea: Are we asking the right question?

    DEFF Research Database (Denmark)

    Richardson, K.; Cedhagen, T.

    2002-01-01

    The coupling between pelagic and benthic processes occurs through the signals sent between the water column and the seabed. Huge methodological challenges are associated with the quantification of the signals being sent between these two domains - especially in a relatively shallow and heavily fished region such as the North Sea, where deployment of sediment traps or bottom-mounted cameras or samplers is difficult. Thus, there are relatively few sites in the North Sea for which good data are available for describing pelagic-benthic (or near shore-offshore) coupling, and considerable effort is devoted to obtaining more and better data describing this exchange. Efforts to quantify exchange between the water column and the sediment must continue. However, such studies will not, in themselves, lead to a quantification of pelagic-benthic coupling in the North Sea. We identify here other areas...

  17. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  18. ECG signal processing

    NARCIS (Netherlands)

    2009-01-01

    A system extracts an ECG signal from a composite signal (308) representing an electric measurement of a living subject. Identification means (304) identify a plurality of temporal segments (309) of the composite signal corresponding to a plurality of predetermined segments (202,204,206) of an ECG

  19. Optimal Signal Quality Index for Photoplethysmogram Signals

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2016-09-01

    Full Text Available A photoplethysmogram (PPG) is a noninvasive circulatory signal related to the pulsatile volume of blood in tissue and is typically collected by pulse oximeters. PPG signals collected via mobile devices are prone to artifacts that negatively impact measurement accuracy, which can lead to a significant number of misleading diagnoses. Given the rapidly increased use of mobile devices to collect PPG signals, developing an optimal signal quality index (SQI) is essential to classify the signal quality from these devices. Eight SQIs were developed and tested based on: perfusion, kurtosis, skewness, relative power, non-stationarity, zero crossing, entropy, and the matching of systolic wave detectors. Two independent annotators annotated all PPG data (106 recordings, 60 s each) and a third expert conducted the adjudication of differences. The independent annotators labeled each PPG signal with one of the following labels: excellent, acceptable or unfit for diagnosis. All indices were compared using Mahalanobis distance, linear discriminant analysis, quadratic discriminant analysis, and support vector machine with leave-one-out cross-validation. The skewness index outperformed the other seven indices in differentiating between excellent PPG and acceptable, acceptable combined with unfit, and unfit recordings, with overall F1 scores of 86.0%, 87.2%, and 79.1%, respectively.
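
    A minimal sketch of the best-performing index above, the skewness SQI, for a single 60 s PPG segment; the decision rule indicated in the comments is hypothetical, since the paper derives its class boundaries from annotated data and trained classifiers.

        import numpy as np
        from scipy.stats import skew

        def skewness_sqi(ppg_segment):
            """Skewness of the PPG sample distribution, used as a signal quality index."""
            return skew(np.asarray(ppg_segment, dtype=float))

        # Hypothetical rule of thumb: clean pulsatile PPG tends to sit in a characteristic
        # skewness range, whereas heavily corrupted segments drift outside it.
        # quality = "excellent" if skewness_sqi(segment) > some_threshold else "check further"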

  20. Optimal Signal Quality Index for Photoplethysmogram Signals.

    Science.gov (United States)

    Elgendi, Mohamed

    2016-09-22

    A photoplethysmogram (PPG) is a noninvasive circulatory signal related to the pulsatile volume of blood in tissue and is typically collected by pulse oximeters. PPG signals collected via mobile devices are prone to artifacts that negatively impact measurement accuracy, which can lead to a significant number of misleading diagnoses. Given the rapidly increased use of mobile devices to collect PPG signals, developing an optimal signal quality index (SQI) is essential to classify the signal quality from these devices. Eight SQIs were developed and tested based on: perfusion, kurtosis, skewness, relative power, non-stationarity, zero crossing, entropy, and the matching of systolic wave detectors. Two independent annotators annotated all PPG data (106 recordings, 60 s each) and a third expert conducted the adjudication of differences. The independent annotators labeled each PPG signal with one of the following labels: excellent, acceptable or unfit for diagnosis. All indices were compared using Mahalanobis distance, linear discriminant analysis, quadratic discriminant analysis, and support vector machine with leave-one-out cross-validation. The skewness index outperformed the other seven indices in differentiating between excellent PPG and acceptable, acceptable combined with unfit, and unfit recordings, with overall F 1 scores of 86.0%, 87.2%, and 79.1%, respectively.

  1. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark J.; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter

    2016-01-01

    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information
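
    A minimal sketch of a PLV estimate from two band-limited signals via the Hilbert transform is given below; the study additionally applies Singular Spectrum Decomposition before the transform, which is omitted here, and the test signals are synthetic.

        import numpy as np
        from scipy.signal import hilbert

        def phase_locking_value(x, y):
            """PLV between two narrow-band signals: magnitude of the mean unit phasor
            of their instantaneous phase difference (1 = perfect locking, 0 = none)."""
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
            return np.abs(np.mean(np.exp(1j * dphi)))

        # Two noisy 40 Hz oscillations with a constant phase lag.
        rng = np.random.default_rng(0)
        t = np.arange(0, 2.0, 1e-3)
        x = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(t.size)
        y = np.sin(2 * np.pi * 40 * t + 0.8) + 0.3 * rng.standard_normal(t.size)
        print(phase_locking_value(x, y))   # close to 1 despite the constant lag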

  2. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches.

    Directory of Open Access Journals (Sweden)

    Eric Lowet

    Full Text Available Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization

  3. Quantifying Tc-99 contamination in a fuel fabrication plant - 59024

    International Nuclear Information System (INIS)

    Darbyshire, Carol; Burgess, Pete

    2012-01-01

    The Springfields facility manufactures nuclear fuel products for the UK's nuclear power stations and for international customers. Fuel manufacture is scheduled to continue into the future. In addition to fuel manufacture, Springfields is also undertaking decommissioning activities. Today it is run and operated by Springfields Fuels Limited, under the management of Westinghouse Electric UK Limited. The site has been operating since 1946 manufacturing nuclear fuel. As part of the decommissioning activities, there was a need to quantify contamination in a large redundant building. This building had been used to process uranium derived from uranium ore concentrate but had also processed a limited quantity of recycled uranium. The major non-uranic contaminant was Tc-99. The aim was to be able to identify any areas where the bulk activity exceeded 0.4 Bq/g Tc-99 as this would preclude the demolition rubble being sent to the local disposal facility. The problems associated with this project were the presence of significant uranium contamination, the realisation that both the Tc-99 and the uranium had diffused into the brickwork to a significant depth, and the relatively low beta energy of Tc-99. The uranium was accompanied by Pa-234m, an energetic beta emitter. The concentration/depth profile was determined for several areas on the plant for Tc-99 and for uranium. The radiochemical analysis was performed locally but the performance of the local laboratory was checked during the initial investigation by splitting samples three ways and having confirmation analyses performed by 2 other laboratories. The results showed surprisingly consistent concentration gradients for Tc-99 and for uranium across the samples. Using that information, the instrument response was calculated for Tc-99 using the observed diffusion gradient and averaged through the full 225 mm of brick wall, as agreed by the regulator. The Tc-99 and uranium contributions to the detector signal were separated

  4. Quantifying and containing the curse of high resolution coronal imaging

    Directory of Open Access Journals (Sweden)

    V. Delouille

    2008-10-01

    Full Text Available Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe aim at approaching the Sun closer than ever before, carrying on board high resolution imagers (HRI) with a subsecond cadence and a pixel area of about (80 km)² at the Sun during perihelion. In order to guarantee their scientific success, it is necessary to evaluate whether the photon counts available at this resolution and cadence will provide a sufficient signal-to-noise ratio (SNR). For example, if the inhomogeneities in the Quiet Sun emission prevail at higher resolution, one may hope to locally have more photon counts than in the case of a uniform source. It is relevant to quantify how inhomogeneous the quiet corona will be for a pixel pitch that is about 20 times smaller than in the case of SoHO/EIT, and 5 times smaller than TRACE. We perform a first step in this direction by analyzing and characterizing the spatial intermittency of Quiet Sun images thanks to a multifractal analysis. We identify the parameters that specify the scale-invariance behavior. This identification then allows us to select a family of multifractal processes, namely the Compound Poisson Cascades, that can synthesize artificial images having some of the scale-invariance properties observed on the recorded images. The prevalence of self-similarity in Quiet Sun coronal images makes it relevant to study the ratio between the SNR present in SoHO/EIT images and in coarsened images. SoHO/EIT images thus play the role of "high resolution" images, whereas the "low-resolution" coarsened images are rebinned so as to simulate a smaller angular resolution and/or a larger distance to the Sun. For a fixed difference in angular resolution and in Spacecraft-Sun distance, we determine the proportion of pixels having a SNR preserved at high resolution given a particular increase in effective area. If scale-invariance continues to prevail at smaller scales, the conclusion reached with SoHO/EIT images can be transposed
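
    The photon-budget question above reduces to Poisson statistics: for a photon-noise-limited imager the per-pixel SNR is roughly the square root of the photon count, which scales with pixel area, exposure time and effective area. A back-of-the-envelope sketch follows; the numbers are purely illustrative and are not SO/HRI or EIT specifications.

        import numpy as np

        def snr_ratio(pixel_scale_ratio, exposure_ratio, effective_area_ratio):
            """Ratio of photon-noise-limited SNRs between a high-resolution imager and
            a reference instrument for a uniform source: counts scale with pixel area
            (scale^2), exposure time and effective area, and SNR ~ sqrt(counts)."""
            count_ratio = pixel_scale_ratio ** 2 * exposure_ratio * effective_area_ratio
            return np.sqrt(count_ratio)

        # A pixel ~20x smaller and a ~10x shorter exposure need a ~4000-fold increase in
        # collected photons per unit area and time to keep the same per-pixel SNR.
        print(snr_ratio(pixel_scale_ratio=1 / 20, exposure_ratio=1 / 10, effective_area_ratio=4000))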

  5. Second-hand signals

    DEFF Research Database (Denmark)

    Bergenholtz, Carsten

    2014-01-01

    Studies of signaling theory have traditionally focused on the dyadic link between the sender and receiver of the signal. Within a science‐based perspective this framing has led scholars to investigate how patents and publications of firms function as signals. I explore another important type...... used by various agents in their search for and assessment of products and firms. I conclude by arguing how this second‐hand nature of signals goes beyond a simple dyadic focus on senders and receivers of signals, and thus elucidates the more complex interrelations of the various types of agents...

  6. Quantified Effects of Late Pregnancy and Lactation on the Osmotic ...

    African Journals Online (AJOL)

    Quantified Effects of Late Pregnancy and Lactation on the Osmotic Stability of ... in the composition of erythrocyte membranes associated with the physiologic states. Keywords: Erythrocyteosmotic stability, osmotic fragility, late pregnancy, ...

  7. Study Quantifies Physical Demands of Yoga in Seniors

    Science.gov (United States)

    A recent NCCAM-funded study measured the ... performance of seven standing poses commonly taught in senior yoga classes: Chair, Wall Plank, Tree, Warrior II, ...

  8. Quantifying the economic water savings benefit of water hyacinth ...

    African Journals Online (AJOL)

    Quantifying the economic water savings benefit of water hyacinth ... Value Method was employed to estimate the average production value of irrigation water, ... invasions of this nature, as they present significant costs to the economy and ...

  9. Analyzing complex networks evolution through Information Theory quantifiers

    International Nuclear Information System (INIS)

    Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martin Gomez

    2011-01-01

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
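
    A minimal sketch of the first quantifier mentioned above, the square root of the Jensen-Shannon divergence between two discrete probability distributions (for instance, degree distributions of a network at two time points); the MPR statistical complexity is not reproduced here, and the example distributions are invented.

        import numpy as np

        def js_distance(p, q, eps=1e-12):
            """Square root of the Jensen-Shannon divergence (in nats) between two
            discrete probability distributions of equal length."""
            p = np.asarray(p, dtype=float) + eps
            q = np.asarray(q, dtype=float) + eps
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)
            kl = lambda a, b: np.sum(a * np.log(a / b))   # Kullback-Leibler divergence
            return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

        # Hypothetical degree distributions of a network before and after one rewiring step.
        print(js_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))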

  10. Analyzing complex networks evolution through Information Theory quantifiers

    Energy Technology Data Exchange (ETDEWEB)

    Carpi, Laura C., E-mail: Laura.Carpi@studentmail.newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Rosso, Osvaldo A., E-mail: rosso@fisica.ufmg.b [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina); Saco, Patricia M., E-mail: Patricia.Saco@newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Hidraulica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avenida Pellegrini 250, Rosario (Argentina); Ravetti, Martin Gomez, E-mail: martin.ravetti@dep.ufmg.b [Departamento de Engenharia de Producao, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Belo Horizonte (31270-901), MG (Brazil)

    2011-01-24

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.

  11. On the contrast between Germanic and Romance negated quantifiers

    Directory of Open Access Journals (Sweden)

    Robert Cirillo

    2009-01-01

    Full Text Available Universal quantifiers can be stranded in the manner described by Sportiche (1988), Giusti (1990) and Shlonsky (1991) in both the Romance and Germanic languages, but a negated universal quantifier can only be stranded in the Germanic languages. The goal of this paper is to show that this contrast between the Romance and the Germanic languages can be explained if one adapts the theory of sentential negation in Zeijlstra (2004) to constituent (quantifier) negation. According to Zeijlstra’s theory, a negation marker in the Romance languages is the head of a NegP that dominates vP, whereas in the Germanic languages a negation marker is a maximal projection that occupies the specifier position of a verbal phrase. I will show that the non-occurrence of stranded negated quantifiers in the Romance languages follows from the fact that negation markers in the Romance languages are highly positioned syntactic heads.

  12. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun; Ding, Yu; Xie, Le; Genton, Marc G.

    2014-01-01

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade

  13. Quantifying Functional Reuse from Object Oriented Requirements Specifications

    NARCIS (Netherlands)

    Condori-Fernandez, Nelly; Condori-Fernández, N.; Pastor, O; Daneva, Maia; Abran, A.; Castro, J.; Quer, C.; Carvallo, J. B.; Fernandes da Silva, L.

    2008-01-01

    Software reuse is essential for improving efficiency and productivity in the software development process. This paper analyses reuse within the requirements engineering phase by taking and adapting a standard functional size measurement method, COSMIC FFP. Our proposal attempts to quantify reusability

  14. User guide : process for quantifying the benefits of research.

    Science.gov (United States)

    2017-07-01

    The Minnesota Department of Transportation Research Services has adopted a process for quantifying the monetary benefits of research projects, such as the dollar value of particular ideas when implemented across the state's transportation system. T...

  15. How to Quantify Deterrence and Reduce Critical Infrastructure Risk

    OpenAIRE

    Taquechel, Eric F.; Lewis, Ted G.

    2012-01-01

    This article appeared in Homeland Security Affairs (August 2012), v.8, article 12 "We propose a definition of critical infrastructure deterrence and develop a methodology to explicitly quantify the deterrent effects of critical infrastructure security strategies. We leverage historical work on analyzing deterrence, game theory and utility theory. Our methodology quantifies deterrence as the extent to which an attacker's expected utility from an infrastructure attack changes after a defende...

  16. Method of signal analysis

    International Nuclear Information System (INIS)

    Berthomier, Charles

    1975-01-01

    A method capable of handling the amplitude and frequency-time laws of a certain kind of geophysical signals is described here. This method is based upon the analytical signal idea of Gabor and Ville, which is constructed either in the time domain by adding an imaginary part to the real signal (the in-quadrature signal), or in the frequency domain by suppressing negative frequency components. The instantaneous frequency of the initial signal is then defined as the time derivative of the phase of the analytical signal, and its amplitude, or envelope, as the modulus of this complex signal. The method is applied to three types of magnetospheric signals: chorus, whistlers and pearls. The results obtained by analog and numerical calculations are compared to results obtained by classical filter-based systems, i.e. systems based upon a different definition of the concept of frequency. The precision with which the frequency-time laws are determined then leads to an examination of the principle of the method, to a definition of the instantaneous power density spectrum attached to the signal, and to the first consequences of this definition. In this way, a two-dimensional representation of the signal is introduced which is less deformed by the properties of the analysis system than the usual representation, and which moreover has the advantage of being obtainable practically in real time [fr
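
    A minimal modern sketch of the construction described above, using the Hilbert transform to build the analytic signal and then reading off the envelope and the instantaneous frequency; the sampling rate and test signal are invented.

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0                                    # sampling rate in Hz (assumed)
        t = np.arange(0, 1.0, 1.0 / fs)
        x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 50 * t)   # AM test tone

        z = hilbert(x)                                 # analytic signal: x + i * H[x]
        envelope = np.abs(z)                           # instantaneous amplitude
        inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)   # time derivative of phase

        print(envelope.max(), inst_freq.mean())        # mean instantaneous frequency close to 50 Hz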

  17. Measurand transient signal suppressor

    Science.gov (United States)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor is presented for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values and the change is sustained for a selected discrete time interval. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
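
    A software analogue of the suppression logic described above (the patent describes an analog switch, timer and relay implementation; this sampled-data sketch and its parameters are hypothetical): a threshold crossing only produces an output once it has persisted for the selected suppression time.

        def suppress_transients(samples, threshold, hold_samples, rising=True):
            """Yield True once a threshold crossing in the selected direction has been
            sustained for hold_samples consecutive samples; shorter transients are suppressed."""
            run = 0
            for value in samples:
                crossed = value > threshold if rising else value < threshold
                run = run + 1 if crossed else 0
                yield run >= hold_samples

        # Example: a two-sample spike is ignored, a sustained excursion triggers the output.
        data = [0, 0, 5, 5, 0, 0, 5, 5, 5, 5, 5]
        print(list(suppress_transients(data, threshold=3, hold_samples=4)))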

  18. Exponential signaling gain at the receptor level enhances signal-to-noise ratio in bacterial chemotaxis.

    Directory of Open Access Journals (Sweden)

    Silke Neumann

    Full Text Available Cellular signaling systems show astonishing precision in their response to external stimuli despite strong fluctuations in the molecular components that determine pathway activity. To control the effects of noise on signaling most efficiently, living cells employ compensatory mechanisms that reach from simple negative feedback loops to robustly designed signaling architectures. Here, we report on a novel control mechanism that allows living cells to keep precision in their signaling characteristics - stationary pathway output, response amplitude, and relaxation time - in the presence of strong intracellular perturbations. The concept relies on the surprising fact that for systems showing perfect adaptation an exponential signal amplification at the receptor level suffices to eliminate slowly varying multiplicative noise. To show this mechanism at work in living systems, we quantified the response dynamics of the E. coli chemotaxis network after genetically perturbing the information flux between upstream and downstream signaling components. We give strong evidence that this signaling system results in dynamic invariance of the activated response regulator against multiplicative intracellular noise. We further demonstrate that for environmental conditions, for which precision in chemosensing is crucial, the invariant response behavior results in highest chemotactic efficiency. Our results resolve several puzzling features of the chemotaxis pathway that are widely conserved across prokaryotes but so far could not be attributed any functional role.

  19. Instruction of hematopoietic lineage choice by cytokine signaling

    Energy Technology Data Exchange (ETDEWEB)

    Endele, Max; Etzrodt, Martin; Schroeder, Timm, E-mail: timm.schroeder@bsse.ethz.ch

    2014-12-10

    Hematopoiesis is the cumulative consequence of finely tuned signaling pathways activated through extrinsic factors, such as local niche signals and systemic hematopoietic cytokines. Whether extrinsic factors actively instruct the lineage choice of hematopoietic stem and progenitor cells or only selectively allow survival and proliferation of already intrinsically lineage-committed cells has been debated for decades. Recent results demonstrated that cytokines can instruct lineage choice. However, the precise function of individual cytokine-triggered signaling molecules in inducing cellular events like proliferation, lineage choice, and differentiation remains largely elusive. Signal transduction pathways activated by different cytokine receptors are highly overlapping, yet they support the production of distinct hematopoietic lineages. Cellular context, signaling dynamics, and the crosstalk of different signaling pathways determine the cellular response to a given extrinsic signal. New tools to manipulate and continuously quantify signaling events at the single-cell level are therefore required to thoroughly interrogate how dynamic signaling networks yield a specific cellular response. - Highlights: • Recent studies provided definitive proof of the lineage-instructive action of cytokines. • The signaling pathways involved in hematopoietic lineage instruction remain elusive. • New tools are emerging to quantitatively study dynamic signaling networks over time.

  20. Quantifying the range of cross-correlated fluctuations using a q-L dependent AHXA coefficient

    Science.gov (United States)

    Wang, Fang; Wang, Lin; Chen, Yuming

    2018-03-01

    Recently, based on analogous height cross-correlation analysis (AHXA), a cross-correlation coefficient ρ×(L) has been proposed to quantify the levels of cross-correlation on different temporal scales for bivariate series. A limitation of this coefficient is that it cannot capture the full information of cross-correlations over the amplitudes of fluctuations. In fact, it only detects the cross-correlation at a specific fluctuation order, which may neglect important information carried by fluctuations of other orders. To overcome this disadvantage, in this work, based on the scaling of the qth order covariance with the time delay L, we define a two-parameter cross-correlation coefficient ρq(L) to detect and quantify the range and level of cross-correlations. This new ρq(L) coefficient defines a ρq(L) surface, which not only quantifies the level of cross-correlations but also allows us to identify the range of fluctuation amplitudes that are correlated in two given signals. Applications to the classical ARFIMA models and the binomial multifractal series illustrate the feasibility of this new coefficient ρq(L). In addition, a statistical test is proposed to assess the existence of cross-correlations between two given series. Applying our method to real-life empirical data from the 1999-2000 California electricity market, we find that the California power crisis in 2000 destroyed the cross-correlation between the price and the load series but did not affect the correlation of the load series during and before the crisis.
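
    A simplified sketch of a q-dependent, lag-L cross-correlation coefficient is given below. It works on increments of the integrated profiles at time delay L and normalises a q-th order mixed moment by the corresponding auto-moments; this follows the spirit of the ρq(L) surface described above but is not necessarily the exact AHXA-based definition used in the paper.

      import numpy as np

      def rho_qL(x, y, q, L):
          """Toy q-th order, lag-L cross-correlation coefficient (see note above)."""
          X = np.cumsum(x - np.mean(x))              # profile of series x
          Y = np.cumsum(y - np.mean(y))              # profile of series y
          dX, dY = X[L:] - X[:-L], Y[L:] - Y[:-L]    # height differences at lag L
          cross = np.mean(np.abs(dX * dY) ** (q / 2.0) * np.sign(dX * dY))
          fx, fy = np.mean(np.abs(dX) ** q), np.mean(np.abs(dY) ** q)
          return cross / np.sqrt(fx * fy)            # bounded in [-1, 1] by Cauchy-Schwarz

      # Two synthetic series sharing a common component (purely illustrative).
      rng = np.random.default_rng(0)
      common = rng.standard_normal(10_000)
      x = common + 0.5 * rng.standard_normal(10_000)
      y = common + 0.5 * rng.standard_normal(10_000)
      surface = [[rho_qL(x, y, q, L) for L in (1, 4, 16, 64)] for q in (1, 2, 4)]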

  1. Wnt signaling in cancer

    Science.gov (United States)

    Zhan, T; Rindtorff, N; Boutros, M

    2017-01-01

    Wnt signaling is one of the key cascades regulating development and stemness, and has also been tightly associated with cancer. The role of Wnt signaling in carcinogenesis has most prominently been described for colorectal cancer, but aberrant Wnt signaling is observed in many more cancer entities. Here, we review current insights into novel components of Wnt pathways and describe their impact on cancer development. Furthermore, we highlight expanding functions of Wnt signaling for both solid and liquid tumors. We also describe current findings on how Wnt signaling affects the maintenance of cancer stem cells, metastasis and immune control. Finally, we provide an overview of current strategies to antagonize Wnt signaling in cancer and the challenges associated with such approaches. PMID:27617575

  2. Outsourced Probe Data Effectiveness on Signalized Arterials

    Energy Technology Data Exchange (ETDEWEB)

    Young, Stanley E [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sharifi, Elham [University of Maryland; Eshragh, Sepideh [University of Maryland; Hamedi, Masoud [University of Maryland; Juster, Reuben M. [University of Maryland; Kaushik, Kartik [University of Maryland

    2017-07-31

    This paper presents results of an I-95 Corridor Coalition sponsored project to assess the ability of outsourced vehicle probe data to provide accurate travel times on signalized roadways, for the purposes of both real-time operations and performance measures. The quality of outsourced probe data on freeways has led many departments of transportation to consider such data for arterial performance monitoring. From April 2013 through June 2014, the University of Maryland Center for Advanced Transportation Technology gathered travel times from several arterial corridors within the mid-Atlantic region using Bluetooth traffic monitoring (BTM) equipment, and compared these travel times with the data reported to the I-95 Vehicle Probe Project (VPP) by an outsourced probe data vendor. The analysis consisted of several methodologies: (1) a traditional analysis that used precision and bias speed metrics; (2) a slowdown analysis that quantified the percentage of significant traffic disruptions accurately captured in the VPP data; (3) a sampled distribution method that used overlay techniques to enhance and analyze recurring congestion patterns; and (4) a review of the BTM and VPP data from each 24-hour period of data collection by the research team, to assess the extent to which the VPP data captured the nature of the traffic flow. Based on the analysis, probe data is recommended only on arterial roadways with signal densities (measured in signals per mile) up to one, should be tested and used with caution for signal densities between one and two, and is not recommended when signal density exceeds two.

  3. Traffic signal synchronization.

    Science.gov (United States)

    Huang, Ding-wei; Huang, Wei-neng

    2003-05-01

    The benefits of traffic signal synchronization are examined within the cellular automata approach. The microsimulations of traffic flow are obtained with different settings of signal period T and time delay delta. Both numerical results and analytical approximations are presented. For undersaturated traffic, the green-light wave solutions can be realized. For saturated traffic, the correlation among the traffic signals has no effect on the throughput. For oversaturated traffic, the benefits of synchronization are manifest only when stochastic noise is suppressed.
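
    A minimal cellular-automaton sketch of such a setup is given below: a single-lane ring road with a parallel rule-184 update and evenly spaced signals of period T and a per-signal offset delta. All sizes and parameter values are invented; the model in the paper is more detailed.

      import numpy as np

      def simulate(density=0.3, road_length=400, n_signals=8, period=40,
                   offset=0, steps=2000, seed=1):
          """Average flow (moves per site per step) of a rule-184 ring road with
          evenly spaced traffic signals of the given period and per-signal offset."""
          rng = np.random.default_rng(seed)
          road = np.zeros(road_length, dtype=bool)
          road[rng.choice(road_length, int(density * road_length), replace=False)] = True
          signal_pos = np.arange(n_signals) * (road_length // n_signals)
          moves = 0
          for t in range(steps):
              green = ((t - np.arange(n_signals) * offset) % period) < period // 2
              red_cells = set(signal_pos[~green])
              nxt = road.copy()
              for i in np.flatnonzero(road):
                  j = (i + 1) % road_length
                  if not road[j] and j not in red_cells:   # move only into empty, non-red cells
                      nxt[i], nxt[j] = False, True
                      moves += 1
              road = nxt
          return moves / (road_length * steps)

      # With a free-flow speed of 1 cell/step, an offset equal to the signal spacing
      # approximates a green wave, while offset 0 stops platoons at every signal.
      print(simulate(offset=50), simulate(offset=0))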

  4. Digital signal processing the Tevatron BPM signals

    International Nuclear Information System (INIS)

    Cancelo, G.; James, E.; Wolbers, S.

    2005-01-01

    The Beam Position Monitor (TeV BPM) readout system at Fermilab's Tevatron has been updated and is currently being commissioned. The new BPMs use new analog and digital hardware to achieve better beam position measurement resolution. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton measurements. The signals provided by the two ends of the BPM pickups are processed by analog band-pass filters and sampled by 14-bit ADCs at 74.3 MHz. A crucial part of this work has been the design of digital filters that process the signal. This paper describes the digital processing and estimation techniques used to optimize the beam position measurement. The BPM electronics must operate in narrow-band and wide-band modes to enable measurements of closed-orbit and turn-by-turn positions. The filtering and timing conditions of the signals are tuned accordingly for the operational modes. The analysis and the optimized result for each mode are presented.
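
    The position estimate itself is commonly formed from the band-limited amplitudes of the two pickup signals via a difference-over-sum ratio. The sketch below illustrates that generic idea only; the filter design, sampling and calibration constants here are invented and are not the Tevatron implementation.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def bpm_position(plate_a, plate_b, fs, f0, bw, k_mm=10.0):
          """Toy beam-position estimate from two stripline pickup signals:
          band-pass both around f0, take RMS amplitudes, and scale the
          difference-over-sum ratio by a made-up calibration constant k_mm."""
          sos = butter(4, [f0 - bw / 2, f0 + bw / 2], btype="bandpass", fs=fs, output="sos")
          a = sosfiltfilt(sos, plate_a)
          b = sosfiltfilt(sos, plate_b)
          amp_a, amp_b = np.sqrt(np.mean(a ** 2)), np.sqrt(np.mean(b ** 2))
          return k_mm * (amp_a - amp_b) / (amp_a + amp_b)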

  5. Nichtkontinuierliche (zeitdiskrete) Signale

    Science.gov (United States)

    Plaßmann, Wilfried

    Discrete-time signals are frequently generated from continuous-time signals by sampling. Shannon's sampling theorem (Chap. 116) shows that the two signals are equivalent, provided the condition of (116.2), f_ab ≈ (2.2 … 4) · f_g, is observed. Digital signals have advantages: simple storage, further processing in computers, and transmission that is largely insensitive to interference. The tools presented in this chapter serve for processing such signals: the discrete Fourier transform; the fast Fourier transform; the z-transform: representation, theorems on the z-transform, correspondences to time functions, examples.
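
    As a small illustration of the quoted sampling rule of thumb, the discrete Fourier transform below locates a 1 kHz tone sampled at three times its frequency; the numbers are invented for the example.

      import numpy as np

      f_sig, fs, n = 1000.0, 3000.0, 1024          # tone, sampling rate (~3 x f_g), samples
      t = np.arange(n) / fs
      x = np.sin(2 * np.pi * f_sig * t)

      spectrum = np.fft.rfft(x * np.hanning(n))    # windowed FFT of the sampled signal
      freqs = np.fft.rfftfreq(n, d=1.0 / fs)
      print(f"dominant component near {freqs[np.argmax(np.abs(spectrum))]:.0f} Hz")  # ~1000 Hz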

  6. Biomedical signals and systems

    CERN Document Server

    Tranquillo, Joseph V

    2013-01-01

    Biomedical Signals and Systems is meant to accompany a one-semester undergraduate signals and systems course. It may also serve as a quick-start for graduate students or faculty interested in how signals and systems techniques can be applied to living systems. The biological nature of the examples allows for systems thinking to be applied to electrical, mechanical, fluid, chemical, thermal and even optical systems. Each chapter focuses on a topic from classic signals and systems theory: System block diagrams, mathematical models, transforms, stability, feedback, system response, control, time

  7. Radiation signal processing system

    International Nuclear Information System (INIS)

    Bennett, M.; Knoll, G.; Strange, D.

    1980-01-01

    An improved signal processing system for radiation imaging apparatus comprises: a radiation transducer producing transducer signals proportional to apparent spatial coordinates of detected radiation events; means for storing true spatial coordinates corresponding to a plurality of predetermined apparent spatial coordinates relative to selected detected radiation events said means for storing responsive to said transducer signal and producing an output signal representative of said true spatial coordinates; and means for interpolating the true spatial coordinates of the detected radiation events located intermediate the stored true spatial coordinates, said means for interpolating communicating with said means for storing

  8. Digital signal processing

    CERN Document Server

    O'Shea, Peter; Hussain, Zahir M

    2011-01-01

    In three parts, this book contributes to the advancement of engineering education and that serves as a general reference on digital signal processing. Part I presents the basics of analog and digital signals and systems in the time and frequency domain. It covers the core topics: convolution, transforms, filters, and random signal analysis. It also treats important applications including signal detection in noise, radar range estimation for airborne targets, binary communication systems, channel estimation, banking and financial applications, and audio effects production. Part II considers sel

  9. Quantifying fluctuations of resting state networks using arterial spin labeling perfusion MRI.

    Science.gov (United States)

    Dai, Weiying; Varma, Gopal; Scheidegger, Rachel; Alsop, David C

    2016-03-01

    Blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) has been widely used to investigate spontaneous low-frequency signal fluctuations across brain resting state networks. However, BOLD only provides relative measures of signal fluctuations. Arterial Spin Labeling (ASL) MRI holds great potential for quantitative measurements of resting state network fluctuations. This study systematically quantified signal fluctuations of the large-scale resting state networks using ASL data from 20 healthy volunteers by separating them from global signal fluctuations and fluctuations caused by residual noise. Global ASL signal fluctuation was 7.59% ± 1.47% relative to the ASL baseline perfusion. Fluctuations of the seven detected resting state networks varied from 2.96% ± 0.93% to 6.71% ± 2.35%. Fluctuations of networks and residual noise were 6.05% ± 1.18% and 6.78% ± 1.16% for 4-mm resolution ASL data smoothed with a 6-mm Gaussian kernel. However, network fluctuations were reduced by 7.77% ± 1.56%, while residual noise fluctuation was markedly reduced by 39.75% ± 2.90%, when a 12-mm smoothing kernel was applied to the ASL data. Therefore, global and network fluctuations are the dominant structured noise sources in ASL data. Quantitative measurements of resting state networks may enable improved noise reduction and provide insights into the function of healthy and diseased brain. © The Author(s) 2015.
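
    Read literally, the reported percentages express temporal fluctuation relative to baseline perfusion. A simplified sketch of that measure (preprocessing, network separation and noise regression are omitted, and the numbers are invented):

      import numpy as np

      def fluctuation_percent(perfusion_ts, baseline):
          """Temporal fluctuation of a perfusion time series as % of baseline."""
          return 100.0 * np.std(perfusion_ts, axis=-1) / baseline

      # Toy example: a signal fluctuating ~5% around a baseline of 60 ml/100g/min.
      rng = np.random.default_rng(0)
      ts = 60.0 + 3.0 * rng.standard_normal(200)
      print(fluctuation_percent(ts, baseline=60.0))   # ~5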

  10. Drought Signaling in Plants

    Indian Academy of Sciences (India)

    depending upon the source and nature of signaling: (i) hormone signal, (ii) .... plants to regulate the rate of transpiration through minor structural .... cell has to keep spending energy (in the form of ATP) to maintain a .... enzymes and proteins in the regulation of cellular metabolism can be determined by either inactivating.

  11. Ubiquitination in apoptosis signaling

    NARCIS (Netherlands)

    van de Kooij, L.W.

    2014-01-01

    The work described in this thesis focuses on ubiquitination and protein degradation, with an emphasis on how these processes regulate apoptosis signaling. More specifically, our aims were: 1. To increase the understanding of ubiquitin-mediated regulation of apoptosis signaling. 2. To identify the E3

  12. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  13. SignalR blueprints

    CERN Document Server

    Ingebrigtsen, Einar

    2015-01-01

    This book is designed for software developers, primarily those with knowledge of C#, .NET, and JavaScript. Good knowledge and understanding of SignalR is assumed to allow efficient programming of core elements and applications in SignalR.

  14. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  15. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2011-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue to digital

  16. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2010-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue to digital

  17. Updating signal typing in voice: addition of type 4 signals.

    Science.gov (United States)

    Sprecher, Alicia; Olszewski, Aleksandra; Jiang, Jack J; Zhang, Yu

    2010-06-01

    The addition of a fourth type of voice to Titze's voice classification scheme is proposed. This fourth voice type is characterized by primarily stochastic noise behavior and is therefore unsuitable for both perturbation and correlation dimension analysis. Forty voice samples were classified into the proposed four types using narrowband spectrograms. Acoustic, perceptual, and correlation dimension analyses were completed for all voice samples. Perturbation measures tended to increase with voice type. Based on reliability cutoffs, the type 1 and type 2 voices were considered suitable for perturbation analysis. Measures of unreliability were higher for type 3 and type 4 voices. Correlation dimension values increased significantly with signal type, as indicated by a one-way analysis of variance. Notably, correlation dimension analysis could not quantify the type 4 voices. The proposed fourth voice type represents a subset of voices dominated by noise behavior. Current measures capable of evaluating type 4 voices provide only qualitative data (spectrograms, perceptual analysis, and an infinite correlation dimension). Type 4 voices are highly complex, and the development of objective measures capable of analyzing these voices remains a topic of future investigation.

  18. Bioelectric Signal Measuring System

    Science.gov (United States)

    Guadarrama-Santana, A.; Pólo-Parada, L.; García-Valenzuela, A.

    2015-01-01

    We describe a low-noise measuring system based on interdigitated electrodes for sensing bioelectrical signals. The system registers differential voltage measurements on the order of microvolts. The base noise during measurements was in the nanovolt range, so the sensed signals presented a very good signal-to-noise ratio. An excitation voltage of 1 Vrms at 10 kHz was applied simultaneously to an interdigitated capacitive sensor without a material under test and to a mirror device. The output signals of both devices were then subtracted in order to obtain an initial reference value near zero volts and to reduce parasitic capacitances due to the electronics, wiring and system hardware. The response of the measuring system was characterized by monitoring, in real time, temporal bioelectrical signals of biological materials such as embryo chicken heart cells and bovine suprarenal gland cells.
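
    The mirror-device subtraction is a standard differential trick: contributions common to both channels (excitation feedthrough, parasitics of the shared electronics and wiring) cancel, leaving the biosignal. A purely illustrative sketch with invented numbers:

      import numpy as np

      def differential_reading(sensor_v, mirror_v):
          """Subtract the mirror-device output so common-mode terms cancel."""
          return np.asarray(sensor_v) - np.asarray(mirror_v)

      # Toy example: a ~2 uV random biosignal buried under 500 uV of common-mode pickup.
      rng = np.random.default_rng(0)
      t = np.arange(1000) / 1e6
      common = 500e-6 * np.sin(2 * np.pi * 10e3 * t)           # 10 kHz excitation feedthrough
      bio = 2e-6 * rng.standard_normal(t.size)
      print(differential_reading(common + bio, common).std())  # ~2e-6 V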

  19. Acoustic Signals and Systems

    DEFF Research Database (Denmark)

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook will present topics on signal processing which are important in a specific area of acoustics. These will be of interest to specialists in these areas because they will be presented from their technical perspective, rather than a generic engineering approach to signal processing. Non-specialists, or specialists from different areas, will find the self-contained chapters accessible and will be interested in the similarities and differences between the approaches and techniques used in different areas of acoustics.

  20. Molecular and Cellular Signaling

    CERN Document Server

    Beckerman, Martin

    2005-01-01

    A small number of signaling pathways, no more than a dozen or so, form a control layer that is responsible for all signaling in and between cells of the human body. The signaling proteins belonging to the control layer determine what kinds of cells are made during development and how they function during adult life. Malfunctions in the proteins belonging to the control layer are responsible for a host of human diseases ranging from neurological disorders to cancers. Most drugs target components in the control layer, and difficulties in drug design are intimately related to the architecture of the control layer. Molecular and Cellular Signaling provides an introduction to molecular and cellular signaling in biological systems with an emphasis on the underlying physical principles. The text is aimed at upper-level undergraduates, graduate students and individuals in medicine and pharmacology interested in broadening their understanding of how cells regulate and coordinate their core activities and how diseases ...

  1. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed.
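
    The quantized (clipped) Widrow-Hoff update replaces data and error values by their signs, which removes the multipliers from the weight-update hardware. A small floating-point sketch of a sign-sign LMS filter is shown below; the tap count, step size and test system are arbitrary and are not those of the 64-channel processor.

      import numpy as np

      def sign_sign_lms(x, d, n_taps=8, mu=1e-3):
          """Adapt an FIR filter so its output tracks d, using the clipped
          (sign-sign) Widrow-Hoff rule: w <- w + mu * sign(e) * sign(u)."""
          w = np.zeros(n_taps)
          y = np.zeros(len(x))
          for n in range(n_taps - 1, len(x)):
              u = x[n - n_taps + 1:n + 1][::-1]     # most recent sample first
              y[n] = w @ u
              e = d[n] - y[n]
              w += mu * np.sign(e) * np.sign(u)
          return w, y

      # Identify an unknown 4-tap system from noisy observations (toy example).
      rng = np.random.default_rng(0)
      x = rng.standard_normal(20_000)
      h = np.array([0.6, -0.3, 0.2, 0.1])
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      w, _ = sign_sign_lms(x, d, n_taps=4, mu=5e-4)   # w approaches h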

  2. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed

  3. Zinc Signals and Immunity.

    Science.gov (United States)

    Maywald, Martina; Wessels, Inga; Rink, Lothar

    2017-10-24

    Zinc homeostasis is crucial for an adequate function of the immune system. Zinc deficiency as well as zinc excess result in severe disturbances in immune cell numbers and activities, which can result in increased susceptibility to infections and the development of especially inflammatory diseases. This review focuses on the role of zinc in regulating intracellular signaling pathways in innate as well as adaptive immune cells. Main underlying molecular mechanisms and targets affected by altered zinc homeostasis, including kinases, caspases, phosphatases, and phosphodiesterases, will be highlighted in this article. In addition, the interplay of zinc homeostasis and the redox metabolism in affecting intracellular signaling will be emphasized. Key signaling pathways will be described in detail for the different cell types of the immune system. In this context, effects of fast zinc fluxes, taking place within a few seconds to minutes, will be distinguished from slower types of zinc signals, also designated as "zinc waves", and from late homeostatic zinc signals involving prolonged changes in intracellular zinc.

  4. Quantifying the perceived risks associated with nuclear energy issues

    International Nuclear Information System (INIS)

    Sandquist, G.M.

    2004-01-01

    A mathematical model is presented for quantifying and assessing perceived risks in an empirical manner. The analytical model provides for the identification and assignment of any number of quantifiable risk perception factors that can be incorporated within standard risk methodology. The risk perception factors used to demonstrate the model are those identified by social and behavioural scientists as the principal factors influencing people's perception of risks associated with major technical issues. These same risk factors are commonly associated with nuclear energy issues. A rational means is proposed for determining and quantifying these risk factors for a given application. The model should contribute to an improved understanding of the basis and logic of public risk perception and provide practical and effective means for addressing perceived risks when they arise over important technical issues and projects. (author)

  5. Quantifying the value of E and P technology

    International Nuclear Information System (INIS)

    Heinemann, R.F.; Donlon, W.P.; Hoefner, M.L.

    1996-01-01

    A quantitative value-to-cost analysis was performed for the upstream technology portfolio of Mobil Oil for the period 1993 to 1998 by quantifying the cost of developing and delivering various technologies and the net present value gained from technologies applied to thirty major assets. The value captured was classified into four general categories: (1) reduced capital costs, (2) reduced operating costs, (3) increased hydrocarbon production, and (4) increased proven reserves. The methodology used in quantifying the value-to-cost of upstream technologies and the results of the asset analysis are described, with examples of the value of technology to specific assets. A method to incorporate strategic considerations and business alignment to set overall program priorities is also discussed. Identifying and quantifying specific cases of technology application on an asset-by-asset basis was considered to be the principal advantage of using this method. figs

  6. Comparison of different functional EIT approaches to quantify tidal ventilation distribution.

    Science.gov (United States)

    Zhao, Zhanqi; Yun, Po-Jen; Kuo, Yen-Liang; Fu, Feng; Dai, Meng; Frerichs, Inez; Möller, Knut

    2018-01-30

    The aim of the study was to examine the pros and cons of different types of functional EIT (fEIT) for quantifying tidal ventilation distribution in a clinical setting. fEIT images were calculated from (1) the standard deviation of the pixel time curve, (2) regression coefficients between local and global impedance time curves, or (3) mean tidal variations. To characterize the temporal heterogeneity of tidal ventilation distribution, another fEIT image of pixel inspiration times is also proposed. fEIT-regression is very robust to signals with different phase information. When the respiratory signal must be distinguished from the heartbeat-related signal, or during high-frequency oscillatory ventilation, fEIT-regression is superior to the other types. fEIT-tidal variation is the most stable image type with respect to baseline shift. We recommend using this type of fEIT image for a preliminary evaluation of the acquired EIT data. However, all of these fEITs can be misleading in their assessment of ventilation distribution in the presence of temporal heterogeneity. The analysis software provided by currently available commercial EIT equipment offers only the standard deviation or the tidal variation fEIT. Considering the pros and cons of each fEIT type, we recommend embedding more types into the analysis software to allow physicians to deal with more complex clinical applications of on-line EIT measurements.
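
    A simplified sketch of the three image types is given below for a pixel time-series array; real EIT pipelines add filtering, breath detection and gating, so this is only a reading of the definitions, not the vendors' implementations.

      import numpy as np

      def feit_images(pixel_ts):
          """Three simple fEIT images from pixel_ts of shape (n_pixels, n_frames)."""
          global_ts = pixel_ts.sum(axis=0)                      # global impedance curve
          std_img = pixel_ts.std(axis=1)                        # (1) standard deviation image
          g0 = global_ts - global_ts.mean()
          p0 = pixel_ts - pixel_ts.mean(axis=1, keepdims=True)
          reg_img = (p0 @ g0) / (g0 @ g0)                       # (2) regression slope vs. global curve
          insp, expi = np.argmax(global_ts), np.argmin(global_ts)
          tv_img = pixel_ts[:, insp] - pixel_ts[:, expi]        # (3) tidal variation (single-breath toy)
          return std_img, reg_img, tv_img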

  7. Ventilation in Sewers Quantified by Measurements of CO2

    DEFF Research Database (Denmark)

    Fuglsang, Emil Dietz; Vollertsen, Jes; Nielsen, Asbjørn Haaning

    2012-01-01

    Understanding and quantifying ventilation in sewer systems is a prerequisite for predicting the transport of odorous and corrosive gases within the system as well as their interaction with the urban atmosphere. This paper studies ventilation in sewer systems quantified by measurements of the naturally occurring compound CO2. Most often, Danish wastewater is supersaturated with CO2 and hence a potential for stripping is present. A novel model was built based on the kinetics of the stripping process. It was applied to simulate ventilation rates from field measurements of wastewater temperature, p

  8. Orexin/Hypocretin Signaling.

    Science.gov (United States)

    Kukkonen, Jyrki P

    Orexin/hypocretin peptide (orexin-A and orexin-B) signaling is believed to take place via the two G-protein-coupled receptors (GPCRs) named OX1 and OX2 orexin receptors, as described in the previous chapters. Signaling of orexin peptides has been investigated in diverse cells endogenously expressing orexin receptors - mainly neurons but also other types of cells - and in recombinant cells expressing the receptors in a heterologous manner. Findings in the different systems are partially convergent but also indicate cellular background-specific signaling. The general picture suggests an inherently high degree of diversity in orexin receptor signaling. In the current chapter, I present orexin signaling at the cellular and molecular levels. Discussion of the connection to (potential) physiological orexin responses is only brief, since these are the focus of other chapters in this book. The same goes for the postsynaptic signaling mechanisms, which are dealt with in Burdakov: Postsynaptic actions of orexin. The current chapter is organized according to tissue type, starting from the central nervous system. Finally, receptor signaling pathways are discussed across tissues, cell types, and even species.

  9. Signal flow analysis

    CERN Document Server

    Abrahams, J R; Hiller, N

    1965-01-01

    Signal Flow Analysis provides information pertinent to the fundamental aspects of signal flow analysis. This book discusses the basic theory of signal flow graphs and shows their relation to the usual algebraic equations.Organized into seven chapters, this book begins with an overview of properties of a flow graph. This text then demonstrates how flow graphs can be applied to a wide range of electrical circuits that do not involve amplification. Other chapters deal with the parameters as well as circuit applications of transistors. This book discusses as well the variety of circuits using ther

  10. Quantifying the predictability of diaphragm motion during respiration with a noninvasive external marker

    International Nuclear Information System (INIS)

    Vedam, S.S.; Kini, V.R.; Keall, P.J.; Ramakrishnan, V.; Mostafavi, H.; Mohan, R.

    2003-01-01

    The aim of this work was to quantify the ability to predict intrafraction diaphragm motion from an external respiration signal during a course of radiotherapy. The data obtained included diaphragm motion traces from 63 fluoroscopic lung procedures for 5 patients, acquired simultaneously with respiratory motion signals (an infrared camera-based system was used to track abdominal wall motion). During these sessions, the patients were asked to breathe either (i) without instruction, (ii) with audio prompting, or (iii) using visual feedback. A statistical general linear model was formulated to describe the relationship between the respiration signal and diaphragm motion over all sessions and for all breathing training types. The model parameters derived from the first session for each patient were then used to predict the diaphragm motion for subsequent sessions based on the respiration signal. Quantification of the difference between the predicted and actual motion during each session determined our ability to predict diaphragm motion during a course of radiotherapy. This measure of diaphragm motion was also used to estimate clinical target volume (CTV) to planning target volume (PTV) margins for conventional, gated, and proposed four-dimensional (4D) radiotherapy. Results from statistical analysis indicated a strong linear relationship between the respiration signal and diaphragm motion (p<0.001) over all sessions, irrespective of session number (p=0.98) and breathing training type (p=0.19). Using model parameters obtained from the first session, diaphragm motion was predicted in subsequent sessions to within 0.1 cm (1 σ) for gated and 4D radiotherapy. Assuming a 0.4 cm setup error, superior-inferior CTV-PTV margins of 1.1 cm for conventional radiotherapy could be reduced to 0.8 cm for gated and 4D radiotherapy. The diaphragm motion is strongly correlated with the respiration signal obtained from the abdominal wall. This correlation can be used to predict diaphragm
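
    The prediction step amounts to fitting a linear map from the external respiration signal to diaphragm position in the first session and reusing the coefficients later. A toy sketch with synthetic traces (the actual study used a general linear model over all sessions and real fluoroscopy data):

      import numpy as np

      def fit_linear_model(respiration, diaphragm):
          """Least-squares fit of diaphragm position vs. respiration signal."""
          A = np.column_stack([np.ones_like(respiration), respiration])
          coeffs, *_ = np.linalg.lstsq(A, diaphragm, rcond=None)
          return coeffs                                   # [intercept, slope]

      # Session 1 (toy data): ~1.5 cm diaphragm excursion tracked by an abdominal signal.
      rng = np.random.default_rng(0)
      resp1 = np.sin(np.linspace(0, 20 * np.pi, 2000))
      dia1 = 0.2 + 0.75 * resp1 + 0.05 * rng.standard_normal(resp1.size)
      intercept, slope = fit_linear_model(resp1, dia1)

      # Later session: predict diaphragm motion from the respiration signal alone.
      resp2 = np.sin(np.linspace(0, 20 * np.pi, 2000) + 0.3)
      dia2 = 0.2 + 0.75 * resp2 + 0.05 * rng.standard_normal(resp2.size)
      residual_sd = np.std(dia2 - (intercept + slope * resp2))   # ~0.05 cm here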

  11. A user-oriented and quantifiable approach to irrigation design.

    NARCIS (Netherlands)

    Baars, E.; Bastiaansen, A.P.M.; Menenti, M.

    1995-01-01

    A new user-oriented approach is presented to apply marketing research techniques to quantify perceptions, preferences and utility values of farmers. This approach was applied to design an improved water distribution method for an irrigation scheme in Mendoza, Argentina. The approach comprises two

  12. Quantifying the CO{sub 2} permit price sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Gruell, Georg; Kiesel, Ruediger [Duisburg-Essen Univ., Essen (Germany). Inst. of Energy Trading and Financial Services

    2012-06-15

    Equilibrium models have been widely used in the literature with the aim of showing theoretical properties of emissions trading schemes. This paper applies equilibrium models to empirically study permit prices and to quantify the permit price sensitivity. In particular, we demonstrate that emission trading schemes both with and without banking are inherently prone to price jumps. (orig.)

  13. Quantifying Creative Destruction Entrepreneurship and Productivity in New Zealand

    OpenAIRE

    John McMillan

    2005-01-01

    This paper (a) provides a framework for quantifying any economy’s flexibility, and (b) reviews the evidence on New Zealand firms’ birth, growth and death. The data indicate that, by and large, the labour market and the financial market are doing their job.

  14. Comparing methods to quantify experimental transmission of infectious agents

    NARCIS (Netherlands)

    Velthuis, A.G.J.; Jong, de M.C.M.; Bree, de J.

    2007-01-01

    Transmission of an infectious agent can be quantified from experimental data using the transient-state (TS) algorithm. The TS algorithm is based on the stochastic SIR model and provides a time-dependent probability distribution over the number of infected individuals during an epidemic, with no need

  15. Quantifying Solar Cell Cracks in Photovoltaic Modules by Electroluminescence Imaging

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Hacke, Peter; Sera, Dezso

    2015-01-01

    This article proposes a method for quantifying the percentage of partially and totally disconnected solar cell cracks by analyzing electroluminescence images of the photovoltaic module taken under high- and low-current forward bias. The method is based on the analysis of the module’s electrolumin...

  16. Quantifying levels of animal activity using camera trap data

    NARCIS (Netherlands)

    Rowcliffe, J.M.; Kays, R.; Kranstauber, B.; Carbone, C.; Jansen, P.A.

    2014-01-01

    1.Activity level (the proportion of time that animals spend active) is a behavioural and ecological metric that can provide an indicator of energetics, foraging effort and exposure to risk. However, activity level is poorly known for free-living animals because it is difficult to quantify activity

  17. Information on Quantifiers and Argument Structure in English Learner's Dictionaries.

    Science.gov (United States)

    Lee, Thomas Hun-tak

    1993-01-01

    Lexicographers have been arguing for the inclusion of abstract and complex grammatical information in dictionaries. This paper examines the extent to which information about quantifiers and the argument structure of verbs is encoded in English learner's dictionaries. The Oxford Advanced Learner's Dictionary (1989), the Longman Dictionary of…

  18. Quantifying trail erosion and stream sedimentation with sediment tracers

    Science.gov (United States)

    Mark S. Riedel

    2006-01-01

    The impacts of forest disturbance and roads on stream sedimentation have been rigorously investigated and documented. While historical research on turbidity and suspended sediments has been thorough, studies of stream bed sedimentation have typically relied on semi-quantitative measures such as embeddedness or marginal pool depth. To directly quantify the...

  19. Coupling and quantifying resilience and sustainability in facilities management

    DEFF Research Database (Denmark)

    Cox, Rimante Andrasiunaite; Nielsen, Susanne Balslev; Rode, Carsten

    2015-01-01

    Purpose – The purpose of this paper is to consider how to couple and quantify resilience and sustainability, where sustainability refers not only to environmental impact, but also to economic and social impacts. The way a particular function of a building is provisioned may have significant repercussions beyond just resilience. The goal is to develop a decision support tool for facilities managers. Design/methodology/approach – A risk framework is used to quantify both resilience and sustainability in monetary terms. The risk framework allows resilience and sustainability to be coupled, so that the provisioning of a particular building can be investigated with consideration of functional, environmental, economic and, possibly, social dimensions. Findings – The method of coupling and quantifying resilience and sustainability (CQRS) is illustrated with a simple example that highlights how very different...

  20. Quantifying Time Dependent Moisture Storage and Transport Properties

    DEFF Research Database (Denmark)

    Peuhkuri, Ruut H

    2003-01-01

    This paper describes an experimental and numerical approach to quantify the time dependence of sorption mechanisms for some hygroscopic building - mostly insulation - materials. Some investigations of retarded sorption and non-Fickian phenomena, mostly on wood, have given inspiration to the present...

  1. A framework for quantifying net benefits of alternative prognostic models

    NARCIS (Netherlands)

    Rapsomaniki, E.; White, I.R.; Wood, A.M.; Thompson, S.G.; Feskens, E.J.M.; Kromhout, D.

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit)

  2. Using multiple linear regression techniques to quantify carbon ...

    African Journals Online (AJOL)

    Fallow ecosystems provide a significant carbon stock that can be quantified for inclusion in the accounts of global carbon budgets. Process and statistical models of productivity, though useful, are often technically rigid as the conditions for their application are not easy to satisfy. Multiple regression techniques have been ...

  3. Quantifying Stakeholder Values of VET Provision in the Netherlands

    Science.gov (United States)

    van der Sluis, Margriet E.; Reezigt, Gerry J.; Borghans, Lex

    2014-01-01

    It is well-known that the quality of vocational education and training (VET) depends on how well a given programme aligns with the values and interests of its stakeholders, but it is less well-known what these values and interests are and to what extent they are shared across different groups of stakeholders. We use vignettes to quantify the…

  4. Cross-linguistic patterns in the acquisition of quantifiers

    Science.gov (United States)

    Cummins, Chris; Gavarró, Anna; Kuvač Kraljević, Jelena; Hrzica, Gordana; Grohmann, Kleanthes K.; Skordi, Athina; Jensen de López, Kristine; Sundahl, Lone; van Hout, Angeliek; Hollebrandse, Bart; Overweg, Jessica; Faber, Myrthe; van Koert, Margreet; Smith, Nafsika; Vija, Maigi; Zupping, Sirli; Kunnari, Sari; Morisseau, Tiffany; Rusieshvili, Manana; Yatsushiro, Kazuko; Fengler, Anja; Varlokosta, Spyridoula; Konstantzou, Katerina; Farby, Shira; Guasti, Maria Teresa; Vernice, Mirta; Okabe, Reiko; Isobe, Miwa; Crosthwaite, Peter; Hong, Yoonjee; Balčiūnienė, Ingrida; Ahmad Nizar, Yanti Marina; Grech, Helen; Gatt, Daniela; Cheong, Win Nee; Asbjørnsen, Arve; Torkildsen, Janne von Koss; Haman, Ewa; Miękisz, Aneta; Gagarina, Natalia; Puzanova, Julia; Anđelković, Darinka; Savić, Maja; Jošić, Smiljana; Slančová, Daniela; Kapalková, Svetlana; Barberán, Tania; Özge, Duygu; Hassan, Saima; Chan, Cecilia Yuet Hung; Okubo, Tomoya; van der Lely, Heather; Sauerland, Uli; Noveck, Ira

    2016-01-01

    Learners of most languages are faced with the task of acquiring words to talk about number and quantity. Much is known about the order of acquisition of number words as well as the cognitive and perceptual systems and cultural practices that shape it. Substantially less is known about the acquisition of quantifiers. Here, we consider the extent to which systems and practices that support number word acquisition can be applied to quantifier acquisition and conclude that the two domains are largely distinct in this respect. Consequently, we hypothesize that the acquisition of quantifiers is constrained by a set of factors related to each quantifier’s specific meaning. We investigate competence with the expressions for “all,” “none,” “some,” “some…not,” and “most” in 31 languages, representing 11 language types, by testing 768 5-y-old children and 536 adults. We found a cross-linguistically similar order of acquisition of quantifiers, explicable in terms of four factors relating to their meaning and use. In addition, exploratory analyses reveal that language- and learner-specific factors, such as negative concord and gender, are significant predictors of variation. PMID:27482119

  5. FRAGSTATS: spatial pattern analysis program for quantifying landscape structure.

    Science.gov (United States)

    Kevin McGarigal; Barbara J. Marks

    1995-01-01

    This report describes a program, FRAGSTATS, developed to quantify landscape structure. FRAGSTATS offers a comprehensive choice of landscape metrics and was designed to be as versatile as possible. The program is almost completely automated and thus requires little technical training. Two separate versions of FRAGSTATS exist: one for vector images and one for raster...

  6. Quantifying Spin Hall Angles from Spin Pumping : Experiments and Theory

    NARCIS (Netherlands)

    Mosendz, O.; Pearson, J.E.; Fradin, F.Y.; Bauer, G.E.W.; Bader, S.D.; Hoffmann, A.

    2010-01-01

    Spin Hall effects intermix spin and charge currents even in nonmagnetic materials and, therefore, ultimately may allow the use of spin transport without the need for ferromagnets. We show how spin Hall effects can be quantified by integrating Ni80Fe20|normal metal (N) bilayers into a coplanar

  7. Quantifying Effectiveness of Streambank Stabilization Practices on Cedar River, Nebraska

    Directory of Open Access Journals (Sweden)

    Naisargi Dave

    2017-11-01

    Full Text Available Excessive sediment is a major pollutant to surface waters worldwide. In some watersheds, streambanks are a significant source of this sediment, leading to the expenditure of billions of dollars in stabilization projects. Although costly streambank stabilization projects have been implemented worldwide, long-term monitoring to quantify their success is lacking. There is a critical need to document the long-term success of streambank restoration projects. The objectives of this research were to (1) quantify streambank retreat before and after the stabilization of 18 streambanks on the Cedar River in North Central Nebraska, USA; (2) assess the impact of a large flood event; and (3) determine the most cost-efficient stabilization practice. The stabilized streambanks included jetties (10), rock-toe protection (1), slope reduction/gravel bank (1), a retaining wall (1), rock vanes (2), and tree revetments (3). Streambank retreat and accumulation were quantified using aerial images from 1993 to 2016. Though streambank retreat has been significant throughout the study period, a breached dam in 2010 caused major flooding and streambank erosion on the Cedar River. This large-scale flood enabled us to quantify the effect of one extreme event and evaluate the effectiveness of the stabilized streambanks. With a 70% success rate, jetties were the most cost-efficient practice and yielded the most deposition. If minimal risk is unacceptable, a more costly yet immobile practice such as a gravel bank or retaining wall is recommended.

  8. Quantifying carbon stores and decomposition in dead wood: A review

    Science.gov (United States)

    Matthew B. Russell; Shawn Fraver; Tuomas Aakala; Jeffrey H. Gove; Christopher W. Woodall; Anthony W. D’Amato; Mark J. Ducey

    2015-01-01

    The amount and dynamics of forest dead wood (both standing and downed) has been quantified by a variety of approaches throughout the forest science and ecology literature. Differences in the sampling and quantification of dead wood can lead to differences in our understanding of forests and their role in the sequestration and emissions of CO2, as...

  9. Quantifying soil respiration at landscape scales. Chapter 11

    Science.gov (United States)

    John B. Bradford; Michael G. Ryan

    2008-01-01

    Soil CO2, efflux, or soil respiration, represents a substantial component of carbon cycling in terrestrial ecosystems. Consequently, quantifying soil respiration over large areas and long time periods is an increasingly important goal. However, soil respiration rates vary dramatically in space and time in response to both environmental conditions...

  10. Lecture Note on Discrete Mathematics: Predicates and Quantifiers

    DEFF Research Database (Denmark)

    Nordbjerg, Finn Ebertsen

    2016-01-01

    This lecture note supplements the treatment of predicates and quantifiers given in standard textbooks on Discrete Mathematics (e.g.: [1]) and introduces the notation used in this course. We will present central concepts that are important, when predicate logic is used for specification...

  11. Quantifying the FIR interaction enhancement in paired galaxies

    International Nuclear Information System (INIS)

    Xu Cong; Sulentic, J.W.

    1990-01-01

    We studied the "Catalogue of Isolated Pairs of Galaxies in the Northern Hemisphere" by Karachentsev (1972) and a well-matched comparison sample taken from the "Catalogue of Isolated Galaxies" by Karachentseva (1973) in order to quantify the enhanced FIR emission properties of interacting galaxies. 8 refs, 6 figs

  12. A Sustainability Initiative to Quantify Carbon Sequestration by Campus Trees

    Science.gov (United States)

    Cox, Helen M.

    2012-01-01

    Over 3,900 trees on a university campus were inventoried by an instructor-led team of geography undergraduates in order to quantify the carbon sequestration associated with biomass growth. The setting of the project is described, together with its logistics, methodology, outcomes, and benefits. This hands-on project provided a team of students…

  13. Designing a systematic landscape monitoring approach for quantifying ecosystem services

    Science.gov (United States)

    A key problem encountered early on by governments striving to incorporate the ecosystem services concept into decision making is quantifying ecosystem services across large landscapes. Basically, they are faced with determining what to measure, how to measure it and how to aggre...

  14. Challenges in quantifying biosphere-atmosphere exchange of nitrogen species

    DEFF Research Database (Denmark)

    Sutton, M.A.; Nemitz, E.; Erisman, J.W.

    2007-01-01

    Recent research in nitrogen exchange with the atmosphere has separated research communities according to N form. The integrated perspective needed to quantify the net effect of N on greenhouse-gas balance is being addressed by the NitroEurope Integrated Project (NEU). Recent advances have depende...

  15. Quantifying and mapping spatial variability in simulated forest plots

    Science.gov (United States)

    Gavin R. Corral; Harold E. Burkhart

    2016-01-01

    We used computer simulations to test the efficacy of multivariate statistical methods to detect, quantify, and map the spatial variability of forest stands. Simulated stands were developed as regularly spaced plantations of loblolly pine (Pinus taeda L.). We assumed no effects of competition or mortality, but random variability was added to individual tree characteristics...

  16. Quantifying Ladder Fuels: A New Approach Using LiDAR

    Science.gov (United States)

    Heather Kramer; Brandon Collins; Maggi Kelly; Scott Stephens

    2014-01-01

    We investigated the relationship between LiDAR and ladder fuels in the northern Sierra Nevada, California USA. Ladder fuels are often targeted in hazardous fuel reduction treatments due to their role in propagating fire from the forest floor to tree crowns. Despite their importance, ladder fuels are difficult to quantify. One common approach is to calculate canopy base...

  17. Quantifying a Negative: How Homeland Security Adds Value

    Science.gov (United States)

    2015-12-01

    access to future victims. The law enforcement agency could then identify and quantify the value of future crimes. For example, if a serial killer is captured with evidence of the next victim or an established pattern of victimization, network theory could be used to identify the next

  18. Detailed Analysis of Torque Ripple in High Frequency Signal Injection based Sensorless PMSM Drives

    Directory of Open Access Journals (Sweden)

    Ravikumar Setty A.

    2017-01-01

    Full Text Available High-frequency signal injection based techniques are robust and well proven for estimating the rotor position from standstill to low speed. However, the injected high-frequency signal introduces high-frequency harmonics in the motor phase currents and results in significant output torque ripple. No detailed analysis exists in the literature of the effect of the injected signal frequency on torque ripple. The objective of this work is to study the torque ripple resulting from high-frequency signal injection in PMSM motor drives. Detailed MATLAB/Simulink simulations are carried out to quantify the torque ripple at different signal frequencies.

  19. Underwater Acoustic Signal Processing

    National Research Council Canada - National Science Library

    Culver, Richard L; Sibul, Leon H; Bradley, David L

    2007-01-01

    .... The research is directed toward passive sonar detection and classification, continuous wave (CW) and broadband signals, shallow water operation, both platform-mounted and distributed systems, and frequencies below 1 kHz...

  20. Signals and systems

    CERN Document Server

    Rao, K Deergha

    2018-01-01

    This textbook covers the fundamental theories of signals and systems analysis, while incorporating recent developments from integrated circuits technology into its examples. Starting with basic definitions in signal theory, the text explains the properties of continuous-time and discrete-time systems and their representation by differential equations and state space. From those tools, explanations for the processes of Fourier analysis, the Laplace transform, and the z-Transform provide new ways of experimenting with different kinds of time systems. The text also covers the separate classes of analog filters and their uses in signal processing applications. Intended for undergraduate electrical engineering students, chapter sections include exercise for review and practice for the systems concepts of each chapter. Along with exercises, the text includes MATLAB-based examples to allow readers to experiment with signals and systems code on their own. An online repository of the MATLAB code from this textbook can...

  1. Topological signal processing

    CERN Document Server

    Robinson, Michael

    2014-01-01

    Signal processing is the discipline of extracting information from collections of measurements. To be effective, the measurements must be organized and then filtered, detected, or transformed to expose the desired information.  Distortions caused by uncertainty, noise, and clutter degrade the performance of practical signal processing systems. In aggressively uncertain situations, the full truth about an underlying signal cannot be known.  This book develops the theory and practice of signal processing systems for these situations that extract useful, qualitative information using the mathematics of topology -- the study of spaces under continuous transformations.  Since the collection of continuous transformations is large and varied, tools which are topologically-motivated are automatically insensitive to substantial distortion. The target audience comprises practitioners as well as researchers, but the book may also be beneficial for graduate students.

  2. Acoustic MIMO signal processing

    CERN Document Server

    Huang, Yiteng; Chen, Jingdong

    2006-01-01

    A timely and important book addressing a variety of acoustic signal processing problems under multiple-input multiple-output (MIMO) scenarios. It uniquely investigates these problems within a unified framework offering a novel and penetrating analysis.

  3. Ultrahigh bandwidth signal processing

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo

    2016-01-01

    Optical time lenses have proven to be very versatile for advanced optical signal processing. Based on a controlled interplay between dispersion and phase-modulation by e.g. four-wave mixing, the processing is phase-preserving, and hence useful for all types of data signals including coherent multi-level modulation formats. This has enabled processing of phase-modulated spectrally efficient data signals, such as orthogonal frequency division multiplexed (OFDM) signals. In that case, a spectral telescope system was used, using two time lenses with different focal lengths (chirp rates), yielding a spectral ... regeneration ... These operations require a broad bandwidth nonlinear platform, and novel photonic integrated nonlinear platforms like aluminum gallium arsenide nano-waveguides used for 1.28 Tbaud optical signal processing will be described.

  4. Traffic Signal Cycle Lengths

    Data.gov (United States)

    Town of Chapel Hill, North Carolina — Traffic signal location list for the town of Chapel Hill. This data set includes light cycle information as well as intersection information. The Town of Chapel...

  5. Foundations of signal processing

    CERN Document Server

    Vetterli, Martin; Goyal, Vivek K

    2014-01-01

    This comprehensive and engaging textbook introduces the basic principles and techniques of signal processing, from the fundamental ideas of signals and systems theory to real-world applications. Students are introduced to the powerful foundations of modern signal processing, including the basic geometry of Hilbert space, the mathematics of Fourier transforms, and essentials of sampling, interpolation, approximation and compression. The authors discuss real-world issues and hurdles to using these tools, and ways of adapting them to overcome problems of finiteness and localisation, the limitations of uncertainty and computational costs. Standard engineering notation is used throughout, making mathematical examples easy for students to follow, understand and apply. It includes over 150 homework problems and over 180 worked examples, specifically designed to test and expand students' understanding of the fundamentals of signal processing, and is accompanied by extensive online materials designed to aid learning, ...

  6. Source of seismic signals

    Energy Technology Data Exchange (ETDEWEB)

    Frankovskii, B.A.; Khor' yakov, K.A.

    1980-08-30

    Patented is a source of seismic signals consisting of a shock generator with a basic low-voltage and an auxiliary high-voltage stator coil, a capacitive transformer and control switches. To increase the amplitude of signal excitation, a condenser battery and an auxiliary commutator are introduced into the device; they are connected in parallel and in series into the circuit of the main low-voltage stator coil.

  7. Halo-independence with quantified maximum entropy at DAMA/LIBRA

    Energy Technology Data Exchange (ETDEWEB)

    Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com [ARC Centre of Excellence for Particle Physics at the Tera-scale, Monash University, Melbourne, Victoria 3800 (Australia)

    2017-10-01

    Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.
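
    As a rough illustration of the entropic prior referred to above: in quantified maximum entropy (Skilling-type) analyses, the prior over a profile f is typically taken to be proportional to the exponential of an entropy measured relative to a default model m. The form below is that standard construction and is given only as a sketch; the abstract does not state the exact entropy functional used, and the default model m(v) stands in for the approximately Maxwellian profile.

    P(f \mid \beta) \propto \exp\!\big(\beta\, S[f]\big), \qquad S[f] = \int \Big[ f(v) - m(v) - f(v)\,\ln\frac{f(v)}{m(v)} \Big]\, \mathrm{d}^3 v

    In this construction, \beta \to \infty pins f to the Maxwellian default (the halo-dependent limit), while small \beta lets the data dominate (the halo-independent limit), consistent with the interpolation described in the abstract.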

  8. Detector and quantifier of ionizing x-radiation by indirect method

    International Nuclear Information System (INIS)

    Pablo, Aramayo; Roberto, Cruz; Luis, Rocha; Rotger Viviana I; Olivera, Juan Manuel

    2007-01-01

    The work presents the development of a device able to detect and quantify ionizing radiations. The transduction principle proposed for the design of the detector consists on using the properties of the fluorescent screens able to respond to the incident radiation with a proportional brightness. Though the method is well-known, it proved necessary to optimize the design of the detectors in order to get a greater efficiency in the relationship radiation/brightness; to that purpose, different models were tried out, varying its geometry and the optoelectronic device. The resultant signal was processed and presented in a visualization system. It is important to highlight that the project is in development and the results we obtained are preliminary

  9. Redox signaling in plants.

    Science.gov (United States)

    Foyer, Christine H; Noctor, Graham

    2013-06-01

    Our aim is to deliver an authoritative and challenging perspective of current concepts in plant redox signaling, focusing particularly on the complex interface between the redox and hormone-signaling pathways that allow precise control of plant growth and defense in response to metabolic triggers and environmental constraints and cues. Plants produce significant amounts of singlet oxygen and other reactive oxygen species (ROS) as a result of photosynthetic electron transport and metabolism. Such pathways contribute to the compartment-specific redox-regulated signaling systems in plant cells that convey information to the nucleus to regulate gene expression. Like the chloroplasts and mitochondria, the apoplast-cell wall compartment makes a significant contribution to the redox signaling network, but unlike these organelles, the apoplast has a low antioxidant-buffering capacity. The respective roles of ROS, low-molecular antioxidants, redox-active proteins, and antioxidant enzymes are considered in relation to the functions of plant hormones such as salicylic acid, jasmonic acid, and auxin, in the composite control of plant growth and defense. Regulation of redox gradients between key compartments in plant cells such as those across the plasma membrane facilitates flexible and multiple faceted opportunities for redox signaling that spans the intracellular and extracellular environments. In conclusion, plants are recognized as masters of the art of redox regulation that use oxidants and antioxidants as flexible integrators of signals from metabolism and the environment.

  10. Quantifying South East Asia's forest degradation using latest generation optical and radar satellite remote sensing

    Science.gov (United States)

    Broich, M.; Tulbure, M. G.; Wijaya, A.; Weisse, M.; Stolle, F.

    2017-12-01

    Deforestation and forest degradation form the 2nd largest source of anthropogenic CO2 emissions. While deforestation is being globally mapped with satellite image time series, degradation remains insufficiently quantified. Previous studies quantified degradation only for small-scale, local sites. A method suitable for accurate mapping across large areas has not yet been developed due to the variability of the low-magnitude and short-lived degradation signal and the absence of data with suitable resolution properties. Here we use a combination of newly available streams of free optical and radar image time series acquired by NASA and ESA, and HPC-based data science algorithms, to quantify degradation consistently across Southeast Asia (SEA). We used Sentinel-1 C-band radar data and NASA's new Harmonized Landsat 8 (L8) Sentinel-2 (S2) product (HLS) for cloud-free optical images. Our results show that dense time series of cloud-penetrating Sentinel-1 C-band radar can provide degradation alarm flags, while the HLS product of cloud-free optical images can unambiguously confirm degradation alarms. The detectability of degradation differed across SEA. In the seasonal forest of continental SEA, the reliability of our radar-based alarm flags increased as the variability in landscape moisture decreased in the dry season. We reliably confirmed alarms with optical image time series during the late dry season, when degradation in open-canopy forests becomes detectable once the undergrowth vegetation has died down. Conversely, in insular SEA, where landscape moisture is low, the radar time series generated degradation alarm flags with moderate to high reliability throughout the year, further confirmed with the HLS product. Based on the HLS product, we can now confirm degradation alarms within the time series; the combination provides better results than either data source on its own. Our results provide significant information with application for carbon trading policy and land management.

  11. Transient-Switch-Signal Suppressor

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Circuit delays transmission of switch-opening or switch-closing signal until after preset suppression time. Used to prevent transmission of undesired momentary switch signal. Basic mode of operation simple. Beginning of switch signal initiates timing sequence. If switch signal persists after preset suppression time, circuit transmits switch signal to external circuitry. If switch signal no longer present after suppression time, switch signal deemed transient, and circuit does not pass signal on to external circuitry, as though no switch signal had occurred. Suppression time preset at value large enough to allow for damping of underlying pressure wave or other mechanical transient.
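
    The suppression logic described above is essentially a time-based debounce: a switch signal is forwarded only if it is still asserted once the preset suppression time has elapsed. The sketch below illustrates that behaviour in software; the polling loop, the suppression_time value and the read_switch callback are illustrative assumptions, not part of the patented circuit.

```python
import time

def suppress_transients(read_switch, suppression_time=0.05, poll_interval=0.001):
    """Pass a switch signal on only if it persists past the suppression time.

    read_switch      -- callable returning True while the switch signal is present
    suppression_time -- preset suppression interval in seconds (illustrative value)

    Returns True if the signal is still present after the suppression time
    (a genuine switch event) and False if it has vanished (a transient).
    """
    if not read_switch():                 # no signal: nothing to transmit
        return False
    deadline = time.monotonic() + suppression_time
    while time.monotonic() < deadline:    # wait out the suppression window
        time.sleep(poll_interval)
    return read_switch()                  # still asserted -> forward the signal
```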

  12. Digital Signal Processing applied to Physical Signals

    CERN Document Server

    Alberto, Diego; Musa, L

    2011-01-01

    It is well known that many of the scientific and technological discoveries of the XXI century will depend on the capability of processing and understanding a huge quantity of data. With the advent of the digital era, a fully digital and automated treatment can be designed and performed. From data mining to data compression, from signal elaboration to noise reduction, a processing is essential to manage and enhance features of interest after every data acquisition (DAQ) session. In the near future, science will go towards interdisciplinary research. In this work there will be given an example of the application of signal processing to different fields of Physics from nuclear particle detectors to biomedical examinations. In Chapter 1 a brief description of the collaborations that allowed this thesis is given, together with a list of the publications co-produced by the author in these three years. The most important notations, definitions and acronyms used in the work are also provided. In Chapter 2, the last r...

  13. A Synthetic Phased Array Surface Acoustic Wave Sensor for Quantifying Bolt Tension

    Directory of Open Access Journals (Sweden)

    Rasim Guldiken

    2012-09-01

    Full Text Available In this paper, we report our findings on implementing a synthetic phased array surface acoustic wave sensor to quantify bolt tension. Maintaining proper bolt tension is important in many fields, such as for ensuring safe operation of civil infrastructures. Significant advantages of this relatively simple methodology are its ability to assess bolt tension without any contact with the bolt (enabling measurement at inaccessible locations), its capability to measure multiple bolts at a time, and the fact that it requires neither data collection during installation nor calibration. We performed detailed experiments on a custom-built flexible bench-top experimental setup consisting of a 1018 steel plate of 12.7 mm (½ in) thickness, a 6.4 mm (¼ in) grade 8 bolt and a stainless steel washer with 19 mm (¾ in) external diameter. Our results indicate that this method is not only capable of clearly distinguishing properly bolted joints from loosened joints but also capable of quantifying how loose the bolt actually is. We also conducted detailed signal-to-noise ratio (SNR) analysis and showed that the SNR value for the entire bolt tension range was sufficient for image reconstruction.

  14. Quantifying the resilience of an urban traffic-electric power coupled system

    International Nuclear Information System (INIS)

    Fotouhi, Hossein; Moryadee, Seksun; Miller-Hooks, Elise

    2017-01-01

    Transportation system resilience has been the subject of several recent studies. To assess the resilience of a transportation network, however, it is essential to model its interactions with and reliance on other lifelines. Prior works might consider these interactions implicitly, perhaps in the form of hazard impact scenarios wherein services from a second lifeline (e.g. power) are precluded due to a hazard event. In this paper, a bi-level, mixed-integer, stochastic program is presented for quantifying the resilience of a coupled traffic-power network under a host of potential natural or anthropogenic hazard-impact scenarios. A two-layer network representation is employed that includes details of both systems. Interdependencies between the urban traffic and electric power distribution systems are captured through linking variables and logical constraints. The modeling approach was applied on a case study developed on a portion of the signalized traffic-power distribution system in southern Minneapolis. The results of the case study show the importance of explicitly considering interdependencies between critical infrastructures in transportation resilience estimation. The results also provide insights on lifeline performance from an alternate power perspective. - Highlights: • Model interdependent infrastructure systems. • Provide method for quantifying resilience of coupled traffic and power networks. • Propose bi-level, mixed-integer, stochastic program. • Take a multi-hazard, stochastic futures approach.

  15. Mind the gap. Quantifying principal-agent problems in energy efficiency

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-10-15

    Energy efficiency presents a unique opportunity to address three energy-related challenges in IEA member countries: energy security, climate change, and economic development. Yet an energy-efficiency gap exists between actual and optimal energy use. That is, significant cost-effective energy efficiency potential is wasted because market barriers prevent countries from achieving optimal levels. Market barriers take many forms, including inadequate access to capital, isolation from price signals, information asymmetry, and split incentives. Though many studies have reported the existence of such market barriers, none so far have attempted to quantify the magnitude of their effect on energy use and efficiency. This publication is an unprecedented attempt to quantify the size of one of the most pervasive barriers to energy efficiency - principal-agent problems, or in common parlance, variations on the 'landlord-tenant' problem. In doing so, the book provides energy analysts and economists with unique insights into the amount of energy affected by principal-agent problems. Using an innovative methodology applied to eight case studies (covering commercial and residential sectors, and end-use appliances) from five different IEA countries, the analysis identifies over 3,800 PJ/year of affected energy use - that is, around 85% of the annual energy use of a country the size of Spain. The book builds on these findings to suggest a range of possible policy solutions that can reduce the impact of principal-agent problems and help policy makers mind the energy efficiency gap.

  16. Approach to quantify human dermal skin aging using multiphoton laser scanning microscopy

    Science.gov (United States)

    Puschmann, Stefan; Rahn, Christian-Dennis; Wenck, Horst; Gallinat, Stefan; Fischer, Frank

    2012-03-01

    Extracellular skin structures in human skin are impaired during intrinsic and extrinsic aging. Assessment of these dermal changes is conducted by subjective clinical evaluation and histological and molecular analysis. We aimed to develop a new parameter for the noninvasive quantitative determination of dermal skin alterations utilizing the high-resolution three-dimensional multiphoton laser scanning microscopy (MPLSM) technique. To quantify structural differences between chronically sun-exposed and sun-protected human skin, the respective collagen-specific second harmonic generation and the elastin-specific autofluorescence signals were recorded in young and elderly volunteers using the MPLSM technique. After image processing, the elastin-to-collagen ratio (ELCOR) was calculated. Results show that the ELCOR parameter of volar forearm skin significantly increases with age. For elderly volunteers, the ELCOR value calculated for the chronically sun-exposed temple area is significantly augmented compared to the sun-protected upper arm area. Based on the MPLSM technology, we introduce the ELCOR parameter as a new means to quantify accurately age-associated alterations in the extracellular matrix.
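
    A minimal sketch of the ELCOR computation described above, assuming two co-registered MPLSM image channels (elastin autofluorescence and collagen second harmonic generation); the array names, the optional background subtraction and the whole-image summation are illustrative simplifications, not the authors' full image-processing pipeline.

```python
import numpy as np

def elcor(elastin_af, collagen_shg, background=0.0):
    """Elastin-to-collagen ratio (ELCOR) from two co-registered image channels.

    elastin_af   -- array of elastin autofluorescence intensities
    collagen_shg -- array of collagen second-harmonic-generation intensities
    background   -- optional intensity offset subtracted from both channels
    """
    e = np.clip(np.asarray(elastin_af, dtype=float) - background, 0.0, None)
    c = np.clip(np.asarray(collagen_shg, dtype=float) - background, 0.0, None)
    return e.sum() / c.sum()
```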

  17. Detecting the chaotic nature in a transitional boundary layer using symbolic information-theory quantifiers.

    Science.gov (United States)

    Zhang, Wen; Liu, Peiqing; Guo, Hao; Wang, Jinjun

    2017-11-01

    The permutation entropy and the statistical complexity are employed to study the boundary-layer transition induced by surface roughness. The velocity signals measured in the transition process are analyzed with these symbolic quantifiers, as well as the complexity-entropy causality plane, and the chaotic nature of the instability fluctuations is identified. The frequency of the dominant fluctuations has been found according to the time scales corresponding to the extreme values of the symbolic quantifiers. The laminar-turbulent transition process is accompanied by an evolution in the degree of organization of the complex eddy motions, which is also characterized by progressively smaller and flatter circles in the complexity-entropy causality plane. With the help of the permutation entropy and the statistical complexity, the differences between the chaotic fluctuations detected in the experiments and the classical Tollmien-Schlichting wave are shown and discussed. It is also found that the chaotic features of the instability fluctuations can be approximated with a number of regular sine waves superimposed on the fluctuations of the undisturbed laminar boundary layer. This result is related to the physical mechanism in the generation of the instability fluctuations, which is noise-induced chaos.
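
    For readers unfamiliar with the symbolic quantifiers mentioned above, the following is a minimal sketch of the normalized Bandt-Pompe permutation entropy for a sampled velocity signal; the embedding dimension and delay are illustrative choices, and the statistical complexity and causality-plane construction used in the paper are not reproduced here.

```python
import numpy as np
from math import log, factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1D signal.

    order -- embedding dimension (illustrative choice)
    delay -- embedding time delay
    Returns a value in [0, 1]; values close to 1 indicate noise-like dynamics.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # ordinal pattern of the embedded vector x[i], x[i+delay], ...
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / log(factorial(order)))
```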

  18. The benefits and risks of quantified relationship technologies : response to open peer commentaries on "the quantified relationship"

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.D.

    2018-01-01

    Our critics argue that quantified relationships (QR) will threaten privacy, undermine autonomy, reinforce problematic business models, and promote epistemic injustice. We do not deny these risks. But to determine the appropriate policy response, it will be necessary to assess their likelihood,

  19. Cellular signalling properties in microcircuits

    DEFF Research Database (Denmark)

    Toledo-Rodriguez, Maria; El Manira, Abdeljabbar; Wallén, Peter

    2005-01-01

    Molecules and cells are the signalling elements in microcircuits. Recent studies have uncovered bewildering diversity in postsynaptic signalling properties in all areas of the vertebrate nervous system. Major effort is now being invested in establishing the specialized signalling properties...

  20. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    Science.gov (United States)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also holds implications for specific behavioral tracking when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed; in particular, the data reduction algorithms and logic to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation

  1. A new paradigm of quantifying ecosystem stress through chemical signatures

    Energy Technology Data Exchange (ETDEWEB)

    Kravitz, Ben [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, P.O. Box 999, MSIN K9-30 Richland Washington 99352 USA; Guenther, Alex B. [Department of Earth System Science, University of California Irvine, 3200 Croul Hall Street Irvine California 92697 USA; Gu, Lianhong [Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge Tennessee 37831 USA; Karl, Thomas [Institute of Atmospheric and Cryospheric Sciences, University of Innsbruck, Innrain 52f A-6020 Innsbruck Austria; Kaser, Lisa [National Center for Atmospheric Research, P.O. Box 3000 Boulder Colorado 80307 USA; Pallardy, Stephen G. [Department of Forestry, University of Missouri, 203 Anheuser-Busch Natural Resources Building Columbia Missouri 65211 USA; Peñuelas, Josep [CREAF, Cerdanyola del Vallès 08193 Catalonia Spain; Global Ecology Unit CREAF-CSIC-UAB, CSIC, Cerdanyola del Vallès 08193 Catalonia Spain; Potosnak, Mark J. [Department of Environmental Science and Studies, DePaul University, McGowan South, Suite 203 Chicago Illinois 60604 USA; Seco, Roger [Department of Earth System Science, University of California Irvine, 3200 Croul Hall Street Irvine California 92697 USA

    2016-11-01

    Stress-induced emissions of biogenic volatile organic compounds (VOCs) from terrestrial ecosystems may be one of the dominant sources of VOC emissions world-wide. Understanding the ecosystem stress response could reveal how ecosystems will respond and adapt to climate change and, in turn, quantify changes in the atmospheric burden of VOC oxidants and secondary organic aerosols. Here we argue, based on preliminary evidence from several opportunistic measurement sources, that chemical signatures of stress can be identified and quantified at the ecosystem scale. We also outline future endeavors that we see as next steps toward uncovering quantitative signatures of stress, including new advances in both VOC data collection and analysis of "big data."

  2. A framework for quantifying net benefits of alternative prognostic models

    DEFF Research Database (Denmark)

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple...

  3. Quantifying the value of SHM for wind turbine blades

    DEFF Research Database (Denmark)

    Nielsen, Jannie Sønderkær; Tcherniak, Dmitri; Ulriksen, Martin Dalgaard

    2018-01-01

    is developed to quantify the value of SHM for an 8 MW OWT using a decision framework based on Bayesian pre-posterior decision analysis. Deterioration is modelled as a Markov chain developed based on data, and the costs are obtained from a service provider for OWTs. Discrete Bayesian networks are used......In this paper, the value of information (VoI) from structural health monitoring (SHM) is quantified in a case study for offshore wind turbines (OWTs). This is done by combining data from an operating turbine equipped with a blade SHM system with cost information from a service provider for OWTs...... is compared to a statistical model from the healthy state using a metric that yields a damage index representing the structural integrity. As the damage was introduced artificially, it is possible to statistically estimate the confusion matrix corresponding to different threshold values, and here we opt...

  4. Quantifiers for randomness of chaotic pseudo-random number generators.

    Science.gov (United States)

    De Micco, L; Larrondo, H A; Plastino, A; Rosso, O A

    2009-08-28

    We deal with randomness quantifiers and concentrate on their ability to discern the hallmark of chaos in time series used in connection with pseudo-random number generators (PRNGs). Workers in the field are motivated to use chaotic maps for generating PRNGs because of the simplicity of their implementation. Although there exist very efficient general-purpose benchmarks for testing PRNGs, we feel that the analysis provided here sheds additional didactic light on the importance of the main statistical characteristics of a chaotic map, namely (i) its invariant measure and (ii) the mixing constant. This is of help in answering two questions that arise in applications: (i) which is the best PRNG among the available ones? and (ii) if a given PRNG turns out not to be good enough and a randomization procedure must still be applied to it, which is the best applicable randomization procedure? Our answer provides a comparative analysis of several quantifiers advanced in the extant literature.
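
    As a toy illustration of the kind of generator discussed above, the sketch below builds a bit stream from the fully chaotic logistic map; the choice of map, the discarded transient and the threshold bit-extraction rule are illustrative assumptions and are not the specific PRNGs analysed by the authors.

```python
def logistic_prng_bits(seed=0.123456, n_bits=1000, transient=1000):
    """Toy pseudo-random bit stream from the chaotic logistic map x -> 4x(1-x).

    seed      -- initial condition in (0, 1), avoiding the fixed points 0 and 0.75
    transient -- iterations discarded so the orbit settles onto the attractor
    Practical chaotic PRNGs add further randomization (skipping, mixing, etc.).
    """
    x = seed
    for _ in range(transient):
        x = 4.0 * x * (1.0 - x)
    bits = []
    for _ in range(n_bits):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)   # threshold extraction
    return bits
```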

  5. Resolving and quantifying overlapped chromatographic bands by transmutation

    Science.gov (United States)

    Malinowski

    2000-09-15

    A new chemometric technique called "transmutation" is developed for the purpose of sharpening overlapped chromatographic bands in order to quantify the components. The "transmutation function" is created from the chromatogram of the pure component of interest, obtained from the same instrument, operating under the same experimental conditions used to record the unresolved chromatogram of the sample mixture. The method is used to quantify mixtures containing toluene, ethylbenzene, m-xylene, naphthalene, and biphenyl from unresolved chromatograms previously reported. The results are compared to those obtained using window factor analysis, rank annihilation factor analysis, and matrix regression analysis. Unlike the latter methods, the transmutation method is not restricted to two-dimensional arrays of data, such as those obtained from HPLC/DAD, but is also applicable to chromatograms obtained from single detector experiments. Limitations of the method are discussed.

  6. Pitfalls in quantifying species turnover: the residency effect

    Directory of Open Access Journals (Sweden)

    Kevin Chase Burns

    2014-03-01

    Full Text Available The composition of ecological communities changes continuously through time and space. Understanding this turnover in species composition is a central goal in biogeography, but quantifying species turnover can be problematic. Here, I describe an underappreciated source of bias in quantifying species turnover, namely ‘the residency effect’, which occurs when the contiguous distributions of species across sampling domains are small relative to census intervals. I present the results of a simulation model that illustrates the problem theoretically and then I demonstrate the problem empirically using a long-term dataset of plant species turnover on islands. Results from both exercises indicate that empirical estimates of species turnover may be susceptible to significant observer bias, which may potentially cloud a better understanding of how the composition of ecological communities changes through time.

  7. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

    This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  8. Quantifying resilience for resilience engineering of socio technical systems

    OpenAIRE

    Häring, Ivo; Ebenhöch, Stefan; Stolz, Alexander

    2016-01-01

    Resilience engineering can be defined to comprise originally technical, engineering and natural science approaches to improve the resilience and sustainability of socio technical cyber-physical systems of various complexities with respect to disruptive events. It is argued how this emerging interdisciplinary technical and societal science approach may contribute to civil and societal security research. In this context, the article lists expected benefits of quantifying resilience. Along the r...

  9. Quantifying the Lateral Bracing Provided by Standing Seam Roof Systems

    OpenAIRE

    Sorensen, Taylor J.

    2016-01-01

    One of the major challenges of engineering is finding the proper balance between economical and safe. Currently engineers at Nucor Corporation have ignored the additional lateral bracing provided by standing seam roofing systems to joists because of the lack of methods available to quantify the amount of bracing provided. Based on the results of testing performed herein, this bracing is significant, potentially resulting in excessively conservative designs and unnecessary costs. This proje...

  10. A framework for quantifying net benefits of alternative prognostic models

    OpenAIRE

    Rapsomaniki, E.; White, I.R.; Wood, A.M.; Thompson, S.G.; Ford, I.

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measure...

  11. PREDICTION OF SURGICAL TREATMENT WITH DIFFUSE PERITONITIS QUANTIFYING RISK FACTORS

    Directory of Open Access Journals (Sweden)

    І. К. Churpiy

    2012-11-01

    Full Text Available We explored the possibility of quantitative assessment of risk factors for complications in the treatment of diffuse peritonitis. Fifty-three groups of features that are important in predicting the course of diffuse peritonitis were highlighted. The proposed scheme for defining the risk of the clinical course of diffuse peritonitis can quantify the initial severity of the patients' condition and in most cases correctly predict the results of treatment of the disease.

  12. Simulating non-prenex cuts in quantified propositional calculus

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil; Nguyen, P.

    2011-01-01

    Roč. 57, č. 5 (2011), s. 524-532 ISSN 0942-5616 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : proof complexity * prenex cuts * quantified propositional calculus Subject RIV: BA - General Mathematics Impact factor: 0.496, year: 2011 http://onlinelibrary.wiley.com/doi/10.1002/malq.201020093/abstract

  13. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  14. Parkinson's Law Quantified: Three Investigations on Bureaucratic Inefficiency

    OpenAIRE

    Klimek, Peter; Hanel, Rudolf; Thurner, Stefan

    2008-01-01

    We formulate three famous, descriptive essays of C.N. Parkinson on bureaucratic inefficiency in a quantifiable and dynamical socio-physical framework. In the first model we show how the use of recent opinion formation models for small groups can be used to understand Parkinson's observation that decision making bodies such as cabinets or boards become highly inefficient once their size exceeds a critical 'Coefficient of Inefficiency', typically around 20. A second observation of Parkinson - w...

  15. The quantified self a sociology of self-tracking

    CERN Document Server

    Lupton, Deborah

    2016-01-01

    With the advent of digital devices and software, self-tracking practices have gained new adherents and have spread into a wide array of social domains. The Quantified Self movement has emerged to promote 'self knowledge through numbers'. In this ground-breaking book, Deborah Lupton critically analyses the social, cultural and political dimensions of contemporary self-tracking and identifies the concepts of selfhood, human embodiment and the value of data that underpin them.

  16. Quantifying the ice-albedo feedback through decoupling

    Science.gov (United States)

    Kravitz, B.; Rasch, P. J.

    2017-12-01

    The ice-albedo feedback involves numerous individual components, whereby warming induces sea ice melt, inducing reduced surface albedo, inducing increased surface shortwave absorption, causing further warming. Here we attempt to quantify the sea ice albedo feedback using an analogue of the "partial radiative perturbation" method, but where the governing mechanisms are directly decoupled in a climate model. As an example, we can isolate the insulating effects of sea ice on surface energy and moisture fluxes by allowing sea ice thickness to change but fixing Arctic surface albedo, or vice versa. Here we present results from such idealized simulations using the Community Earth System Model in which individual components are successively fixed, effectively decoupling the ice-albedo feedback loop. We isolate the different components of this feedback, including temperature change, sea ice extent/thickness, and air-sea exchange of heat and moisture. We explore the interactions between these different components, as well as the strengths of the total feedback in the decoupled feedback loop, to quantify contributions from individual pieces. We also quantify the non-additivity of the effects of the components as a means of investigating the dominant sources of nonlinearity in the ice-albedo feedback.

  17. A novel approach to quantify cybersecurity for electric power systems

    Science.gov (United States)

    Kaster, Paul R., Jr.

    Electric Power grid cybersecurity is a topic gaining increased attention in academia, industry, and government circles, yet a method of quantifying and evaluating a system's security is not yet commonly accepted. In order to be useful, a quantification scheme must be able to accurately reflect the degree to which a system is secure, simply determine the level of security in a system using real-world values, model a wide variety of attacker capabilities, be useful for planning and evaluation, allow a system owner to publish information without compromising the security of the system, and compare relative levels of security between systems. Published attempts at quantifying cybersecurity fail at one or more of these criteria. This document proposes a new method of quantifying cybersecurity that meets those objectives. This dissertation evaluates the current state of cybersecurity research, discusses the criteria mentioned previously, proposes a new quantification scheme, presents an innovative method of modeling cyber attacks, demonstrates that the proposed quantification methodology meets the evaluation criteria, and proposes a line of research for future efforts.

  18. Information criteria for quantifying loss of reversibility in parallelized KMC

    Energy Technology Data Exchange (ETDEWEB)

    Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu

    2017-01-01

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
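
    For orientation, the entropy production rate referred to above is commonly defined in non-equilibrium statistical mechanics as the relative entropy per unit time between the forward and time-reversed path measures; the expression below is that textbook form and is given only as a sketch, since the abstract does not spell out the authors' exact observable and estimator.

    \sigma = \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[ \ln \frac{P(\omega_{0:T})}{P(\Theta\,\omega_{0:T})} \right]

    where \Theta denotes time reversal of the path \omega_{0:T}; \sigma vanishes for reversible (detailed-balance) dynamics, so a positive value quantifies the loss of reversibility introduced by the parallelization.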

  19. Quantifying Human Performance of a Dynamic Military Target Detection Task: An Application of the Theory of Signal Detection.

    Science.gov (United States)

    1995-06-01

    applied to analyze numerous experimental tasks (Macmillan and Creelman, 1991). One of these tasks, target detection, is the subject of this research. In... between each associated pair of false alarm rate and hit rate z-scores is d' for the bias level associated with the pairing (Macmillan and Creelman, 1991)... unequal variance in normal distributions (Macmillan and Creelman, 1991). It is described in detail for the interested reader by Green and...

  20. Measuring center of pressure signals to quantify human balance using multivariate multiscale entropy by designing a force platform.

    Science.gov (United States)

    Huang, Cheng-Wei; Sue, Pei-Der; Abbod, Maysam F; Jiang, Bernard C; Shieh, Jiann-Shing

    2013-08-08

    To assess the improvement of human body balance, a low cost and portable measuring device of center of pressure (COP), known as center of pressure and complexity monitoring system (CPCMS), has been developed for data logging and analysis. In order to prove that the system can estimate the different magnitude of different sways in comparison with the commercial Advanced Mechanical Technology Incorporation (AMTI) system, four sway tests have been developed (i.e., eyes open, eyes closed, eyes open with water pad, and eyes closed with water pad) to produce different sway displacements. Firstly, static and dynamic tests were conducted to investigate the feasibility of the system. Then, correlation tests of the CPCMS and AMTI systems have been compared with four sway tests. The results are within the acceptable range. Furthermore, multivariate empirical mode decomposition (MEMD) and enhanced multivariate multiscale entropy (MMSE) analysis methods have been used to analyze COP data reported by the CPCMS and compare it with the AMTI system. The improvements of the CPCMS are 35% to 70% (open eyes test) and 60% to 70% (eyes closed test) with and without water pad. The AMTI system has shown an improvement of 40% to 80% (open eyes test) and 65% to 75% (closed eyes test). The results indicate that the CPCMS system can achieve similar results to the commercial product so it can determine the balance.
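
    As a rough sketch of the multiscale-entropy machinery behind the MMSE analysis mentioned above, the code below coarse-grains a single COP trace by averaging non-overlapping windows and computes the sample entropy at each scale; the parameters m and r are conventional illustrative choices, and the multivariate, MEMD-enhanced variant used in the study is not reproduced here.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average a 1D signal over non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy SampEn(m, r); r is a fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def match_count(length):
        # use the same n - m starting points for both template lengths
        t = np.array([x[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (d <= tol).sum() - len(t)          # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

# e.g. an entropy-vs-scale curve for a COP trace:
# mse_curve = [sample_entropy(coarse_grain(cop, s)) for s in range(1, 11)]
```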

  1. Measuring Center of Pressure Signals to Quantify Human Balance Using Multivariate Multiscale Entropy by Designing a Force Platform

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Huang

    2013-08-01

    Full Text Available To assess the improvement of human body balance, a low cost and portable measuring device of center of pressure (COP), known as center of pressure and complexity monitoring system (CPCMS), has been developed for data logging and analysis. In order to prove that the system can estimate the different magnitude of different sways in comparison with the commercial Advanced Mechanical Technology Incorporation (AMTI) system, four sway tests have been developed (i.e., eyes open, eyes closed, eyes open with water pad, and eyes closed with water pad) to produce different sway displacements. Firstly, static and dynamic tests were conducted to investigate the feasibility of the system. Then, correlation tests of the CPCMS and AMTI systems have been compared with four sway tests. The results are within the acceptable range. Furthermore, multivariate empirical mode decomposition (MEMD) and enhanced multivariate multiscale entropy (MMSE) analysis methods have been used to analyze COP data reported by the CPCMS and compare it with the AMTI system. The improvements of the CPCMS are 35% to 70% (open eyes test) and 60% to 70% (eyes closed test) with and without water pad. The AMTI system has shown an improvement of 40% to 80% (open eyes test) and 65% to 75% (closed eyes test). The results indicate that the CPCMS system can achieve similar results to the commercial product so it can determine the balance.

  2. Signal integrity characterization techniques

    CERN Document Server

    Bogatin, Eric

    2009-01-01

    "Signal Integrity Characterization Techniques" addresses the gap between traditional digital and microwave curricula all while focusing on a practical and intuitive understanding of signal integrity effects within the data transmission channel. High-speed interconnects such as connectors, PCBs, cables, IC packages, and backplanes are critical elements of differential channels that must be designed using today's most powerful analysis and characterization tools.Both measurements and simulation must be done on the device under test, and both activities must yield data that correlates with each other. Most of this book focuses on real-world applications of signal integrity measurements - from backplane for design challenges to error correction techniques to jitter measurement technologies. The authors' approach wisely addresses some of these new high-speed technologies, and it also provides valuable insight into its future direction and will teach the reader valuable lessons on the industry.

  3. Quantum signaling game

    International Nuclear Information System (INIS)

    Frackiewicz, Piotr

    2014-01-01

    We present a quantum approach to a signaling game; a special kind of extensive game of incomplete information. Our model is based on quantum schemes for games in strategic form where players perform unitary operators on their own qubits of some fixed initial state and the payoff function is given by a measurement on the resulting final state. We show that the quantum game induced by our scheme coincides with a signaling game as a special case and outputs nonclassical results in general. As an example, we consider a quantum extension of the signaling game in which the chance move is a three-parameter unitary operator whereas the players' actions are equivalent to classical ones. In this case, we study the game in terms of Nash equilibria and refine the pure Nash equilibria adapting to the quantum game the notion of a weak perfect Bayesian equilibrium. (paper)

  4. PKD signaling and pancreatitis

    Science.gov (United States)

    Yuan, Jingzhen; Pandol, Stephen J.

    2016-01-01

    Background Acute pancreatitis is a serious medical disorder with no current therapies directed to the molecular pathogenesis of the disorder. Inflammation, inappropriate intracellular activation of digestive enzymes, and parenchymal acinar cell death by necrosis are the critical pathophysiologic processes of acute pancreatitis. Thus, it is necessary to elucidate the key molecular signals that mediate these pathobiologic processes and develop new therapeutic strategies to attenuate the appropriate signaling pathways in order to improve outcomes for this disease. A novel serine/threonine protein kinase D (PKD) family has emerged as key participants in signal transduction, and this family is increasingly being implicated in the regulation of multiple cellular functions and diseases. Methods This review summarizes recent findings of our group and others regarding the signaling pathway and the biological roles of the PKD family in pancreatic acinar cells. In particular, we highlight our studies of the functions of PKD in several key pathobiologic processes associated with acute pancreatitis in experimental models. Results Our findings reveal that PKD signaling is required for NF-κB activation/inflammation, intracellular zymogen activation, and acinar cell necrosis in rodent experimental pancreatitis. Novel small-molecule PKD inhibitors attenuate the severity of pancreatitis in both in vitro and in vivo experimental models. Further, this review emphasizes our latest advances in the therapeutic application of PKD inhibitors to experimental pancreatitis after the initiation of pancreatitis. Conclusions These novel findings suggest that PKD signaling is a necessary modulator in key initiating pathobiologic processes of pancreatitis, and that it constitutes a novel therapeutic target for treatments of this disorder. PMID:26879861

  5. Signal processing in microdosimetry

    International Nuclear Information System (INIS)

    Arbel, A.

    1984-01-01

    Signals occurring in microdosimetric measurements cover a dynamic range of 100 dB at a counting rate which normally stays below 10^4 but could increase significantly in case of an accident. The need for high resolution at low energies, non-linear signal processing to accommodate the specified dynamic range, easy calibration and thermal stability are conflicting requirements which pose formidable design problems. These problems are reviewed, and a practical approach to their solution is given employing a single processing channel. (author)

  6. Understanding signal integrity

    CERN Document Server

    Thierauf, Stephen C

    2010-01-01

    This unique book provides you with practical guidance on understanding and interpreting signal integrity (SI) performance to help you with your challenging circuit board design projects. You find high-level discussions of important SI concepts presented in a clear and easily accessible format, including question and answer sections and bulleted lists. This valuable resource features rules of thumb and simple equations to help you make estimates of critical signal integrity parameters without using circuit simulators or CAD (computer-aided design). The book is supported with over 120 illustrations...

  7. Electronic signal conditioning

    CERN Document Server

    NEWBY, BRUCE

    1994-01-01

    At technician level, brief references to signal conditioning crop up in a fragmented way in various textbooks, but there has been no single textbook, until now! More advanced texts do exist but they are more mathematical and presuppose a higher level of understanding of electronics and statistics. Electronic Signal Conditioning is designed for HNC/D students and City & Guilds Electronics Servicing 2240 Parts 2 & 3. It will also be useful for BTEC National, Advanced GNVQ, A-level electronics and introductory courses at degree level.

  8. TOR signalling in plants.

    Science.gov (United States)

    Rexin, Daniel; Meyer, Christian; Robaglia, Christophe; Veit, Bruce

    2015-08-15

    Although the eukaryotic TOR (target of rapamycin) kinase signalling pathway has emerged as a key player for integrating nutrient-, energy- and stress-related cues with growth and metabolic outputs, relatively little is known of how this ancient regulatory mechanism has been adapted in higher plants. Drawing comparisons with the substantial knowledge base around TOR kinase signalling in fungal and animal systems, functional aspects of this pathway in plants are reviewed. Both conserved and divergent elements are discussed in relation to unique aspects associated with an autotrophic mode of nutrition and adaptive strategies for multicellular development exhibited by plants. © 2015 Authors; published by Portland Press Limited.

  9. Genomic signal processing

    CERN Document Server

    Shmulevich, Ilya

    2007-01-01

    Genomic signal processing (GSP) can be defined as the analysis, processing, and use of genomic signals to gain biological knowledge, and the translation of that knowledge into systems-based applications that can be used to diagnose and treat genetic diseases. Situated at the crossroads of engineering, biology, mathematics, statistics, and computer science, GSP requires the development of both nonlinear dynamical models that adequately represent genomic regulation, and diagnostic and therapeutic tools based on these models. This book facilitates these developments by providing rigorous mathema

  10. MICROSLEEPS AND THEIR DETECTION FROM THE BIOLOGICAL SIGNALS

    Directory of Open Access Journals (Sweden)

    Martin Holub

    2017-12-01

    Full Text Available Microsleeps (MS) are a frequently discussed topic due to their fatal consequences. Their detection is needed in sleep laboratories, where it provides an option for quantifying the level of sleep deprivation and for objectively evaluating subjective sleepiness. Many studies address this topic for automotive use, with the aim of designing fatigue countermeasure devices. We reviewed recent approaches to the development of automated MS detection methods and compiled an overview of several MS detection approaches based on the measurement of biological signals. We also summarized the changes in EEG, EOG and ECG signals that have been published over the last few years. Reproducible changes across the entire EEG spectrum, primarily increased delta and theta activity, were observed during the transition to fatigue. Changes in blinking rate and a reduction of eye movements were observed during fatigue tasks. MS correspond to variations in the autonomic regulation of cardiovascular function, which can be quantified by HRV parameters; a decrease in HR, VLF, and LF/HF before falling asleep was revealed. The EEG signal, especially its slow-wave activity, is considered the most predictive and reliable indicator of the level of alertness. Although detection from the EEG signal is the most common method, EOG-based approaches can also be very efficient and more driver-friendly. In addition, signal processing in the time domain can improve the detection accuracy of short events like MS.
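
    The LF/HF parameter cited above is conventionally computed from the spectrum of the RR-interval series. The sketch below shows one common recipe (cubic resampling of the tachogram followed by Welch estimation over the standard LF and HF bands); the resampling rate and band limits are the usual conventions, not values taken from the studies reviewed.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from RR intervals (milliseconds), using the standard HRV bands.

    The irregularly sampled RR series is resampled to an even grid at `fs` Hz
    before Welch spectral estimation (LF: 0.04-0.15 Hz, HF: 0.15-0.40 Hz).
    """
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                       # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tachogram = interp1d(t, rr, kind="cubic")(grid)
    f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    return np.trapz(pxx[lf_band], f[lf_band]) / np.trapz(pxx[hf_band], f[hf_band])
```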

  11. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic, variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, means that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  12. Quantified pH imaging with hyperpolarized (13) C-bicarbonate.

    Science.gov (United States)

    Scholz, David Johannes; Janich, Martin A; Köllisch, Ulrich; Schulte, Rolf F; Ardenkjaer-Larsen, Jan H; Frank, Annette; Haase, Axel; Schwaiger, Markus; Menzel, Marion I

    2015-06-01

    Because pH plays a crucial role in several diseases, it is desirable to measure pH in vivo noninvasively and in a spatially localized manner. Spatial maps of pH were quantified in vitro, with a focus on method-based errors, and applied in vivo. In vitro and in vivo (13) C mapping were performed for various flip angles for bicarbonate (BiC) and CO2 with spectral-spatial excitation and spiral readout in healthy Lewis rats in five slices. Acute subcutaneous sterile inflammation was induced with Concanavalin A in the right leg of Buffalo rats. pH and proton images were measured 2 h after induction. After optimizing the signal to noise ratio of the hyperpolarized (13) C-bicarbonate, error estimation of the spectral-spatial excited spectrum reveals that the method covers the biologically relevant pH range of 6 to 8 with low pH error (< 0.2). Quantification of pH maps shows negligible impact of the residual bicarbonate signal. pH maps reflect the induction of acute metabolic alkalosis. Inflamed, infected regions exhibit lower pH. Hyperpolarized (13) C-bicarbonate pH mapping was shown to be sensitive in the biologically relevant pH range. The mapping of pH was applied to healthy in vivo organs and interpreted within inflammation and acute metabolic alkalosis models. © 2014 Wiley Periodicals, Inc.
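
    For context, pH mapping with hyperpolarized (13) C-bicarbonate conventionally rests on the Henderson-Hasselbalch relation between the bicarbonate and CO2 signals in each voxel; the relation below is that standard form, and the pKa value is the commonly cited in vivo figure rather than one quoted in this abstract.

    \mathrm{pH} = \mathrm{p}K_a + \log_{10}\!\left(\frac{S_{\mathrm{HCO}_3^-}}{S_{\mathrm{CO}_2}}\right), \qquad \mathrm{p}K_a \approx 6.17

    where S denotes the measured (13) C signal of each species, so the pH map follows directly from the voxel-wise signal ratio.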

  13. In vivo biochemistry: quantifying ion and metabolite levels in individual cells or cultures of yeast.

    Science.gov (United States)

    Bermejo, Clara; Ewald, Jennifer C; Lanquar, Viviane; Jones, Alexander M; Frommer, Wolf B

    2011-08-15

    Over the past decade, we have learned that cellular processes, including signalling and metabolism, are highly compartmentalized, and that relevant changes in metabolic state can occur at sub-second timescales. Moreover, we have learned that individual cells in populations, or as part of a tissue, exist in different states. If we want to understand metabolic processes and signalling better, it will be necessary to measure biochemical and biophysical responses of individual cells with high temporal and spatial resolution. Fluorescence imaging has revolutionized all aspects of biology since it has the potential to provide information on the cellular and subcellular distribution of ions and metabolites with sub-second time resolution. In the present review we summarize recent progress in quantifying ions and metabolites in populations of yeast cells as well as in individual yeast cells with the help of quantitative fluorescent indicators, namely FRET metabolite sensors. We discuss the opportunities and potential pitfalls and the controls that help preclude misinterpretation. © The Authors Journal compilation © 2011 Biochemical Society

  14. Diagnostic value of MRS-quantified brain tissue lactate level in identifying children with mitochondrial disorders

    Energy Technology Data Exchange (ETDEWEB)

    Lunsing, Roelineke J.; Strating, Kim [University Medical Centre Groningen, University of Groningen, Department of Child Neurology, Groningen (Netherlands); Koning, Tom J. de [University Medical Centre Groningen, University of Groningen, Department of Pediatric Metabolic Diseases, Groningen (Netherlands); Sijens, Paul E. [University Medical Centre Groningen, University of Groningen, Department of Radiology, Groningen (Netherlands)

    2017-03-15

    Magnetic resonance spectroscopy (MRS) of children with or without neurometabolic disease is used for the first time for quantitative assessment of brain tissue lactate signals, to elaborate on previous suggestions of MRS-detected lactate as a marker of mitochondrial disease. Multivoxel MRS of a transverse plane of brain tissue cranial to the ventricles was performed in 88 children suspected of having neurometabolic disease, divided into 'definite' (n = 17, ≥1 major criteria), 'probable' (n = 10, ≥2 minor criteria), 'possible' (n = 17, 1 minor criterion) and 'unlikely' mitochondrial disease (n = 44, none of the criteria). Lactate levels, expressed in standardized arbitrary units or relative to creatine, were derived from summed signals from all voxels. Ten 'unlikely' children with a normal neurological exam served as the MRS reference subgroup. For 61 of 88 children, CSF lactate values were obtained. MRS lactate level (>12 arbitrary units) and the lactate-to-creatine ratio (L/Cr >0.22) differed significantly between the definite and the unlikely group (p = 0.015 and p = 0.001, respectively). MRS L/Cr also differentiated between the probable and the MRS reference subgroup (p = 0.03). No significant group differences were found for CSF lactate. MRS-quantified brain tissue lactate levels can serve as diagnostic marker for identifying mitochondrial disease in children. (orig.)

  15. Homo-FRET imaging as a tool to quantify protein and lipid clustering.

    Science.gov (United States)

    Bader, Arjen N; Hoetzl, Sandra; Hofman, Erik G; Voortman, Jarno; van Bergen en Henegouwen, Paul M P; van Meer, Gerrit; Gerritsen, Hans C

    2011-02-25

    Homo-FRET, Förster resonance energy transfer between identical fluorophores, can be conveniently measured by observing its effect on the fluorescence anisotropy. This review aims to summarize the possibilities of fluorescence anisotropy imaging techniques to investigate clustering of identical proteins and lipids. Homo-FRET imaging has the ability to determine distances between fluorophores. In addition it can be employed to quantify cluster sizes as well as cluster size distributions. The interpretation of homo-FRET signals is complicated by the fact that both the mutual orientations of the fluorophores and the number of fluorophores per cluster affect the fluorescence anisotropy in a similar way. The properties of the fluorescence probes are very important. Taking these properties into account is critical for the correct interpretation of homo-FRET signals in protein- and lipid-clustering studies. This is exemplified by studies on the clustering of the lipid raft markers GPI and K-ras, as well as for EGF receptor clustering in the plasma membrane. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Diagnostic value of MRS-quantified brain tissue lactate level in identifying children with mitochondrial disorders

    International Nuclear Information System (INIS)

    Lunsing, Roelineke J.; Strating, Kim; Koning, Tom J. de; Sijens, Paul E.

    2017-01-01

    Magnetic resonance spectroscopy (MRS) of children with or without neurometabolic disease is used for the first time for quantitative assessment of brain tissue lactate signals, to elaborate on previous suggestions of MRS-detected lactate as a marker of mitochondrial disease. Multivoxel MRS of a transverse plane of brain tissue cranial to the ventricles was performed in 88 children suspected of having neurometabolic disease, divided into 'definite' (n = 17, ≥1 major criteria), 'probable' (n = 10, ≥2 minor criteria), 'possible' (n = 17, 1 minor criterion) and 'unlikely' mitochondrial disease (n = 44, none of the criteria). Lactate levels, expressed in standardized arbitrary units or relative to creatine, were derived from summed signals from all voxels. Ten 'unlikely' children with a normal neurological exam served as the MRS reference subgroup. For 61 of 88 children, CSF lactate values were obtained. MRS lactate level (>12 arbitrary units) and the lactate-to-creatine ratio (L/Cr >0.22) differed significantly between the definite and the unlikely group (p = 0.015 and p = 0.001, respectively). MRS L/Cr also differentiated between the probable and the MRS reference subgroup (p = 0.03). No significant group differences were found for CSF lactate. MRS-quantified brain tissue lactate levels can serve as diagnostic marker for identifying mitochondrial disease in children. (orig.)

  17. Quantifying the Effect of the Principal-Agent Problem on US Residential Energy Use

    Energy Technology Data Exchange (ETDEWEB)

    Murtishaw, Scott; Sathaye, Jayant

    2006-08-12

    The International Energy Agency (IEA) initiated and coordinated this project to investigate the effects of market failures in the end-use of energy that may isolate some markets or portions thereof from energy price signals in five member countries. Quantifying the amount of energy associated with market failures helps to demonstrate the significance of energy efficiency policies beyond price signals. In this report we investigate the magnitude of the principal-agent (PA) problem affecting four of the major energy end uses in the U.S. residential sector: refrigeration, water heating, space heating, and lighting. Using data from the American Housing Survey, we develop a novel approach to classifying households into a PA matrix for each end use. End use energy values differentiated by housing unit type from the Residential Energy Consumption Survey were used to estimate the final and primary energy use associated with the PA problem. We find that the 2003 associated site energy use from these four end uses totaled over 3,400 trillion Btu, equal to 35 percent of the site energy consumed by the residential sector.

  18. Information theory based approaches to cellular signaling.

    Science.gov (United States)

    Waltermann, Christian; Klipp, Edda

    2011-10-01

    Cells interact with their environment and they have to react adequately to internal and external changes such as changes in nutrient composition, physical properties like temperature or osmolarity, and other stresses. More specifically, they must be able to evaluate whether the external change is significant or just in the range of noise. Based on multiple external parameters they have to compute an optimal response. Cellular signaling pathways are considered as the major means of information perception and transmission in cells. Here, we review different attempts to quantify information processing on the level of individual cells. We refer to Shannon entropy, mutual information, and informal measures of signaling pathway cross-talk and specificity. Information theory in systems biology has been successfully applied to identification of optimal pathway structures, mutual information and entropy as system response in sensitivity analysis, and quantification of input and output information. While the study of information transmission within the framework of information theory in technical systems is an advanced field with high impact in engineering and telecommunication, its application to biological objects and processes is still restricted to specific fields such as neuroscience, structural and molecular biology. However, in systems biology, which deals with a holistic understanding of biochemical systems and cellular signaling, a number of examples of the application of information theory have only recently emerged. This article is part of a Special Issue entitled Systems Biology of Microorganisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
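
    As a rough illustration of the two-stage idea described above (lasso to select a sparse support, then least squares on that support), the following Python sketch fits a VAR(p) model one receiver channel at a time. It is not the authors' LASSLE implementation; the function name, the default lag order and the regularization strength are illustrative choices, and the data are assumed to be mean-centered.

        # Sketch of a "lasso then least squares" VAR fit, one channel at a time.
        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        def fit_sparse_var(X, p=2, alpha=0.05):
            """X: (T, C) multichannel signal. Returns coefficients of shape (C, C*p)."""
            T, C = X.shape
            # Lagged design matrix: row t holds the channels at times t-1 .. t-p.
            Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])   # (T-p, C*p)
            Y = X[p:]                                                    # (T-p, C)
            coefs = np.zeros((C, C * p))
            for c in range(C):
                lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(Z, Y[:, c])
                support = np.flatnonzero(lasso.coef_)        # sparsity pattern from the lasso
                if support.size:                             # refit the selected lags by OLS
                    ls = LinearRegression(fit_intercept=False).fit(Z[:, support], Y[:, c])
                    coefs[c, support] = ls.coef_
            return coefs

    A connectivity measure such as partial directed coherence would then be computed from the frequency response of the fitted coefficient matrices; that step is omitted here.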

  20. Television picture signal processing

    NARCIS (Netherlands)

    1998-01-01

    Field or frame memories are often used in television receivers for video signal processing functions, such as noise reduction and/or flicker reduction. Television receivers also have graphic features such as teletext, menu-driven control systems, multilingual subtitling, an electronic TV-Guide, etc.

  1. Signals: Applying Academic Analytics

    Science.gov (United States)

    Arnold, Kimberly E.

    2010-01-01

    Academic analytics helps address the public's desire for institutional accountability with regard to student success, given the widespread concern over the cost of higher education and the difficult economic and budgetary conditions prevailing worldwide. Purdue University's Signals project applies the principles of analytics widely used in…

  2. Communication Signals in Lizards.

    Science.gov (United States)

    Carpenter, Charles C.

    1983-01-01

    Discusses mechanisms and functional intent of visual communication signals in iguanid/agamid lizards. Demonstrated that lizards communicate with each other by using pushups and head nods and that each species does this in its own way, conveying different types of information. (JN)

  3. Modeling binaural signal detection

    NARCIS (Netherlands)

    Breebaart, D.J.

    2001-01-01

    With the advent of multimedia technology and powerful signal processing systems, audio processing and reproduction has gained renewed interest. Examples of products that have been developed are audio coding algorithms to efficiently store and transmit music and speech, or audio reproduction systems

  4. Quantum cloning and signaling

    International Nuclear Information System (INIS)

    Simon, C.; Weihs, G.; Zeilinger, A.

    1999-01-01

    We discuss the close connections between cloning of quantum states and superluminal signaling. We present an optimal universal cloning machine based on stimulated emission recently proposed by the authors. As an instructive example, we show how a scheme for superluminal communication based on this cloning machine fails. (Authors)

  5. "Utilizing" signal detection theory.

    Science.gov (United States)

    Lynn, Spencer K; Barrett, Lisa Feldman

    2014-09-01

    What do inferring what a person is thinking or feeling, judging a defendant's guilt, and navigating a dimly lit room have in common? They involve perceptual uncertainty (e.g., a scowling face might indicate anger or concentration, for which different responses are appropriate) and behavioral risk (e.g., a cost to making the wrong response). Signal detection theory describes these types of decisions. In this tutorial, we show how incorporating the economic concept of utility allows signal detection theory to serve as a model of optimal decision making, going beyond its common use as an analytic method. This utility approach to signal detection theory clarifies otherwise enigmatic influences of perceptual uncertainty on measures of decision-making performance (accuracy and optimality) and on behavior (an inverse relationship between bias magnitude and sensitivity optimizes utility). A "utilized" signal detection theory offers the possibility of expanding the phenomena that can be understood within a decision-making framework. © The Author(s) 2014.
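
    As a minimal sketch of the utility idea sketched above, the snippet below computes the expected-utility-maximizing criterion for the standard equal-variance Gaussian model (noise at 0, signal at d', respond "signal" when the observation exceeds the criterion). The payoff values and the example numbers are arbitrary illustrations, not figures from the article.

        import numpy as np
        from scipy.stats import norm

        def optimal_criterion(d_prime, p_signal, u_hit, u_miss, u_cr, u_fa):
            """Criterion (in z units) that maximizes expected utility."""
            beta = ((1 - p_signal) / p_signal) * (u_cr - u_fa) / (u_hit - u_miss)
            return np.log(beta) / d_prime + d_prime / 2.0

        def expected_utility(criterion, d_prime, p_signal, u_hit, u_miss, u_cr, u_fa):
            p_hit = 1 - norm.cdf(criterion - d_prime)   # hit rate
            p_fa = 1 - norm.cdf(criterion)              # false-alarm rate
            return (p_signal * (p_hit * u_hit + (1 - p_hit) * u_miss)
                    + (1 - p_signal) * ((1 - p_fa) * u_cr + p_fa * u_fa))

        # Misses are costly here, so the optimal criterion shifts liberal
        # (below the neutral value d'/2 = 0.75).
        c = optimal_criterion(d_prime=1.5, p_signal=0.5, u_hit=1, u_miss=-5, u_cr=1, u_fa=-1)
        print(c, expected_utility(c, 1.5, 0.5, 1, -5, 1, -1))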

  6. Critical nodes in signalling pathways

    DEFF Research Database (Denmark)

    Taniguchi, Cullen M; Emanuelli, Brice; Kahn, C Ronald

    2006-01-01

    Physiologically important cell-signalling networks are complex, and contain several points of regulation, signal divergence and crosstalk with other signalling cascades. Here, we use the concept of 'critical nodes' to define the important junctions in these pathways and illustrate their unique role using insulin signalling as a model system.

  7. Quantifying the tibiofemoral joint space using x-ray tomosynthesis.

    Science.gov (United States)

    Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly

    2011-12-01

    Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D JSW map was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets. For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all
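
    The processing chain described above (1D gradient enhancement, polarity split, connected components) could be sketched along the following lines. This is only a loose illustration under my own assumptions, not the validated algorithm of the paper: labeling is done on the full mask rather than slice by slice, and the scales and threshold are placeholders.

        import numpy as np
        from scipy import ndimage

        def edge_maps(volume, axis=0, sigmas=(1.0, 2.0, 4.0), thresh=0.0):
            """Multiscale 1D derivative-of-Gaussian enhancement, split by edge polarity."""
            response = sum(ndimage.gaussian_filter1d(volume.astype(float), sigma=s,
                                                     axis=axis, order=1)
                           for s in sigmas)                     # combine kernel scales
            femoral_labels, _ = ndimage.label(response > thresh)   # one polarity
            tibial_labels, _ = ndimage.label(response < -thresh)   # opposite polarity
            return femoral_labels, tibial_labels

    The joint space width map would then be derived from the distance between the selected femoral and tibial edge surfaces, which is not shown here.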

  8. Radiology reports: a quantifiable and objective textual approach

    International Nuclear Information System (INIS)

    Scott, J.A.; Palmer, E.L.

    2015-01-01

    Aim: To examine the feasibility of using automated lexical analysis in conjunction with machine learning to create a means of objectively characterising radiology reports for quality improvement. Materials and methods: Twelve lexical parameters were quantified from the collected reports of four radiologists. These included the number of different words used, number of sentences, reading grade, readability, usage of the passive voice, and lexical metrics of concreteness, ambivalence, complexity, passivity, embellishment, communication and cognition. Each radiologist was statistically compared to the mean of the group for each parameter to determine outlying report characteristics. The reproducibility of these parameters in a given radiologist's reporting style was tested by using only these 12 parameters as input to a neural network designed to establish the authorship of 60 unknown reports. Results: Significant differences in report characteristics were observed between radiologists, quantifying and characterising deviations of individuals from the group reporting style. The 12 metrics employed in a neural network correctly identified the author in each of 60 unknown reports tested, indicating a robust parametric signature. Conclusion: Automated and quantifiable methods can be used to analyse reporting style and provide impartial and objective feedback as well as to detect and characterise significant differences from the group. The parameters examined are sufficiently specific to identify the authors of reports and can potentially be useful in quality improvement and residency training. - Highlights: • Radiology reports can be objectively studied based upon their lexical characteristics. • This analysis can help establish norms for reporting, resident training and authorship attribution. • This analysis can complement higher level subjective analysis in quality improvement efforts.

  9. Using nitrate to quantify quick flow in a karst aquifer

    Science.gov (United States)

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
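
    A two-endmember mixing calculation of this kind reduces to a single mass-balance expression; the sketch below uses the endmember nitrate concentrations quoted in the abstract, while the spring concentration passed in is a made-up example value.

        def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
            """Fraction of spring flow attributed to quick flow (0 to 1)."""
            return (c_aquifer - c_spring) / (c_aquifer - c_quick)

        print(quick_flow_fraction(c_spring=1.2))  # ~0.23, i.e. roughly 23% quick flow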

  10. A simple method for quantifying jump loads in volleyball athletes.

    Science.gov (United States)

    Charlton, Paula C; Kenneally-Dabrowski, Claire; Sheppard, Jeremy; Spratford, Wayne

    2017-03-01

    Evaluate the validity of a commercially available wearable device, the Vert, for measuring vertical displacement and jump count in volleyball athletes. Propose a potential method of quantifying external load during training and match play within this population. Validation study. The ability of the Vert device to measure vertical displacement in male, junior elite volleyball athletes was assessed against reference standard laboratory motion analysis. The ability of the Vert device to count jumps during training and match-play was assessed via comparison with retrospective video analysis to determine precision and recall. A method of quantifying external load, known as the load index (LdIx) algorithm was proposed using the product of the jump count and average kinetic energy. Correlation between two separate Vert devices and three-dimensional trajectory data were good to excellent for all jump types performed (r=0.83-0.97), with a mean bias of between 3.57-4.28cm. When matched against jumps identified through video analysis, the Vert demonstrated excellent precision (0.995-1.000) evidenced by a low number of false positives. The number of false negatives identified with the Vert was higher resulting in lower recall values (0.814-0.930). The Vert is a commercially available tool that has potential for measuring vertical displacement and jump count in elite junior volleyball athletes without the need for time-consuming analysis and bespoke software. Subsequently, allowing the collected data to better quantify load using the proposed algorithm (LdIx). Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
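
    The proposed load index is simply the jump count multiplied by the average kinetic energy per jump. The sketch below is my own toy version: it approximates each jump's take-off kinetic energy by the potential energy at peak height (m·g·h), which is an assumption on my part rather than the paper's exact definition, and the numbers are arbitrary examples.

        def load_index(jump_heights_m, body_mass_kg, g=9.81):
            """LdIx-style value: jump count times mean (approximate) jump energy in joules."""
            n = len(jump_heights_m)
            if n == 0:
                return 0.0
            mean_energy = sum(body_mass_kg * g * h for h in jump_heights_m) / n
            return n * mean_energy

        print(load_index([0.45, 0.50, 0.38], body_mass_kg=80))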

  11. A simplified score to quantify comorbidity in COPD.

    Directory of Open Access Journals (Sweden)

    Nirupama Putcha

    Full Text Available Comorbidities are common in COPD, but quantifying their burden is difficult. Currently there is a COPD-specific comorbidity index to predict mortality and another to predict general quality of life. We sought to develop and validate a COPD-specific comorbidity score that reflects comorbidity burden on patient-centered outcomes. Using the COPDGene study (GOLD II-IV COPD), we developed comorbidity scores to describe patient-centered outcomes employing three techniques: (1) simple count, (2) weighted score, and (3) weighted score based upon statistical selection procedure. We tested associations, area under the curve (AUC) and calibration statistics to validate scores internally with outcomes of respiratory disease-specific quality of life (St. George's Respiratory Questionnaire, SGRQ), six minute walk distance (6MWD), modified Medical Research Council (mMRC) dyspnea score and exacerbation risk, ultimately choosing one score for external validation in SPIROMICS. Associations between comorbidities and all outcomes were comparable across the three scores. All scores added predictive ability to models including age, gender, race, current smoking status, pack-years smoked and FEV1 (p<0.001 for all comparisons). Area under the curve (AUC) was similar between all three scores across outcomes: SGRQ (range 0·7624-0·7676), mMRC (0·7590-0·7644), 6MWD (0·7531-0·7560) and exacerbation risk (0·6831-0·6919). Because of similar performance, the comorbidity count was used for external validation. In the SPIROMICS cohort, the comorbidity count performed well to predict SGRQ (AUC 0·7891), mMRC (AUC 0·7611), 6MWD (AUC 0·7086), and exacerbation risk (AUC 0·7341). Quantifying comorbidity provides a more thorough understanding of the risk for patient-centered outcomes in COPD. A comorbidity count performs well to quantify comorbidity in a diverse population with COPD.

  12. Quantifying the ventilatory control contribution to sleep apnoea using polysomnography.

    Science.gov (United States)

    Terrill, Philip I; Edwards, Bradley A; Nemati, Shamim; Butler, James P; Owens, Robert L; Eckert, Danny J; White, David P; Malhotra, Atul; Wellman, Andrew; Sands, Scott A

    2015-02-01

    Elevated loop gain, consequent to hypersensitive ventilatory control, is a primary nonanatomical cause of obstructive sleep apnoea (OSA) but it is not possible to quantify this in the clinic. Here we provide a novel method to estimate loop gain in OSA patients using routine clinical polysomnography alone. We use the concept that spontaneous ventilatory fluctuations due to apnoeas/hypopnoeas (disturbance) result in opposing changes in ventilatory drive (response) as determined by loop gain (response/disturbance). Fitting a simple ventilatory control model (including chemical and arousal contributions to ventilatory drive) to the ventilatory pattern of OSA reveals the underlying loop gain. Following mathematical-model validation, we critically tested our method in patients with OSA by comparison with a standard (continuous positive airway pressure (CPAP) drop method), and by assessing its ability to detect the known reduction in loop gain with oxygen and acetazolamide. Our method quantified loop gain from baseline polysomnography (correlation versus CPAP-estimated loop gain: n=28; r=0.63, p<0.001), detected the known reduction in loop gain with oxygen (n=11; mean±sem change in loop gain (ΔLG) -0.23±0.08, p=0.02) and acetazolamide (n=11; ΔLG -0.20±0.06, p=0.005), and predicted the OSA response to loop gain-lowering therapy. We validated a means to quantify the ventilatory control contribution to OSA pathogenesis using clinical polysomnography, enabling identification of likely responders to therapies targeting ventilatory control. Copyright ©ERS 2015.

  13. Sinusoidal Representation of Acoustic Signals

    Science.gov (United States)

    Honda, Masaaki

    Sinusoidal representation of acoustic signals has been an important tool in speech and music processing, for tasks like signal analysis, synthesis and time-scale or pitch modification. It can be applied to arbitrary signals, which is an important advantage over other signal representations like physical modeling of acoustic signals. In sinusoidal representation, acoustic signals are composed as sums of sinusoids (sine waves) with different amplitudes, frequencies and phases, based on the time-dependent short-time Fourier transform (STFT). This article describes the principles of acoustic signal analysis/synthesis based on a sinusoidal representation, with a focus on sine waves with rapidly varying frequency.
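
    The synthesis half of the representation is just a sum of parameterized sinusoids. The short Python sketch below builds one frame this way; in a real analysis/synthesis system the amplitudes, frequencies and phases would come from peak-picking the STFT, whereas here they are arbitrary example values.

        import numpy as np

        def synthesize(amps, freqs_hz, phases, n_samples, fs):
            """Sum of sinusoids with given amplitudes, frequencies (Hz) and phases."""
            t = np.arange(n_samples) / fs
            return sum(a * np.cos(2 * np.pi * f * t + ph)
                       for a, f, ph in zip(amps, freqs_hz, phases))

        frame = synthesize(amps=[1.0, 0.5], freqs_hz=[440.0, 880.0],
                           phases=[0.0, np.pi / 4], n_samples=1024, fs=16000)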

  14. The newest digital signal processing

    International Nuclear Information System (INIS)

    Lee, Chae Uk

    2002-08-01

    This book deals with the newest digital signal processing. It covers an introduction to the concept, constitution and purpose of digital signal processing; signals and systems, including continuous signals, discrete signals and discrete systems; input/output descriptions such as the impulse response, convolution, interconnection of systems and frequency characteristics; the z-transform, covering its definition, region of convergence, applications and relationship with the Laplace transform; the discrete Fourier transform and fast Fourier transform, including the IDFT algorithm and FFT applications; the foundations of digital filters, covering their concept, representation, types, frequency characteristics and design procedure; the design of FIR digital filters; the design of IIR digital filters; adaptive signal processing; audio signal processing; video signal processing; and applications of digital signal processing.

  15. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    Science.gov (United States)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1987-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center over the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach has been developed to quantify the effects of the random uncertainties. The results of this study indicate that only the variations in geometry have significant effects.

  16. Quantifying uncertainties in the structural response of SSME blades

    Science.gov (United States)

    Nagpal, Vinod K.

    1987-01-01

    To quantify the uncertainties associated with the geometry and material properties of a Space Shuttle Main Engine (SSME) turbopump blade, a computer code known as STAEBL was used. A finite element model of the blade used 80 triangular shell elements with 55 nodes and five degrees of freedom per node. The whole study was simulated on the computer and no real experiments were conducted. The structural response has been evaluated in terms of three variables which are natural frequencies, root (maximum) stress, and blade tip displacements. The results of the study indicate that only the geometric uncertainties have significant effects on the response. Uncertainties in material properties have insignificant effects.

  17. Quantifying DNA melting transitions using single-molecule force spectroscopy

    International Nuclear Information System (INIS)

    Calderon, Christopher P; Chen, W-H; Harris, Nolan C; Kiang, C-H; Lin, K-J

    2009-01-01

    We stretched a DNA molecule using an atomic force microscope (AFM) and quantified the mechanical properties associated with B and S forms of double-stranded DNA (dsDNA), molten DNA, and single-stranded DNA. We also fit overdamped diffusion models to the AFM time series and used these models to extract additional kinetic information about the system. Our analysis provides additional evidence supporting the view that S-DNA is a stable intermediate encountered during dsDNA melting by mechanical force. In addition, we demonstrated that the estimated diffusion models can detect dynamical signatures of conformational degrees of freedom not directly observed in experiments.

  18. Quantifying DNA melting transitions using single-molecule force spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, Christopher P [Department of Computational and Applied Mathematics, Rice University, Houston, TX (United States); Chen, W-H; Harris, Nolan C; Kiang, C-H [Department of Physics and Astronomy, Rice University, Houston, TX (United States); Lin, K-J [Department of Chemistry, National Chung Hsing University, Taichung, Taiwan (China)], E-mail: chkiang@rice.edu

    2009-01-21

    We stretched a DNA molecule using an atomic force microscope (AFM) and quantified the mechanical properties associated with B and S forms of double-stranded DNA (dsDNA), molten DNA, and single-stranded DNA. We also fit overdamped diffusion models to the AFM time series and used these models to extract additional kinetic information about the system. Our analysis provides additional evidence supporting the view that S-DNA is a stable intermediate encountered during dsDNA melting by mechanical force. In addition, we demonstrated that the estimated diffusion models can detect dynamical signatures of conformational degrees of freedom not directly observed in experiments.

  19. Quantifying unsteadiness and dynamics of pulsatory volcanic activity

    Science.gov (United States)

    Dominguez, L.; Pioli, L.; Bonadonna, C.; Connor, C. B.; Andronico, D.; Harris, A. J. L.; Ripepe, M.

    2016-06-01

    Pulsatory eruptions are marked by a sequence of explosions which can be separated by time intervals ranging from a few seconds to several hours. The quantification of the periodicities associated with these eruptions is essential not only for the comprehension of the mechanisms controlling explosivity, but also for classification purposes. We focus on the dynamics of pulsatory activity and quantify unsteadiness based on the distribution of the repose time intervals between single explosive events in relation to magma properties and eruptive styles. A broad range of pulsatory eruption styles are considered, including Strombolian, violent Strombolian and Vulcanian explosions. We find a general relationship between the median of the observed repose times in eruptive sequences and the viscosity of magma, given by η ≈ 100 · t_median. This relationship applies to the complete range of magma viscosities considered in our study (10² to 10⁹ Pa s) regardless of the eruption length, eruptive style and associated plume heights, suggesting that viscosity is the main magma property controlling eruption periodicity. Furthermore, the analysis of the explosive sequences in terms of failure time through statistical survival analysis provides further information: dynamics of pulsatory activity can be successfully described in terms of frequency and regularity of the explosions, quantified based on the log-logistic distribution. A linear relationship is identified between the log-logistic parameters, μ and s. This relationship is useful for quantifying differences among eruptive styles, from very frequent and regular mafic events (Strombolian activity) to more sporadic and irregular Vulcanian explosions in silicic systems. The time scale controlled by the parameter μ, as a function of the median of the distribution, can therefore be correlated with the viscosity of magmas; while the complexity of the erupting system, including magma rise rate, degassing and fragmentation efficiency

  20. Quantifying environmental performance using an environmental footprint calculator

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.B.; Loney, A.C.; Chan, V. [Conestoga-Rovers & Associates, Waterloo, Ontario (Canada)

    2009-07-01

    This paper provides a case study using relevant key performance indicators (KPIs) to evaluate the environmental performance of a business. Using recognized calculation and reporting frameworks, Conestoga-Rovers & Associates (CRA) designed the Environmental Footprint Calculator to quantify the environmental performance of a Canadian construction materials company. CRA designed the Environmental Footprint calculator for our client to track and report their environmental performance in accordance with their targets, based on requirements of relevant guidance documents. The objective was to design a tool that effectively manages, calculates, and reports environmental performance to various stakeholders in a user-friendly format. (author)

  1. Quantifying capital goods for biological treatment of organic waste

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Petersen, Per H.; Nielsen, Peter D.

    2015-01-01

    … based on the different sizes for the three different types of waste (garden and park waste, food waste and sludge from wastewater treatment) in amounts of 10,000 or 50,000 tonnes per year. The AD plant was quantified for a capacity of 80,000 tonnes per year. Concrete and steel for the tanks were the main materials for the AD plant. For the composting plants, gravel and concrete slabs for the pavement were used in large amounts. To frame the quantification, environmental impact assessments (EIAs) showed that the steel used for tanks at the AD plant and the concrete slabs at the composting plants made the highest …

  2. Quantifying capital goods for collection and transport of waste

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Christensen, Thomas Højlund

    2012-01-01

    The capital goods for collection and transport of waste were quantified for different types of containers (plastic containers, cubes and steel containers) and an 18-tonne compacting collection truck. The data were collected from producers and vendors of the bins and the truck. The service lifetime … tonne of waste handled. The impact of producing the capital goods for waste collection and transport cannot be neglected as the capital goods dominate (>85%) the categories human-toxicity (non-cancer and cancer), ecotoxicity, resource depletion and aquatic eutrophication, but also play a role (>13

  3. Quantifying the energetics of cooperativity in a ternary protein complex

    DEFF Research Database (Denmark)

    Andersen, Peter S; Schuck, Peter; Sundberg, Eric J

    2002-01-01

    … and mathematical modeling to describe the energetics of cooperativity in a trimolecular protein complex. As a model system for quantifying cooperativity, we studied the ternary complex formed by the simultaneous interaction of a superantigen with major histocompatibility complex and T cell receptor, for which a structural model is available. This system exhibits positive and negative cooperativity, as well as augmentation of the temperature dependence of binding kinetics upon the cooperative interaction of individual protein components in the complex. Our experimental and theoretical analysis may be applicable to other systems involving cooperativity …

  4. The quantified patient of the future: Opportunities and challenges.

    Science.gov (United States)

    Majmudar, Maulik D; Colucci, Lina Avancini; Landman, Adam B

    2015-09-01

    The healthcare system is undergoing rapid transformation as national policies increase patient access, reward positive health outcomes, and push for an end to the current era of episodic care. Advances in health sensors are rapidly moving diagnostic and monitoring capabilities into consumer products, enabling new care models. Although hospitals and health care providers have been slow to embrace novel health technologies, such innovations may help meet mounting pressure to provide timely, high quality, and low-cost care to large populations. This leading edge perspective focuses on the quantified-self movement and highlights the opportunities and challenges for patients, providers, and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Quantifying environmental performance using an environmental footprint calculator

    International Nuclear Information System (INIS)

    Smith, D.B.; Loney, A.C.; Chan, V.

    2009-01-01

    This paper provides a case study using relevant key performance indicators (KPIs) to evaluate the environmental performance of a business. Using recognized calculation and reporting frameworks, Conestoga-Rovers & Associates (CRA) designed the Environmental Footprint Calculator to quantify the environmental performance of a Canadian construction materials company. CRA designed the Environmental Footprint calculator for our client to track and report their environmental performance in accordance with their targets, based on requirements of relevant guidance documents. The objective was to design a tool that effectively manages, calculates, and reports environmental performance to various stakeholders in a user-friendly format. (author)

  6. A sentinel protein assay for simultaneously quantifying cellular processes

    Czech Academy of Sciences Publication Activity Database

    Soste, M.; Hrabáková, Rita; Wanka, S.; Melnik, A.; Boersema, P.; Maiolica, A.; Wernas, T.; Tognetti, M.; von Mering, Ch.; Picotti, P.

    2014-01-01

    Vol. 11, No. 10 (2014), pp. 1045-1048 ISSN 1548-7091 R&D Projects: GA MŠk ED2.1.00/03.0124 Institutional support: RVO:67985904 Keywords: targeted proteomics * selected reaction monitoring * cellular signaling Subject RIV: CE - Biochemistry Impact factor: 32.072, year: 2014

  7. Using OSL dating to quantify rates of Earth surface processes

    Science.gov (United States)

    Rhodes, E. J.; Rittenour, T. M.

    2010-12-01

    In Optically Stimulated Luminescence (OSL), the dating signal is reset when mineral grains are exposed to light or heat, and gradually rebuilds during subsequent burial by interaction with ionising radiation. Quartz and feldspar provide useful OSL signals demonstrating rapid signal reduction in only seconds of light exposure. Age estimates ranging from under 1 year to around 200,000 years can be determined for a wide range of sedimentary contexts, including dunes, marine deposits, fluvial and glacial environments, and recent developments provide the framework for low temperature thermochronometric applications on timescales comparable with rapid climate fluctuations. In this presentation, we explore the range of applications for determining rates of Earth surface processes using OSL. We examine technical limitations, and provide a framework for overcoming current difficulties experienced in several specific regions and contexts. We will focus on OSL dating applications to glacigenic and fluvial records, along with use of the technique in tectonic and paleoseismic contexts. In many ways, these represent the most challenging environments for OSL; rapid high energy deposition is associated with incomplete signal zeroing, and the characteristics of quartz in many of these environments make it difficult to derive precise age estimates using this mineral. We will introduce innovative methods to overcome these limitations, both existing and those under development.

  8. Variable reflectivity signal mirrors and signal response measurements

    International Nuclear Information System (INIS)

    Vine, Glenn de; Shaddock, Daniel A; McClelland, David E

    2002-01-01

    Future gravitational wave detectors will include some form of signal mirror in order to alter the signal response of the device. We introduce interferometer configurations which utilize a variable reflectivity signal mirror allowing a tunable peak frequency and variable signal bandwidth. A detector configured with a Fabry-Perot cavity as the signal mirror is compared theoretically with one using a Michelson interferometer for a signal mirror. A system for the measurement of the interferometer signal responses is introduced. This technique is applied to a power-recycled Michelson interferometer with resonant sideband extraction. We present broadband measurements of the benchtop prototype's signal response for a range of signal cavity detunings. This technique is also applicable to most other gravitational wave detector configurations

  9. Variable reflectivity signal mirrors and signal response measurements

    CERN Document Server

    Vine, G D; McClelland, D E

    2002-01-01

    Future gravitational wave detectors will include some form of signal mirror in order to alter the signal response of the device. We introduce interferometer configurations which utilize a variable reflectivity signal mirror allowing a tunable peak frequency and variable signal bandwidth. A detector configured with a Fabry-Perot cavity as the signal mirror is compared theoretically with one using a Michelson interferometer for a signal mirror. A system for the measurement of the interferometer signal responses is introduced. This technique is applied to a power-recycled Michelson interferometer with resonant sideband extraction. We present broadband measurements of the benchtop prototype's signal response for a range of signal cavity detunings. This technique is also applicable to most other gravitational wave detector configurations.

  10. Lymphocyte signaling: beyond knockouts.

    Science.gov (United States)

    Saveliev, Alexander; Tybulewicz, Victor L J

    2009-04-01

    The analysis of lymphocyte signaling was greatly enhanced by the advent of gene targeting, which allows the selective inactivation of a single gene. Although this gene 'knockout' approach is often informative, in many cases, the phenotype resulting from gene ablation might not provide a complete picture of the function of the corresponding protein. If a protein has multiple functions within a single or several signaling pathways, or stabilizes other proteins in a complex, the phenotypic consequences of a gene knockout may manifest as a combination of several different perturbations. In these cases, gene targeting to 'knock in' subtle point mutations might provide more accurate insight into protein function. However, to be informative, such mutations must be carefully based on structural and biophysical data.

  11. Sphingosine signaling and atherogenesis

    DEFF Research Database (Denmark)

    Xu, Cang-bao; Hansen-Schwartz, Jacob; Edvinsson, Lars

    2004-01-01

    Sphingosine-1-phosphate (S1P) has diverse biological functions, acting inside cells as a second messenger to regulate cell proliferation and survival, and extracellularly, as a ligand for a group of G protein-coupled receptors (GPCRs) named the endothelial differentiation gene (EDG) family. Five closely related GPCRs of the EDG family (EDG1, EDG3, EDG5, EDG6, and EDG8) have recently been identified as high-affinity S1P receptors. These receptors are coupled via Gi, Gq, G12/13, and Rho. The signaling pathways are linked to vascular cell migration, proliferation, apoptosis, intracellular Ca2+ mobilization, and expression of adhesion molecules. The formation of an atherosclerotic lesion occurs through activation of cellular events that include monocyte adhesion to the endothelium and vascular smooth muscle cell (VSMC) migration and proliferation. Thus, S1P signaling may play an important role …

  12. NMR signal transducer

    International Nuclear Information System (INIS)

    Kucheryaev, A.G.; Oliferchuk, N.L.

    1975-01-01

    A signal transducer of nuclear magnetic resonance for simultaneously measuring the frequency and intensity of two different isotope signals present in one specimen is described. The transducer comprises a radiofrequency circuit with two resonance frequencies, which is common to two autodyne generators. To decrease the measuring time and to increase the stability of the recording diagram, the radiofrequency circuit has an LC network, in the inductivity of which the investigated specimen is located; a variable circuit capacitor is connected in parallel with one of the autodyne generators. In addition, the radiofrequency circuit has an inductance coil connected in series, with a standard specimen inside, as well as a variable capacitor connected in parallel with the second autodyne generator. The amplitude of oscillation at each resonance frequency is controlled and adjusted separately. The transducer described can be used for measuring nuclear concentrations, isotope concentrations and for spin determination.

  13. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
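
    To make the straight-line-fit idea concrete, the sketch below estimates Isc and its standard error by ordinary least squares over a user-chosen window of I-V points near V = 0. This is a plain OLS illustration of my own, not the objective Bayesian regression or the automatic window selection described in the abstract.

        import numpy as np

        def isc_from_window(v, i):
            """Fit I = a + b*V over a window near short circuit; return (Isc, std error)."""
            A = np.column_stack([np.ones_like(v), v])
            coef, *_ = np.linalg.lstsq(A, i, rcond=None)
            resid = i - A @ coef
            dof = len(v) - 2                              # points minus parameters
            sigma2 = resid @ resid / dof                  # residual variance
            cov = sigma2 * np.linalg.inv(A.T @ A)         # parameter covariance
            return coef[0], np.sqrt(cov[0, 0])            # intercept = Isc estimate

    The reported uncertainty is only meaningful if the straight line is a valid local model of the curve in the chosen window, which is exactly the model-discrepancy concern raised above.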

  14. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  15. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.

  16. Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories

    Directory of Open Access Journals (Sweden)

    Jenny Farmer

    2017-11-01

    Full Text Available Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for quantitative comparison of a pair of MD trajectories are made based on fluctuations for on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins, as evident from extraordinarily low p-values.
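
    A stripped-down version of this kind of comparison can be written in a few lines: given the fluctuation samples of one residue in two trajectories, compute a Jensen-Shannon divergence between histogram density estimates and a two-sample Kolmogorov-Smirnov p-value. Note this sketch substitutes simple histograms for the nonparametric maximum entropy density estimate used by the authors.

        import numpy as np
        from scipy.spatial.distance import jensenshannon
        from scipy.stats import ks_2samp

        def compare_fluctuations(x_native, x_mutant, bins=50):
            """JS divergence and KS test between two samples of residue fluctuations."""
            lo = min(x_native.min(), x_mutant.min())
            hi = max(x_native.max(), x_mutant.max())
            p, _ = np.histogram(x_native, bins=bins, range=(lo, hi), density=True)
            q, _ = np.histogram(x_mutant, bins=bins, range=(lo, hi), density=True)
            js = jensenshannon(p, q) ** 2          # squared distance = JS divergence
            ks_stat, p_value = ks_2samp(x_native, x_mutant)
            return js, ks_stat, p_value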

  17. Quantifying construction and demolition waste: An analytical review

    International Nuclear Information System (INIS)

    Wu, Zezhou; Yu, Ann T.W.; Shen, Liyin; Liu, Guiwen

    2014-01-01

    Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attentions should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C and D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested

  18. Methods to Quantify Nickel in Soils and Plant Tissues

    Directory of Open Access Journals (Sweden)

    Bruna Wurr Rodak

    2015-06-01

    Full Text Available In comparison with other micronutrients, the levels of nickel (Ni) available in soils and plant tissues are very low, making quantification very difficult. The objective of this paper is to present optimized methods for determining Ni availability in soils by extractants and total content in plant tissues for routine commercial laboratory analyses. Samples of natural and agricultural soils were processed and analyzed by Mehlich-1 extraction and by DTPA. To quantify Ni in the plant tissues, samples were digested with nitric acid in a closed system in a microwave oven. The measurement was performed by inductively coupled plasma/optical emission spectrometry (ICP-OES). There was a positive and significant correlation between the levels of available Ni in the soils subjected to Mehlich-1 and DTPA extraction, while for plant tissue samples the Ni levels recovered were high and similar to the reference materials. The availability of Ni in some of the natural soil and plant tissue samples was lower than the limits of quantification. Concentrations of this micronutrient were higher in the soil samples in which Ni had been applied. Nickel concentration differed among the plant parts analyzed, with the highest levels in the grains of soybean. The grain concentrations, in comparison with the shoot and leaf concentrations, were better correlated with the soil available levels for both extractants. The methods described in this article were efficient in quantifying Ni and can be used for routine laboratory analysis of soils and plant tissues.

  19. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used system for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis obtained via CR are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for optimization of these images are based on international protocols. Therefore, it is necessary to compose radiographic techniques for the CR system that provide a reliable medical diagnosis, with doses as low as reasonably achievable. To this end, the aim of this work was to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using the Matlab® software, a computational algorithm was developed to quantify the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thickness of simulator materials (aluminium and Lucite). This was possible through the application of masking and a Gaussian-removal technique applied to the histograms. As a result, an average soft-tissue thickness of 18.97 mm and an average bone-tissue thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.

  20. A framework for quantifying net benefits of alternative prognostic models.

    Science.gov (United States)

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.

  1. A framework for quantifying net benefits of alternative prognostic models‡

    Science.gov (United States)

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066

  2. Quantifying potential recharge in mantled sinkholes using ERT.

    Science.gov (United States)

    Schwartz, Benjamin F; Schreiber, Madeline E

    2009-01-01

    Potential recharge through thick soils in mantled sinkholes was quantified using differential electrical resistivity tomography (ERT). Conversion of time series two-dimensional (2D) ERT profiles into 2D volumetric water content profiles using a numerically optimized form of Archie's law allowed us to monitor temporal changes in water content in soil profiles up to 9 m in depth. Combining Penman-Monteith daily potential evapotranspiration (PET) and daily precipitation data with potential recharge calculations for three sinkhole transects indicates that potential recharge occurred only during brief intervals over the study period and ranged from 19% to 31% of cumulative precipitation. Spatial analysis of ERT-derived water content showed that infiltration occurred both on sinkhole flanks and in sinkhole bottoms. Results also demonstrate that mantled sinkholes can act as regions of both rapid and slow recharge. Rapid recharge is likely the result of flow through macropores (such as root casts and thin gravel layers), while slow recharge is the result of unsaturated flow through fine-grained sediments. In addition to developing a new method for quantifying potential recharge at the field scale in unsaturated conditions, we show that mantled sinkholes are an important component of storage in a karst system.
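
    The conversion from bulk resistivity to volumetric water content via Archie's law can be sketched as below; this is a generic textbook form, not the numerically optimized calibration used in the study, and the petrophysical parameters (a, m, n), porosity and pore-water resistivity are placeholders.

      import numpy as np

      def water_content_from_resistivity(rho_bulk, rho_water, porosity,
                                         a=1.0, m=2.0, n=2.0):
          """Volumetric water content from Archie's law.

          Archie: rho_bulk = a * rho_water * porosity**(-m) * S_w**(-n),
          so S_w = (a * rho_water / (rho_bulk * porosity**m))**(1/n)
          and theta = porosity * S_w.
          """
          s_w = (a * rho_water / (rho_bulk * porosity**m)) ** (1.0 / n)
          return porosity * np.clip(s_w, 0.0, 1.0)

      # Hypothetical ERT-derived resistivities (ohm-m) down a soil profile:
      rho = np.array([120.0, 90.0, 60.0, 45.0])
      print(water_content_from_resistivity(rho, rho_water=20.0, porosity=0.4))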

  3. Quantifying Diffuse Contamination: Method and Application to Pb in Soil.

    Science.gov (United States)

    Fabian, Karl; Reimann, Clemens; de Caritat, Patrice

    2017-06-20

    A new method for detecting and quantifying diffuse contamination at the continental to regional scale is based on the analysis of cumulative distribution functions (CDFs). It uses cumulative probability (CP) plots for spatially representative data sets, preferably containing >1000 determinations. Simulations demonstrate how different types of contamination influence the elemental CDFs of different sample media. Diffuse contamination is found to be characterized by a distinctive shift of the low-concentration end of the distribution of the studied element in its CP plot. Diffuse contamination can be detected and quantified via either (1) comparing the distribution of the contaminating element to that of an element with geochemically comparable behavior but no contamination source (e.g., Pb vs. Rb), or (2) comparing the topsoil distribution of an element to the distribution of the same element in subsoil samples from the same area, taking soil-forming processes into consideration. Both procedures are demonstrated for geochemical soil data sets from Europe, Australia, and the U.S.A. Several different data sets from Europe deliver comparable results at different scales. The resulting estimate of diffuse Pb contamination in surface soil does not require knowledge of individual contamination sources and can be used to efficiently monitor diffuse contamination at the continental to regional scale.
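
    A minimal sketch of procedure (1), comparing the low-concentration end of the contaminant's cumulative probability curve with that of a geochemically comparable reference element; the element pair, quantile levels and synthetic concentrations are illustrative assumptions.

      import numpy as np

      def cp_quantiles(values, probs):
          """Points of the empirical cumulative probability (CP) curve at given probabilities."""
          return np.quantile(np.asarray(values, dtype=float), probs)

      def low_end_shift(contaminant, reference, probs=(0.02, 0.05, 0.10)):
          """Ratio of low-end CP-curve quantiles of a contaminant (e.g. Pb) to those
          of an uncontaminated reference element (e.g. Rb); ratios clearly above the
          mid-range ratio indicate the diffuse-contamination shift described above."""
          return cp_quantiles(contaminant, probs) / cp_quantiles(reference, probs)

      # Hypothetical soil concentrations (mg/kg), Pb carrying an added diffuse component:
      rng = np.random.default_rng(0)
      rb = rng.lognormal(mean=3.0, sigma=0.6, size=2000)
      pb = rng.lognormal(mean=3.0, sigma=0.6, size=2000) + 5.0
      print(low_end_shift(pb, rb))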

  4. Quantifying facial paralysis using the Kinect v2.

    Science.gov (United States)

    Gaber, Amira; Taher, Mona F; Wahed, Manal Abdel

    2015-01-01

    Assessment of facial paralysis (FP) and quantitative grading of facial asymmetry are essential in order to quantify the extent of the condition as well as to follow its improvement or progression. As such, there is a need for an accurate quantitative grading system that is easy to use, inexpensive and has minimal inter-observer variability. A comprehensive automated system to quantify and grade FP is the main objective of this work. An initial prototype has been presented by the authors. The present research aims to enhance the accuracy and robustness of one of this system's modules: the resting symmetry module. This is achieved by including several modifications to the computation method of the symmetry index (SI) for the eyebrows, eyes and mouth. These modifications are the gamma correction technique, the area of the eyes, and the slope of the mouth. The system was tested on normal subjects and showed promising results. With the modified method, the mean SI of the eyebrows decreased slightly from 98.42% to 98.04%, while the mean SI for the eyes and mouth increased from 96.93% to 99.63% and from 95.6% to 98.11%, respectively. The system is easy to use, inexpensive, automated and fast, has no inter-observer variability and is thus well suited for clinical use.

  5. Digitally quantifying cerebral hemorrhage using Photoshop and Image J.

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-07-15

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24h later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. Published by Elsevier B.V.
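
    The general idea, thresholding hemorrhagic pixels in the section photographs and summing their optical intensity, can be sketched as follows; the red-dominance rule and the intensity definition are assumptions for illustration, not the authors' exact Photoshop/ImageJ workflow.

      import numpy as np
      from PIL import Image

      def hemorrhage_optical_intensity(photo_path, red_margin=40):
          """Sum of optical intensity over pixels classified as hemorrhage.

          A pixel is treated as hemorrhagic when its red channel exceeds the green
          and blue channels by `red_margin` (an assumed, tunable rule). Optical
          intensity is taken as 255 minus mean brightness, so darker pixels count more.
          """
          rgb = np.asarray(Image.open(photo_path).convert("RGB"), dtype=float)
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          mask = (r > g + red_margin) & (r > b + red_margin)
          intensity = 255.0 - rgb.mean(axis=-1)
          return float(intensity[mask].sum())

      # Summing this quantity over all section photographs of one brain would then
      # be correlated against the spectrophotometric hemoglobin assay.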

  6. DIGITALLY QUANTIFYING CEREBRAL HEMORRHAGE USING PHOTOSHOP® AND IMAGE J

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-01-01

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24 hours later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop® and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. PMID:20452374

  7. Quantifying construction and demolition waste: an analytical review.

    Science.gov (United States)

    Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen

    2014-09-01

    Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Quantifying Urban Fragmentation under Economic Transition in Shanghai City, China

    Directory of Open Access Journals (Sweden)

    Heyuan You

    2015-12-01

    Full Text Available Urban fragmentation affects sustainability through multiple impacts on economic, social, and environmental costs. Characterizing the dynamics of urban fragmentation in relation to economic transition should provide implications for sustainability. However, rather few efforts have been made on this issue. Using the case of Shanghai (China), this paper quantifies urban fragmentation in relation to economic transition. In particular, urban fragmentation is quantified by a time series of remotely sensed images and a set of landscape metrics, and economic transition is described by a set of indicators covering three aspects (globalization, decentralization, and marketization). Results show that urban fragmentation presents an increasing linear trend. Multivariate regression identifies a positive linear correlation between urban fragmentation and economic transition. More specifically, the relative influence differs across the three components of economic transition: the relative influence of decentralization is stronger than that of globalization and marketization, and the joint influences of decentralization and globalization are the strongest for urban fragmentation. The demonstrated methodology can be applied to other places after suitable adjustment of the economic transition indicators and fragmentation metrics.

  9. Design Life Level: Quantifying risk in a changing climate

    Science.gov (United States)

    Rootzén, Holger; Katz, Richard W.

    2013-09-01

    In the past, the concepts of return levels and return periods have been standard and important tools for engineering design. However, these concepts are based on the assumption of a stationary climate and do not apply to a changing climate, whether local or global. In this paper, we propose a refined concept, Design Life Level, which quantifies risk in a nonstationary climate and can serve as the basis for communication. In current practice, typical hydrologic risk management focuses on a standard (e.g., in terms of a high quantile corresponding to the specified probability of failure for a single year). Nevertheless, the basic information needed for engineering design should consist of (i) the design life period (e.g., the next 50 years, say 2015-2064); and (ii) the probability (e.g., 5% chance) of a hazardous event (typically, in the form of the hydrologic variable exceeding a high level) occurring during the design life period. Capturing both of these design characteristics, the Design Life Level is defined as an upper quantile (e.g., 5%) of the distribution of the maximum value of the hydrologic variable (e.g., water level) over the design life period. We relate this concept and variants of it to existing literature and illustrate how they, and some useful complementary plots, may be computed and used. One practically important consideration concerns quantifying the statistical uncertainty in estimating a high quantile under nonstationarity.
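
    A small Monte Carlo sketch of the Design Life Level as defined above, here under an illustrative nonstationary Gumbel model whose location parameter drifts upward over the design life; the trend, scale and design period are assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def design_life_level(n_sims=100_000, first_year=2015, last_year=2064,
                            exceed_prob=0.05, loc0=100.0, trend=0.3, scale=15.0):
          """Upper (1 - exceed_prob) quantile of the maximum annual value over the
          design life, with the Gumbel location rising by `trend` per year."""
          years = np.arange(first_year, last_year + 1)
          loc = loc0 + trend * (years - years[0])             # drifting location
          annual = rng.gumbel(loc=loc, scale=scale, size=(n_sims, years.size))
          return np.quantile(annual.max(axis=1), 1.0 - exceed_prob)

      # Level with a 5% chance of being exceeded at least once during 2015-2064:
      print(design_life_level())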

  10. Quantifying complexity in translational research: an integrated approach.

    Science.gov (United States)

    Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L

    2014-01-01

    The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research, and a case study in obesity was used to assess usability. Generally, the evidence generated was valuable for understanding the various components of translational research. In particular, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some may argue that the results are biased; however, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no clue for assessing complexity. The proposed method aims to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research.
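
    The consistency ratio mentioned above is a standard AHP quantity; a hedged sketch of its usual computation is given below, with a hypothetical pairwise-comparison matrix (this is generic AHP, not the authors' full three-phase QFD-AHP procedure).

      import numpy as np

      # Saaty's random consistency indices for matrix sizes 1..10.
      RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

      def consistency_ratio(pairwise):
          """CR = CI / RI with CI = (lambda_max - n) / (n - 1); CR below ~0.10 is
          conventionally taken as acceptably consistent."""
          a = np.asarray(pairwise, dtype=float)
          n = a.shape[0]
          lam_max = np.real(np.linalg.eigvals(a)).max()
          return ((lam_max - n) / (n - 1)) / RANDOM_INDEX[n]

      # Hypothetical 3x3 reciprocal judgment matrix on Saaty's 1-9 scale:
      m = [[1, 3, 5],
           [1/3, 1, 2],
           [1/5, 1/2, 1]]
      print(consistency_ratio(m))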

  11. Using multiscale norms to quantify mixing and transport

    International Nuclear Information System (INIS)

    Thiffeault, Jean-Luc

    2012-01-01

    Mixing is relevant to many areas of science and engineering, including the pharmaceutical and food industries, oceanography, atmospheric sciences and civil engineering. In all these situations one goal is to quantify and often then to improve the degree of homogenization of a substance being stirred, referred to as a passive scalar or tracer. A classical measure of mixing is the variance of the concentration of the scalar, which is the L2 norm of a mean-zero concentration field. Recently, other norms have been used to quantify mixing, in particular the mix-norm as well as negative Sobolev norms. These norms have the advantage that unlike variance they decay even in the absence of diffusion, and their decay corresponds to the flow being mixing in the sense of ergodic theory. General Sobolev norms weigh scalar gradients differently, and are known as multiscale norms for mixing. We review the applications of such norms to mixing and transport, and show how they can be used to optimize the stirring and mixing of a decaying passive scalar. We then review recent work on the less-studied case of a continuously replenished scalar field: the source-sink problem. In that case the flows that optimally reduce the norms are associated with transport rather than mixing: they push sources onto sinks, and vice versa. (invited article)
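
    As a sketch of how such norms are evaluated in practice, the fragment below computes a negative Sobolev norm H^{-s} of a mean-zero periodic scalar field from its Fourier coefficients (s = 1/2 corresponding to the mix-norm); the grid, the value of s and the test field are illustrative.

      import numpy as np

      def negative_sobolev_norm(theta, s=0.5, box_size=2*np.pi):
          """H^{-s} norm of a mean-zero field on a periodic 2-D grid:
          ||theta||_{H^{-s}}^2 = sum_{k != 0} |k|^{-2s} |theta_hat_k|^2."""
          n = theta.shape[0]
          theta_hat = np.fft.fft2(theta) / theta.size         # Fourier coefficients
          k = 2 * np.pi / box_size * np.fft.fftfreq(n, d=1.0 / n)
          kx, ky = np.meshgrid(k, k, indexing="ij")
          k2 = kx**2 + ky**2
          k2[0, 0] = np.inf                                   # drop the mean mode
          return np.sqrt(np.sum(k2**(-s) * np.abs(theta_hat)**2))

      # Illustrative mean-zero scalar field; the norm decays as the field is
      # transferred to finer scales by stirring.
      n = 64
      x = np.linspace(0, 2*np.pi, n, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")
      print(negative_sobolev_norm(np.sin(3*X) * np.cos(Y)))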

  12. Constructing carbon offsets: The obstacles to quantifying emission reductions

    International Nuclear Information System (INIS)

    Millard-Ball, Adam; Ortolano, Leonard

    2010-01-01

    The existing literature generally ascribes the virtual absence of the transport sector from the Clean Development Mechanism (CDM) to the inherent complexity of quantifying emission reductions from mobile sources. We use archival analysis and interviews with CDM decision-makers and experts to identify two additional groups of explanations. First, we show the significance of aspects of the CDM's historical evolution, such as the order in which methodologies were considered and the assignment of expert desk reviewers. Second, we highlight inconsistencies in the treatment of uncertainty across sectors. In contrast to transport methodologies, other sectors are characterized by a narrow focus on sources of measurement uncertainty and a neglect of economic effects ('market leakages'). We do not argue that the rejection of transport methodologies was unjustified, but rather that many of the same problems are inherent in other sectors. Thus, the case of transport sheds light on fundamental problems in quantifying emission reductions under the CDM. We argue that a key theoretical attraction of the CDM, the equalization of marginal abatement costs across all sectors, has been difficult to achieve in practice.

  13. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
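
    The kernel plus method itself is not reproduced here; the sketch below is a generic multivariate Nadaraya-Watson kernel regression that conditions power output on wind speed plus an additional environmental covariate, which conveys the basic idea of controlling for environmental factors when comparing power outputs. The bandwidths and the data are hypothetical.

      import numpy as np

      def kernel_power_estimate(x_query, X, y, bandwidths):
          """Nadaraya-Watson estimate of expected power at covariates x_query.

          X: (n, d) environmental covariates (wind speed, air density, ...)
          y: (n,) observed power outputs
          bandwidths: (d,) kernel bandwidth per covariate
          """
          X = np.asarray(X, dtype=float)
          u = (X - np.asarray(x_query, dtype=float)) / np.asarray(bandwidths, dtype=float)
          w = np.exp(-0.5 * np.sum(u**2, axis=1))        # Gaussian product kernel
          return float(np.sum(w * np.asarray(y, dtype=float)) / np.sum(w))

      # Hypothetical data: power as a function of wind speed and air density.
      rng = np.random.default_rng(1)
      wind = rng.uniform(3, 15, 500)
      rho = rng.normal(1.225, 0.02, 500)
      power = 0.4 * rho * wind**3 + rng.normal(0, 20, 500)
      X = np.column_stack([wind, rho])
      print(kernel_power_estimate([10.0, 1.225], X, power, bandwidths=[0.8, 0.01]))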

  14. Disordered crystals from first principles I: Quantifying the configuration space

    Science.gov (United States)

    Kühne, Thomas D.; Prodan, Emil

    2018-04-01

    This work represents the first chapter of a project on the foundations of first-principle calculations of the electron transport in crystals at finite temperatures. We are interested in the range of temperatures, where most electronic components operate, that is, room temperature and above. The aim is a predictive first-principle formalism that combines ab-initio molecular dynamics and a finite-temperature Kubo-formula for homogeneous thermodynamic phases. The input for this formula is the ergodic dynamical system (Ω , G , dP) defining the thermodynamic crystalline phase, where Ω is the configuration space for the atomic degrees of freedom, G is the space group acting on Ω and dP is the ergodic Gibbs measure relative to the G-action. The present work develops an algorithmic method for quantifying (Ω , G , dP) from first principles. Using the silicon crystal as a working example, we find the Gibbs measure to be extremely well characterized by a multivariate normal distribution, which can be quantified using a small number of parameters. The latter are computed at various temperatures and communicated in the form of a table. Using this table, one can generate large and accurate thermally-disordered atomic configurations to serve, for example, as input for subsequent simulations of the electronic degrees of freedom.

  15. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Full Text Available Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407, compared with the likely overestimated 319 years obtained with the commonly used exponential model.
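
    For concreteness, the simplest of the survival models discussed above, the exponential (constant-hazard) model with right-censored fire intervals, can be sketched as follows; the interval data are hypothetical, and the Cox and Weibull analyses described in the abstract would replace this estimator in practice.

      import numpy as np

      def exponential_fire_cycle(intervals, observed):
          """Fire cycle (mean fire return interval) under an exponential model.

          intervals: time since the last recorded fire (or since the earliest
                     datable ring) at each site, in years
          observed:  1 if the interval ended with a fire, 0 if it is censored

          The maximum-likelihood hazard is (number of fires) / (total time at risk),
          and the fire cycle is its inverse.
          """
          intervals = np.asarray(intervals, dtype=float)
          observed = np.asarray(observed, dtype=float)
          return intervals.sum() / observed.sum()

      # Hypothetical dendroecological intervals (years) with censoring flags:
      t = [80, 150, 45, 300, 210, 95, 260]
      d = [1,   1,  1,   0,   0,  1,   0]
      print(exponential_fire_cycle(t, d))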

  16. Modeling of Nonlinear Beat Signals of TAE's

    Science.gov (United States)

    Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin

    2012-03-01

    Experiments on Alcator C-Mod reveal Toroidal Alfven Eigenmodes (TAE) together with signals at various beat frequencies, including those at twice the mode frequency. The beat frequencies are sidebands driven by quadratic nonlinear terms in the MHD equations. These nonlinear sidebands have not yet been quantified by any existing codes. We extend the AEGIS code to capture nonlinear effects by treating the nonlinear terms as a driving source in the linear MHD solver. Our goal is to compute the spatial structure of the sidebands for realistic geometry and q-profile, which can be directly compared with experiment in order to interpret the phase contrast imaging diagnostic measurements and to enable the quantitative determination of the Alfven wave amplitude in the plasma core

  17. Biomedical signal analysis

    CERN Document Server

    Rangayyan, Rangaraj M

    2015-01-01

    The book assists the reader in developing techniques for the analysis of biomedical signals and computer-aided diagnosis, with a pedagogical examination of basic and advanced topics accompanied by over 350 figures and illustrations. A wide range of filtering techniques is presented to address various applications, together with 800 mathematical expressions and equations, practical questions, problems and laboratory exercises. Fractals and chaos theory with biomedical applications are also included.

  18. Calcium signaling in liver.

    Science.gov (United States)

    Gaspers, Lawrence D; Thomas, Andrew P

    2005-01-01

    In hepatocytes, hormones linked to the formation of the second messenger inositol 1,4,5-trisphosphate (InsP3) evoke transient increases or spikes in cytosolic free calcium ([Ca2+]i), that increase in frequency with the agonist concentration. These oscillatory Ca2+ signals are thought to transmit the information encoded in the extracellular stimulus to down-stream Ca2+-sensitive metabolic processes. We have utilized both confocal and wide field fluorescence microscopy techniques to study the InsP3-dependent signaling pathway at the cellular and subcellular levels in the intact perfused liver. Typically InsP3-dependent [Ca2+]i spikes manifest as Ca2+ waves that propagate throughout the entire cytoplasm and nucleus, and in the intact liver these [Ca2+]i increases are conveyed through gap junctions to encompass entire lobular units. The translobular movement of Ca2+ provides a means to coordinate the function of metabolic zones of the lobule and thus, liver function. In this article, we describe the characteristics of agonist-evoked [Ca2+]i signals in the liver and discuss possible mechanisms to explain the propagation of intercellular Ca2+ waves in the intact organ.

  19. Superlative Quantifiers as Modifiers of Meta-Speech Acts

    Directory of Open Access Journals (Sweden)

    Ariel Cohen

    2010-12-01

    Full Text Available The superlative quantifiers, at least and at most, are commonly assumed to have the same truth-conditions as the comparative quantifiers more than and fewer than. However, as Geurts & Nouwen (2007) have demonstrated, this is wrong, and several theories have been proposed to account for them. In this paper we propose that superlative quantifiers are illocutionary operators; specifically, they modify meta-speech acts. Meta-speech acts are operators that do not express a speech act, but a willingness to make or refrain from making a certain speech act. The classic example is speech act denegation, e.g. I don't promise to come, where the speaker is explicitly refraining from performing the speech act of promising. What denegations do is delimit the future development of conversation, that is, they delimit future admissible speech acts. Hence we call them meta-speech acts. They are not moves in a game, but rather commitments to behave in certain ways in the future. We formalize the notion of meta-speech acts as commitment development spaces, which are rooted graphs: the root of the graph describes the commitment development up to the current point in conversation, and the continuations from the root describe the admissible future directions. We define and formalize the meta-speech act GRANT, which indicates that the speaker, while not necessarily subscribing to a proposition, refrains from asserting its negation. We propose that superlative quantifiers are quantifiers over GRANTs. Thus, Mary petted at least three rabbits means that the minimal number n such that the speaker GRANTs that Mary petted n rabbits is n = 3. In other words, the speaker denies that Mary petted two, one, or no rabbits, but GRANTs that she petted more. We formalize this interpretation of superlative quantifiers in terms of commitment development spaces, and show how the truth conditions that are derived from it are partly entailed and partly conversationally implicated.

  20. Quantifying the influence of agricultural fires in northwest India on urban air pollution in Delhi, India

    Science.gov (United States)

    Cusworth, Daniel H.; Mickley, Loretta J.; Sulprizio, Melissa P.; Liu, Tianjia; Marlier, Miriam E.; DeFries, Ruth S.; Guttikunda, Sarath K.; Gupta, Pawan

    2018-04-01

    Since at least the 1980s, many farmers in northwest India have switched to mechanized combine harvesting to boost efficiency. This harvesting technique leaves abundant crop residue on the fields, which farmers typically burn to prepare their fields for subsequent planting. A key question is to what extent the large quantity of smoke emitted by these fires contributes to the already severe pollution in Delhi and across other parts of the heavily populated Indo-Gangetic Plain located downwind of the fires. Using a combination of observed and modeled variables, including surface measurements of PM2.5, we quantify the magnitude of the influence of agricultural fire emissions on surface air pollution in Delhi. With surface measurements, we first derive the signal of regional PM2.5 enhancements (i.e. the pollution above an anthropogenic baseline) during each post-monsoon burning season for 2012–2016. We next use the Stochastic Time-Inverted Lagrangian Transport model (STILT) to simulate surface PM2.5 using five fire emission inventories. We reproduce up to 25% of the weekly variability in total observed PM2.5 using STILT. Depending on year and emission inventory, our method attributes 7.0%–78% of the maximum observed PM2.5 enhancements in Delhi to fires. The large range in these attribution estimates points to the uncertainties in fire emission parameterizations, especially in regions where thick smoke may interfere with hotspots of fire radiative power. Although our model can generally reproduce the largest PM2.5 enhancements in Delhi air quality for 1–3 consecutive days each fire season, it fails to capture many smaller daily enhancements, which we attribute to the challenge of detecting small fires in the satellite retrieval. By quantifying the influence of upwind agricultural fire emissions on Delhi air pollution, our work underscores the potential health benefits of changes in farming practices to reduce fires.

  1. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    Introduction to Digital Signal and Image Processing. Signals and Biomedical Signal Processing: Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  2. Signal multiplexing scheme for LINAC

    International Nuclear Information System (INIS)

    Sujo, C.I.; Mohan, Shyam; Joshi, Gopal; Singh, S.K.; Karande, Jitendra

    2004-01-01

    For the proper operation of the LINAC some signals, RF (radio frequency) as well as LF (low frequency) have to be available at the Master Control Station (MCS). These signals are needed to control, calibrate and characterize the RF fields in the resonators. This can be achieved by proper multiplexing of various signals locally and then routing the selected signals to the MCS. A multiplexing scheme has been designed and implemented, which will allow the signals from the selected cavity to the MCS. High isolation between channels and low insertion loss for a given signal are important issues while selecting the multiplexing scheme. (author)

  3. Quantifying the microvascular origin of BOLD-fMRI from first principles with two-photon microscopy and an oxygen-sensitive nanoprobe.

    Science.gov (United States)

    Gagnon, Louis; Sakadžić, Sava; Lesage, Frédéric; Musacchia, Joseph J; Lefebvre, Joël; Fang, Qianqian; Yücel, Meryem A; Evans, Karleyton C; Mandeville, Emiri T; Cohen-Adad, Jülien; Polimeni, Jonathan R; Yaseen, Mohammad A; Lo, Eng H; Greve, Douglas N; Buxton, Richard B; Dale, Anders M; Devor, Anna; Boas, David A

    2015-02-25

    The blood oxygenation level-dependent (BOLD) contrast is widely used in functional magnetic resonance imaging (fMRI) studies aimed at investigating neuronal activity. However, the BOLD signal reflects changes in blood volume and oxygenation rather than neuronal activity per se. Therefore, understanding the transformation of microscopic vascular behavior into macroscopic BOLD signals is at the foundation of physiologically informed noninvasive neuroimaging. Here, we use oxygen-sensitive two-photon microscopy to measure the BOLD-relevant microvascular physiology occurring within a typical rodent fMRI voxel and predict the BOLD signal from first principles using those measurements. The predictive power of the approach is illustrated by quantifying variations in the BOLD signal induced by the morphological folding of the human cortex. This framework is then used to quantify the contribution of individual vascular compartments and other factors to the BOLD signal for different magnet strengths and pulse sequences. Copyright © 2015 the authors 0270-6474/15/353663-13$15.00/0.

  4. A new similarity index for nonlinear signal analysis based on local extrema patterns

    Science.gov (United States)

    Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher

    2018-02-01

    Common similarity measures of time domain signals such as cross-correlation and Symbolic Aggregate approximation (SAX) are not appropriate for nonlinear signal analysis. This is because of the high sensitivity of nonlinear systems to initial points. Therefore, a similarity measure for nonlinear signal analysis must be invariant to initial points and quantify the similarity by considering the main dynamics of signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding time information of local extrema as well as fuzzifying quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis, which extends the SBLE method. These new features provide more information about signals and reduce noise sensitivity by fuzzifying them. A number of practical tests were performed to demonstrate the ability of the method in nonlinear signal clustering and classification on synthetic data. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was done by the proposed similarity to feature the potentials of the method as a real-world application tool.
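
    In the spirit of the approach described (though not the authors' exact SBLE or fuzzified index), the sketch below extracts local extrema, quantizes their amplitudes into a few symbols, histograms short symbol patterns and compares the pattern distributions of two signals; the number of levels, pattern length and distance measure are assumptions.

      import numpy as np

      def extrema_pattern_histogram(x, n_levels=4, pattern_len=3):
          """Normalized histogram of quantized local-extrema patterns of a 1-D signal."""
          x = np.asarray(x, dtype=float)
          interior = x[1:-1]
          is_ext = ((interior > x[:-2]) & (interior > x[2:])) | \
                   ((interior < x[:-2]) & (interior < x[2:]))
          ext_vals = interior[is_ext]
          edges = np.quantile(ext_vals, np.linspace(0, 1, n_levels + 1)[1:-1])
          symbols = np.digitize(ext_vals, edges)            # amplitude symbols 0..n_levels-1
          hist = np.zeros(n_levels ** pattern_len)
          for i in range(len(symbols) - pattern_len + 1):
              code = 0
              for s in symbols[i:i + pattern_len]:
                  code = code * n_levels + int(s)
              hist[code] += 1
          return hist / max(hist.sum(), 1.0)

      def pattern_similarity(x, y):
          """Similarity in [0, 1]: one minus half the L1 distance between histograms."""
          return 1.0 - 0.5 * np.abs(extrema_pattern_histogram(x)
                                    - extrema_pattern_histogram(y)).sum()

      t = np.linspace(0, 20, 2000)
      print(pattern_similarity(np.sin(t) + 0.1 * np.random.randn(t.size),
                               np.sin(t + 1.0) + 0.1 * np.random.randn(t.size)))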

  5. Fit by Bits: An Explorative Study of Sports Physiotherapists' Perception of Quantified Self Technologies.

    Science.gov (United States)

    Allouch, Somaya Ben; van Velsen, Lex

    2018-01-01

    Our aim was to determine sports physiotherapists' attitudes towards Quantified Self technology usage and adoption and to analyze factors that may influence this attitude. A survey was used to study a sample in the Netherlands. Attitudes towards Quantified Self technology usage by clients of therapists and by therapists themselves were assessed, together with the intention to adopt Quantified Self technology. Results show that the uptake of Quantified Self technology by sports physiotherapists is rather low, but that the intention to adopt Quantified Self technology by sports physiotherapists is quite high. These results can provide a foundation for an infrastructure that allows sports physiotherapists to fulfill their wishes with regard to Quantified Self technology.

  6. Quantifying the relationship between financial news and the stock market.

    Science.gov (United States)

    Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

    2013-12-20

    The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked.
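
    The type of analysis described, a Pearson correlation between daily mention counts and daily transaction volume on the same day and on the preceding day, can be sketched as below; the series, column alignment and values are hypothetical.

      import numpy as np
      import pandas as pd

      # Hypothetical daily series for one company.
      rng = np.random.default_rng(2)
      days = pd.date_range("2007-01-02", periods=250, freq="B")
      mentions = pd.Series(rng.poisson(3, days.size), index=days)
      volume = pd.Series(1e6 + 2e5 * mentions + rng.normal(0, 1e5, days.size), index=days)

      # Same-day correlation, and correlation with the volume on the day before
      # the news is released (volume.shift(1) aligns day t-1 volume with day t mentions).
      print(mentions.corr(volume), mentions.corr(volume.shift(1)))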

  7. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    Science.gov (United States)

    Zhang, Jihui; Xu, Junqin

    Supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain's complexity and its efficiency, supply chain complexity management becomes a major challenge of today's business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a 'Supply Chain Network Analysis' (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and successively investigated by network analysis. The result of this study shows that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.

  8. Quantifying human response capabilities towards tsunami threats at community level

    Science.gov (United States)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    Decision makers at the community level need detailed information on tsunami risks in their area. Knowledge on potential hazard impact, exposed elements such as people, critical facilities and lifelines, people's coping capacity and recovery potential is crucial to plan precautionary measures for adaptation and to mitigate potential impacts of tsunamis on society and the environment. A crucial point within a people-centred tsunami risk assessment is to quantify the human response capabilities towards tsunami threats. Based on this quantification and its spatial representation in maps, tsunami-affected and safe areas, difficult-to-evacuate areas, evacuation target points and evacuation routes can be assigned and used as an important contribution to, for example, community-level evacuation planning. A major component in the quantification of human response capabilities towards tsunami impacts is the factor time. The human response capabilities depend on the estimated time of arrival (ETA) of a tsunami, the time until technical or natural warning signs (ToNW) can be received, the reaction time (RT) of the population (human understanding of a tsunami warning and the decision to take appropriate action), the evacuation time (ET, the time people need to reach a safe area) and the actual available response time (RsT = ETA - ToNW - RT). If RsT is larger than ET, people in the respective areas are able to reach a safe area and rescue themselves. Critical areas possess RsT values equal to or even smaller than ET, and hence people within these areas will be directly affected by a tsunami. Quantifying the factor time is challenging, and an attempt to do so is presented here. The ETA can be derived by analyzing pre-computed tsunami scenarios for a respective area. For ToNW we assume that the early warning center is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes. RT is difficult, as here human intrinsic factors such as educational level, beliefs, tsunami knowledge and experience
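
    The timing relation given above (RsT = ETA - ToNW - RT, with an area critical when RsT is not larger than ET) can be applied directly; the numbers in the example below are purely illustrative.

      def response_time_margin(eta_min, to_nw_min, reaction_min, evac_min):
          """Available response time RsT = ETA - ToNW - RT and its margin over the
          evacuation time ET; a non-positive margin marks a critical area."""
          rst = eta_min - to_nw_min - reaction_min
          return rst, rst - evac_min

      # Hypothetical community: tsunami arrives after 35 min, warning issued within
      # 5 min, people need 10 min to react and 25 min to reach safe ground.
      rst, margin = response_time_margin(eta_min=35, to_nw_min=5, reaction_min=10, evac_min=25)
      print(rst, margin)   # margin < 0: the area is critical (directly affected)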

  9. Quantifying cardiovascular disease risk factors in patients with psoriasis

    DEFF Research Database (Denmark)

    Miller, I M; Skaaby, T; Ellervik, C

    2013-01-01

    BACKGROUND: In a previous meta-analysis on categorical data we found an association between psoriasis and cardiovascular disease and associated risk factors. OBJECTIVES: To quantify the level of cardiovascular disease risk factors in order to provide additional data for the clinical management...... of the increased risk. METHODS: This was a meta-analysis of observational studies with continuous outcome using random-effects statistics. A systematic search of studies published before 25 October 2012 was conducted using the databases Medline, EMBASE, International Pharmaceutical Abstracts, PASCAL and BIOSIS......·65 mmol L(-1) )] and a higher HbA1c [1·09 mmol mol(-1) , 95% CI 0·87-1·31, P controls are significant, and therefore relevant to the clinical management of patients with psoriasis....

  10. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [Université de Tunis El Manar, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Federick T. [University of Memphis; Mili, Ali [New Jersey Insitute of Technology

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area that monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.

  11. Quantifying chemical uncertainties in simulations of the ISM

    Science.gov (United States)

    Glover, Simon

    2018-06-01

    The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically-structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data are most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.

  12. Quantified safety objectives in high technology: Meaning and demonstration

    International Nuclear Information System (INIS)

    Vinck, W.F.; Gilby, E.; Chicken, J.

    1986-01-01

    An overview and trend analysis are given of the types of quantified criteria and objectives that are presently applied, or envisaged and discussed, in Europe in nuclear applications, more specifically Nuclear Power Plants (NPPs), and in non-nuclear applications, more specifically the chemical and petrochemical process industry. Some comparative deductions are made. Attention is paid to the similarities or discrepancies between such criteria and objectives and to problems associated with demonstrating that they are implemented. The role of the cost-effectiveness of risk reduction is briefly discussed, and mention is made of a study into combining the technical, economic and socio-political factors that play a role in risk acceptance.

  13. Quantifying ground impact fatality rate for small unmanned aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    One of the major challenges of conducting operations of unmanned aircraft, especially operations beyond visual line-of-sight (BVLOS), is to make a realistic and sufficiently detailed risk assessment. An important part of such an assessment is to identify the risk of fatalities, preferably in a quantitative way, since this allows for comparison with manned aviation to determine whether an equivalent level of safety is achievable. This work presents a method for quantifying the probability of fatalities resulting from an uncontrolled descent of an unmanned aircraft conducting a BVLOS flight. The method is based on a standard stochastic model and employs a parameterized high-fidelity ground impact distribution model that accounts for aircraft specifications, parameter uncertainties, and wind. The method also samples the flight path to create an almost continuous quantification of the risk.

  14. Quantifying tidally driven benthic oxygen exchange across permeable sediments

    DEFF Research Database (Denmark)

    McGinnis, Daniel F.; Sommer, Stefan; Lorke, Andreas

    2014-01-01

    Continental shelves are predominately (approximately 70%) covered with permeable, sandy sediments. While identified as critical sites for intense oxygen, carbon, and nutrient turnover, constituent exchange across permeable sediments remains poorly quantified. The central North Sea largely consists of permeable sediments and has been identified as increasingly at risk for developing hypoxia. Therefore, we investigate the benthic O2 exchange across the permeable North Sea sediments using a combination of in situ microprofiles, a benthic chamber, and aquatic eddy correlation. Tidal bottom currents drive the variable sediment O2 penetration depth (from approximately 3 to 8 mm) and the concurrent turbulence-driven 25-fold variation in the benthic sediment O2 uptake. The O2 flux and variability were reproduced using a simple 1-D model linking the benthic turbulence to the sediment pore water exchange...

  15. Planck and the local Universe: quantifying the tension

    CERN Document Server

    Verde, Licia; Protopapas, Pavlos

    2013-01-01

    We use the latest Planck constraints, and in particular constraints on the derived parameters (Hubble constant and age of the Universe) for the local universe, and compare them with local measurements of the same quantities. We propose a way to quantify whether cosmological parameter constraints from two different experiments are in tension or not. Our statistic, T, is an evidence ratio and therefore can be interpreted with the widely used Jeffreys' scale. We find that in the framework of the LCDM model, the Planck-inferred two-dimensional joint posterior distribution for the Hubble constant and age of the Universe is in "strong" tension with the local measurements, the odds being ~ 1:50. We explore several possibilities for explaining this tension and examine the consequences both in terms of unknown errors and deviations from the LCDM model. In some one-parameter LCDM model extensions, tension is reduced, whereas in other extensions, tension is instead increased. In particular, small total neutrino masses ...

  16. National survey describing and quantifying students with communication needs.

    Science.gov (United States)

    Andzik, Natalie R; Schaefer, John M; Nichols, Robert T; Chung, Yun-Ching

    2018-01-01

    Research literature has yet to quantify and describe how students with complex communication needs are supported in the classroom and how special educators are being prepared to offer support. This study sought out special educators to complete a survey about their students with complex communication needs. Over 4,000 teachers representing 50 states reported on the communicative and behavioral characteristics of 15,643 students. Teachers described the training they have received and the instructional approaches they used. The majority of students were reported to use speech as their primary communication mode. Over half of the students utilizing alternative and augmentative communication (AAC) were reported to have non-proficient communication. Teacher training varied across respondents, as did the supports they used for these students in the classroom. Across the nation, the majority of students with disabilities who use AAC are not communicating proficiently. Implications and recommendations will be discussed.

  17. Gradient approach to quantify the gradation smoothness for output media

    Science.gov (United States)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method expressed as a function of the 95th percentile of the second derivative, for the tone-jump estimator, and the 5th percentile of the first derivative, for the tone-clipping estimator. The performance of the model and of a previously suggested method was assessed psychophysically, and their prediction accuracies were compared to each other. Our model showed a stronger Pearson correlation to the corresponding visual data, with the magnitude of the Pearson correlation reaching up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
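
    The two estimators named above can be sketched directly on a measured lightness ramp: the 95th percentile of the second derivative as the tone-jump indicator and the 5th percentile of the first derivative as the tone-clipping indicator. How the model combines them into the final smoothness score is not reproduced here, and the ramp below is synthetic.

      import numpy as np

      def gradation_estimators(lightness):
          """Tone-jump and tone-clipping estimators for a gradation ramp.

          lightness: 1-D array of lightness values sampled along the ramp.
          Returns (tone_jump, tone_clip): the 95th percentile of |second derivative|
          and the 5th percentile of |first derivative|.
          """
          l_values = np.asarray(lightness, dtype=float)
          d1 = np.abs(np.diff(l_values, n=1))
          d2 = np.abs(np.diff(l_values, n=2))
          return np.percentile(d2, 95), np.percentile(d1, 5)

      # Hypothetical ramp with a small tone jump in the middle:
      ramp = np.linspace(20, 80, 128)
      ramp[64:] += 2.0
      print(gradation_estimators(ramp))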

  18. Quantifying intrinsic quality of commercial varieties of Basmati rice

    International Nuclear Information System (INIS)

    Sagar, M.A.; Salim, M.; Siddiqui, H.

    2003-01-01

    Twelve quality traits of five commercial varieties of Basmati rice, viz. Basmati 370, Basmati 385, Basmati 198, Basmati 6129 and Super Basmati, were determined according to standard methods. Under existing conditions it is very difficult to assess the overall quality of Basmati rice varieties, because a variety that is superior to another in one quality trait may be inferior in a different trait, which creates confusion. In order to determine the overall quality status, we have made an effort to quantify the various quality traits. Each quality trait has been allotted a score in terms of its importance, and the overall status of each variety is computed. Based on our estimation, Super Basmati is of the highest quality (98.21%), followed by Basmati 6129 (96.46%), Basmati 370 (95.59%), Basmati 385 (95.26%) and Basmati 198 (92.95%). (author)

  19. GRAPH THEORY APPROACH TO QUANTIFY UNCERTAINTY OF PERFORMANCE MEASURES

    Directory of Open Access Journals (Sweden)

    Sérgio D. Sousa

    2015-03-01

    Full Text Available In this work, the performance measurement process is studied in order to quantify the uncertainty induced in the resulting performance measure (PM). To that end, the causes of uncertainty are identified by analysing the activities undertaken in the three stages of the performance measurement process: design and implementation; data collection and recording; and determination and analysis. A quantitative methodology based on graph theory and on the sources of uncertainty of the performance measurement process is used to calculate an uncertainty index to evaluate the level of uncertainty of a given PM or key performance indicator (KPI). An application example is presented. The quantification of PM uncertainty could contribute to better representing the risk associated with a given decision and also to improving the PM to increase its precision and reliability.

  20. Quantifying the limits of transition state theory in enzymatic catalysis.

    Science.gov (United States)

    Zinovjev, Kirill; Tuñón, Iñaki

    2017-11-21

    While being one of the most popular reaction rate theories, the applicability of transition state theory to the study of enzymatic reactions has been often challenged. The complex dynamic nature of the protein environment raised the question about the validity of the nonrecrossing hypothesis, a cornerstone in this theory. We present a computational strategy to quantify the error associated to transition state theory from the number of recrossings observed at the equicommittor, which is the best possible dividing surface. Application of a direct multidimensional transition state optimization to the hydride transfer step in human dihydrofolate reductase shows that both the participation of the protein degrees of freedom in the reaction coordinate and the error associated to the nonrecrossing hypothesis are small. Thus, the use of transition state theory, even with simplified reaction coordinates, provides a good theoretical framework for the study of enzymatic catalysis. Copyright © 2017 the Author(s). Published by PNAS.

  1. Quantifying entanglement in two-mode Gaussian states

    Science.gov (United States)

    Tserkis, Spyros; Ralph, Timothy C.

    2017-12-01

    Entangled two-mode Gaussian states are a key resource for quantum information technologies such as teleportation, quantum cryptography, and quantum computation, so quantification of Gaussian entanglement is an important problem. Entanglement of formation is unanimously considered a proper measure of quantum correlations, but for arbitrary two-mode Gaussian states no analytical form is currently known. In contrast, logarithmic negativity is a measure that is straightforward to calculate and so has been adopted by most researchers, even though it is a less faithful quantifier. In this work, we derive an analytical lower bound for entanglement of formation of generic two-mode Gaussian states, which becomes tight for symmetric states and for states with balanced correlations. We define simple expressions for entanglement of formation in physically relevant situations and use these to illustrate the problematic behavior of logarithmic negativity, which can lead to spurious conclusions.
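
    For reference, the logarithmic negativity mentioned above has a simple closed form for two-mode Gaussian states. This is a standard relation rather than something stated in the record, written here in the convention where the vacuum quadrature variance equals 1/2; \tilde{\nu}_{-} denotes the smallest symplectic eigenvalue of the partially transposed covariance matrix:

      E_{\mathcal{N}} \;=\; \max\!\left[\, 0,\; -\ln\!\left( 2\,\tilde{\nu}_{-} \right) \right],
      \qquad \text{the state being entangled exactly when } \tilde{\nu}_{-} < \tfrac{1}{2}.

    Entanglement of formation, by contrast, admits no such closed form for arbitrary two-mode Gaussian states, which is the gap addressed by the lower bound derived in this work.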

  2. Quantifying phenomenological importance in best-estimate plus uncertainty analyses

    International Nuclear Information System (INIS)

    Martin, Robert P.

    2009-01-01

    This paper describes a general methodology for quantifying the importance of specific phenomenological elements to analysis measures evaluated from non-parametric best-estimate plus uncertainty evaluation methodologies. The principal objective of an importance analysis is to reveal those uncertainty contributors having the greatest influence on key analysis measures. This characterization supports the credibility of the uncertainty analysis, the applicability of the analytical tools, and even the generic evaluation methodology through the validation of the engineering judgments that guided the evaluation methodology development. A demonstration of the importance analysis is provided using data from a sample problem considered in the development of AREVA's Realistic LBLOCA methodology. The results are presented against the original large-break LOCA Phenomena Identification and Ranking Table developed by the Technical Program Group responsible for authoring the Code Scaling, Applicability and Uncertainty methodology. (author)

  3. Quantified risk assessment for hazardous industry: The Australian approach

    International Nuclear Information System (INIS)

    Haddad, S.

    1994-01-01

    The paper presents the key conceptual and methodological aspects of Quantified Risk Assessment (QRA) and Hazard Analysis techniques as applied in the process industry, mostly in New South Wales, Australia. Variations in the range of applications of the techniques between the nuclear and non-nuclear industries are highlighted. The opportunity is taken to discuss current and future issues and trends concerning QRA, including: uncertainties and limitations; acceptability of risk criteria; toxicity and chronic health effects; new technology; modelling topics; and environmental risk. The paper concludes by indicating that the next generation QRA, as applicable to Australian conditions in particular, will benefit from a rethink in two areas: a multi-level approach to QRA, and a range of not fully explored applications.

  4. Quantified risk assessment for hazardous industry: the Australian approach

    International Nuclear Information System (INIS)

    Haddad, S.

    1994-01-01

    The paper presents the key conceptual and methodological aspects of Quantified Risk Assessment (QRA) and Hazard Analysis techniques as applied in the process industry, mostly in New South Wales, Australia. Variations in the range of applications of the techniques between the nuclear and non-nuclear industries are highlighted. The opportunity is taken to discuss current and future issues and trends concerning QRA, including: uncertainties and limitations; acceptability of risk criteria; toxicity and chronic health effects; new technology; modelling topics; and environmental risk. The paper concludes by indicating that the next generation QRA, as applicable to Australian conditions in particular, will benefit from a rethink in two areas: a multi-level approach to QRA, and a range of not fully explored applications. 8 refs., 2 tabs

  5. Quantifying the BICEP2-Planck tension over gravitational waves.

    Science.gov (United States)

    Smith, Kendrick M; Dvorkin, Cora; Boyle, Latham; Turok, Neil; Halpern, Mark; Hinshaw, Gary; Gold, Ben

    2014-07-18

    The recent BICEP2 measurement of B-mode polarization in the cosmic microwave background (r = 0.2, +0.07/-0.05), a possible indication of primordial gravity waves, appears to be in tension with the upper limit from WMAP (r < 0.13 at 95% C.L.) and Planck (r < 0.11 at 95% C.L.). We carefully quantify the level of tension and show that it is very significant (around 0.1% unlikely) when the observed deficit of large-scale temperature power is taken into account. We show that measurements of TE and EE power spectra in the near future will discriminate between the hypotheses that this tension is either a statistical fluke or a sign of new physics. We also discuss extensions of the standard cosmological model that relieve the tension and some novel ways to constrain them.

  6. Quantifying the Beauty of Words: A Neurocognitive Poetics Perspective

    Directory of Open Access Journals (Sweden)

    Arthur M. Jacobs

    2017-12-01

    In this paper I would like to pave the ground for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA), and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with this QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials.

  7. Quantifying the Beauty of Words: A Neurocognitive Poetics Perspective.

    Science.gov (United States)

    Jacobs, Arthur M

    2017-01-01

    In this paper I would like to pave the ground for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA), and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with this QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials.
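
    As a rough illustration of the kind of pipeline described above, the sketch below trains a shallow decision tree on synthetic stand-ins for the QNA word features; the features, labels and hyperparameters are all hypothetical placeholders, not the published feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the QNA data: each word gets 8 numeric features
# (imagine length, sonority, valence, imageability, ...).
n_words, n_features = 300, 8
X = rng.normal(size=(n_words, n_features))
# Toy "beautiful vs. ugly" labels loosely driven by two of the features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n_words) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated beautiful/ugly split accuracy
```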

  8. Quantifying and analysing food waste generated by Indonesian undergraduate students

    Science.gov (United States)

    Mandasari, P.

    2018-03-01

    Despite the fact that the environmental consequences of food waste are widely known, studies on the amount of food waste and the factors influencing it have received relatively little attention. Addressing this shortage, this paper aimed to quantify monthly avoidable food waste generated by Indonesian undergraduate students and analyse factors influencing the occurrence of avoidable food waste. Based on data from 106 undergraduate students, descriptive statistics and logistic regression were applied in this study. The results indicated that 4,987.5 g of food waste was generated in a month (equal to 59,850 g yearly), or 47.05 g per person monthly (equal to 564.62 g per person per year). Meanwhile, eating out frequency and gender were found to be significant predictors of food waste occurrence.

  9. Quantifying greenhouse gas emissions from waste treatment facilities

    DEFF Research Database (Denmark)

    Mønster, Jacob

    to be installed in any vehicle and thereby enabling measurements wherever there were roads. The validation of the measurement method was done by releasing a controlled amount of methane and quantifying the emission using the release of tracer gas. The validation test showed that even in areas with large...... treatment plants. The PhD study reviewed and evaluated previously used methane measurement methods and found the tracer dispersion method promising. The method uses release of tracer gas and the use of mobile equipment with high analytical sensitivity, to measure the downwind plumes of methane and tracer...... ranged from 10 to 92 kg per hour and was found to change on even short timescales of a few hours. The periods with large emissions correlated with a drop in methane utilization, indicating that emissions came from the digester tanks or gas storage/use. The measurements indicated that the main emissions...

  10. Quantifying population genetic differentiation from next-generation sequencing data

    DEFF Research Database (Denmark)

    Fumagalli, Matteo; Garrett Vieira, Filipe Jorge; Korneliussen, Thorfinn Sand

    2013-01-01

    method for quantifying population genetic differentiation from next-generation sequencing data. In addition, we present a strategy to investigate population structure via Principal Components Analysis. Through extensive simulations, we compare the new method herein proposed to approaches based...... on genotype calling and demonstrate a marked improvement in estimation accuracy for a wide range of conditions. We apply the method to a large-scale genomic data set of domesticated and wild silkworms sequenced at low coverage. We find that we can infer the fine-scale genetic structure of the sampled......Over the last few years, new high-throughput DNA sequencing technologies have dramatically increased speed and reduced sequencing costs. However, the use of these sequencing technologies is often challenged by errors and biases associated with the bioinformatical methods used for analyzing the data...

  11. Word embeddings quantify 100 years of gender and ethnic stereotypes.

    Science.gov (United States)

    Garg, Nikhil; Schiebinger, Londa; Jurafsky, Dan; Zou, James

    2018-04-17

    Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts (e.g., the women's movement in the 1960s and Asian immigration into the United States) and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science.
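
    A simple way to turn such embeddings into a bias score is to compare how close a set of target words (e.g. occupations) sits to two demographic word groups. The sketch below uses an average cosine-similarity difference as an illustrative association measure; it is not the exact relative-norm metric of the paper, and the toy embedding, word lists and vector dimension are placeholders.

```python
import numpy as np

def association(embedding, target_words, group_a, group_b):
    """Mean difference in cosine similarity between each target word (e.g. an
    occupation) and the centroids of two demographic word groups; positive values
    mean the targets sit closer to group A. Vectors are assumed unit-normalised."""
    def centroid(words):
        v = np.mean([embedding[w] for w in words], axis=0)
        return v / np.linalg.norm(v)

    a, b = centroid(group_a), centroid(group_b)
    return float(np.mean([embedding[w] @ a - embedding[w] @ b for w in target_words]))

# Toy random embedding; in practice one embedding per decade, trained on that
# decade's corpus, would be loaded and the score tracked over time.
rng = np.random.default_rng(0)
emb = {}
for w in ["she", "her", "woman", "he", "him", "man", "nurse", "engineer"]:
    v = rng.normal(size=50)
    emb[w] = v / np.linalg.norm(v)

print(association(emb, ["nurse", "engineer"], ["she", "her", "woman"], ["he", "him", "man"]))
```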

  12. Quantifying light exposure patterns in young adult students

    Science.gov (United States)

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2013-08-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects' estimates of time spent indoors and outdoors. Subjects' estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires.

  13. Quantifying the global cellular thiol-disulfide status

    DEFF Research Database (Denmark)

    Hansen, Rosa E; Roth, Doris; Winther, Jakob R

    2009-01-01

    It is widely accepted that the redox status of protein thiols is of central importance to protein structure and folding and that glutathione is an important low-molecular-mass redox regulator. However, the total cellular pools of thiols and disulfides and their relative abundance have never been...... determined. In this study, we have assembled a global picture of the cellular thiol-disulfide status in cultured mammalian cells. We have quantified the absolute levels of protein thiols, protein disulfides, and glutathionylated protein (PSSG) in all cellular protein, including membrane proteins. These data...... cell types. However, when cells are exposed to a sublethal dose of the thiol-specific oxidant diamide, PSSG levels increase to >15% of all protein cysteine. Glutathione is typically characterized as the "cellular redox buffer"; nevertheless, our data show that protein thiols represent a larger active...

  14. Quantifying chaotic dynamics from integrate-and-fire processes

    Energy Technology Data Exchange (ETDEWEB)

    Pavlov, A. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Saratov State Technical University, Politehnicheskaya Str. 77, 410054 Saratov (Russian Federation); Pavlova, O. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Mohammad, Y. K. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Tikrit University Salahudin, Tikrit Qadisiyah, University Str. P.O. Box 42, Tikrit (Iraq); Kurths, J. [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easy at high firing rates. When the firing rate is low, a correct estimation of the Lyapunov exponents (LEs) describing the dynamical features of the complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimation of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.
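
    The integrate-and-fire transformation itself is straightforward to reproduce: the observed signal is integrated until a threshold is reached, a spike time is recorded, and the integrator is reset. The sketch below applies this to a chaotic Rössler trajectory to produce an ISI sequence in a low-firing-rate regime (the threshold and integration settings are arbitrary placeholders); estimating Lyapunov exponents from the resulting ISIs, the actual subject of the paper, is not shown.

```python
import numpy as np

# --- a chaotic signal: x-component of the Roessler system, integrated with RK4 ---
def rossler(s, a=0.15, b=0.2, c=10.0):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(s, dt, f):
    k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 0.02, 100_000
state = np.array([1.0, 1.0, 1.0])
x = np.empty(nsteps)
for i in range(nsteps):
    state = rk4_step(state, dt, rossler)
    x[i] = state[0]

# --- integrate-and-fire: accumulate the (shifted, positive) signal until a threshold
#     is crossed, record the spike time, subtract the threshold, and continue ---
signal = x - x.min() + 1.0        # integrand must stay positive
threshold = 300.0                 # large threshold -> low firing rate (the hard case)
acc, last_spike, isis = 0.0, 0.0, []
for i, value in enumerate(signal):
    acc += value * dt
    if acc >= threshold:
        t = (i + 1) * dt
        isis.append(t - last_spike)
        last_spike, acc = t, acc - threshold

isis = np.array(isis)
print(len(isis), isis.mean())     # the ISI sequence that an LE estimator would receive
```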

  15. Quantifying mast cells in bladder pain syndrome by immunohistochemical analysis

    DEFF Research Database (Denmark)

    Larsen, M.S.; Mortensen, S.; Nordling, J.

    2008-01-01

    OBJECTIVES To evaluate a simple method for counting mast cells, thought to have a role in the pathophysiology of bladder pain syndrome (BPS, formerly interstitial cystitis, a syndrome of pelvic pain perceived to be related to the urinary bladder and accompanied by other urinary symptoms, e.g....... frequency and nocturia), as > 28 mast cells/mm² is defined as mastocytosis and correlated with clinical outcome. PATIENTS AND METHODS The current enzymatic staining method (naphtolesterase) on 10 µm sections for quantifying mast cells is complicated. In the present study, 61 patients had detrusor...... sections between, respectively. Mast cells were counted according to a well-defined procedure. RESULTS The old and the new methods, on 10 and 3 µm sections, showed a good correlation between mast cell counts. When using tryptase staining and 3 µm sections, the mast cell number correlated well...

  16. Methods for quantifying T cell receptor binding affinities and thermodynamics

    Science.gov (United States)

    Piepenbrink, Kurt H.; Gloor, Brian E.; Armstrong, Kathryn M.; Baker, Brian M.

    2013-01-01

    αβ T cell receptors (TCRs) recognize peptide antigens bound and presented by class I or class II major histocompatibility complex (MHC) proteins. Recognition of a peptide/MHC complex is required for initiation and propagation of a cellular immune response, as well as the development and maintenance of the T cell repertoire. Here we discuss methods to quantify the affinities and thermodynamics of interactions between soluble ectodomains of TCRs and their peptide/MHC ligands, focusing on titration calorimetry, surface plasmon resonance, and fluorescence anisotropy. As TCRs typically bind ligand with weak-to-moderate affinities, we focus the discussion on means to enhance the accuracy and precision of low affinity measurements. In addition to further elucidating the biology of the T cell mediated immune response, more reliable low affinity measurements will aid with more probing studies with mutants or altered peptides that can help illuminate the physical underpinnings of how TCRs achieve their remarkable recognition properties. PMID:21609868

  17. Current challenges in quantifying preferential flow through the vadose zone

    Science.gov (United States)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  18. Quantifiably secure power grid operation, management, and evolution :

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne.; Watson, Jean-Paul; Silva Monroy, Cesar Augusto; Gramacy, Robert B.

    2013-09-01

    This report summarizes findings and results of the Quantifiably Secure Power Grid Operation, Management, and Evolution LDRD. The focus of the LDRD was to develop decision-support technologies to enable rational and quantifiable risk management for two key grid operational timescales: scheduling (day-ahead) and planning (month-to-year-ahead). Risk or resiliency metrics are foundational in this effort. The 2003 Northeast Blackout investigative report stressed the criticality of enforceable metrics for system resiliency: the grid's ability to satisfy demands subject to perturbation. However, we neither have well-defined risk metrics for addressing the pervasive uncertainties in a renewable energy era, nor decision-support tools for their enforcement, which severely impacts efforts to rationally improve grid security. For day-ahead unit commitment, decision-support tools must account for topological security constraints, loss-of-load (economic) costs, and supply and demand variability, especially given high renewables penetration. For long-term planning, transmission and generation expansion must ensure realized demand is satisfied for various projected technological, climate, and growth scenarios. The decision-support tools investigated in this project paid particular attention to tail-oriented risk metrics for explicitly addressing high-consequence events. Historically, decision-support tools for the grid consider expected cost minimization, largely ignoring risk and instead penalizing loss-of-load through artificial parameters. The technical focus of this work was the development of scalable solvers for enforcing risk metrics. Advanced stochastic programming solvers were developed to address generation and transmission expansion and unit commitment, minimizing cost subject to pre-specified risk thresholds. Particular attention was paid to renewables, where security critically depends on production and demand prediction accuracy. To address this

  19. THE SEGUE K GIANT SURVEY. III. QUANTIFYING GALACTIC HALO SUBSTRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Janesh, William; Morrison, Heather L.; Ma, Zhibo; Harding, Paul [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); Rockosi, Constance [UCO/Lick Observatory, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Starkenburg, Else [Department of Physics and Astronomy, University of Victoria, P.O. Box 1700, STN CSC, Victoria BC V8W 3P6 (Canada); Xue, Xiang Xiang; Rix, Hans-Walter [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Beers, Timothy C. [Department of Physics and JINA Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN 46556 (United States); Johnson, Jennifer [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Lee, Young Sun [Department of Astronomy and Space Science, Chungnam National University, Daejeon 34134 (Korea, Republic of); Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2016-01-10

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5–125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey’s Sloan Extension for Galactic Understanding and Exploration project. Using a position–velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (∼33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity.

  20. Quantifying center of pressure variability in chondrodystrophoid dogs.

    Science.gov (United States)

    Blau, S R; Davis, L M; Gorney, A M; Dohse, C S; Williams, K D; Lim, J-H; Pfitzner, W G; Laber, E; Sawicki, G S; Olby, N J

    2017-08-01

    The center of pressure (COP) position reflects a combination of proprioceptive, motor and mechanical function. As such, it can be used to quantify and characterize neurologic dysfunction. The aim of this study was to describe and quantify the movement of COP and its variability in healthy chondrodystrophoid dogs while walking, to provide a baseline for comparison to dogs with spinal cord injury due to acute intervertebral disc herniations. Fifteen healthy adult chondrodystrophoid dogs were walked on an instrumented treadmill that recorded the location of each dog's COP as it walked. COP was referenced from an anatomical marker on the dogs' back. The root mean squared (RMS) values of changes in COP location in the sagittal (y) and horizontal (x) directions were calculated to determine the range of COP variability. Three dogs would not walk on the treadmill. One dog was too small to collect interpretable data. From the remaining 11 dogs, 206 trials were analyzed. Mean RMS for change in COPx per trial was 0.0138 (standard deviation, SD 0.0047) and for COPy was 0.0185 (SD 0.0071). Walking speed but not limb length had a significant effect on COP RMS. Repeat measurements in six dogs had high test-retest consistency in the x direction and fair consistency in the y direction. In conclusion, COP variability can be measured consistently in dogs, and a range of COP variability for normal chondrodystrophoid dogs has been determined to provide a baseline for future studies on dogs with spinal cord injury. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    Science.gov (United States)

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help
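
    The basic climatology/anomaly bookkeeping behind such metrics can be sketched in a few lines: compute the long-term mean for each calendar month, take its spread as the climatological range, and define anomalies as departures from the monthly means. The example below does this for a synthetic monthly sea-surface-temperature series; it is a generic illustration, not the island-scale, satellite-derived metrics of the study.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly sea-surface-temperature record for one reef location.
rng = np.random.default_rng(1)
idx = pd.date_range("2000-01", "2012-12", freq="MS")
sst = pd.Series(26.0 + 2.0 * np.sin(2 * np.pi * (idx.month - 3) / 12)
                + rng.normal(0.0, 0.3, len(idx)), index=idx, name="sst")

# Climatology: long-term mean for each calendar month, and its overall range.
clim = sst.groupby(sst.index.month).mean()
clim_range = (clim.min(), clim.max())

# Anomaly: each observation minus the climatological mean of its calendar month.
anom = sst - clim.reindex(sst.index.month).to_numpy()

print("climatological range:", clim_range)
print(anom[anom.abs() > 2 * anom.std()])   # months flagged as anomalous forcing
```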

  2. Quantifying polypeptide conformational space: sensitivity to conformation and ensemble definition.

    Science.gov (United States)

    Sullivan, David C; Lim, Carmay

    2006-08-24

    Quantifying the density of conformations over phase space (the conformational distribution) is needed to model important macromolecular processes such as protein folding. In this work, we quantify the conformational distribution for a simple polypeptide (N-mer polyalanine) using the cumulative distribution function (CDF), which gives the probability that two randomly selected conformations are separated by less than a "conformational" distance and whose inverse gives conformation counts as a function of conformational radius. An important finding is that the conformation counts obtained by the CDF inverse depend critically on the assignment of a conformation's distance span and the ensemble (e.g., unfolded state model): varying ensemble and conformation definition (1 → 2 Å) varies the CDF-based conformation counts for Ala50 from 10^11 to 10^69. In particular, relatively short molecular dynamics (MD) relaxation of Ala50's random-walk ensemble reduces the number of conformers from 10^55 to 10^14 (using a 1 Å root-mean-square-deviation radius conformation definition), pointing to potential disconnections in comparing the results from simplified models of unfolded proteins with those from all-atom MD simulations. Explicit waters are found to roughen the landscape considerably. Under some common conformation definitions, the results herein provide (i) an upper limit to the number of accessible conformations that compose unfolded states of proteins, (ii) the optimal clustering radius/conformation radius for counting conformations for a given energy and solvent model, (iii) a means of comparing various studies, and (iv) an assessment of the applicability of random search in protein folding.
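
    The CDF construction described above is easy to mimic on toy data: compute all pairwise conformational distances, evaluate the fraction of pairs closer than a chosen radius, and take its reciprocal as an approximate conformation count at that radius. The sketch below uses random vectors as stand-in "conformations" and a per-coordinate RMS distance; real applications would use RMSD between sampled structures, and all sizes here are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Stand-in ensemble: 2000 "conformations", each a flattened coordinate vector.
ensemble = rng.normal(size=(2000, 30))

# Pairwise conformational distances (per-coordinate RMS; real work would use RMSD).
d = pdist(ensemble) / np.sqrt(ensemble.shape[1])

def cdf(radius):
    """Probability that two randomly chosen conformations lie within `radius`."""
    return np.mean(d < radius)

for r in (0.8, 1.0, 1.2):
    p = cdf(r)
    count = np.inf if p == 0 else 1.0 / p   # inverse CDF ~ number of distinct conformations
    print(f"radius {r}: CDF = {p:.3e}, approx. conformation count = {count:.3e}")
```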

  3. A robust nonparametric method for quantifying undetected extinctions.

    Science.gov (United States)

    Chisholm, Ryan A; Giam, Xingli; Sadanandan, Keren R; Fung, Tak; Rheindt, Frank E

    2016-06-01

    How many species have gone extinct in modern times before being described by science? To answer this question, and thereby get a full assessment of humanity's impact on biodiversity, statistical methods that quantify undetected extinctions are required. Such methods have been developed recently, but they are limited by their reliance on parametric assumptions; specifically, they assume the pools of extant and undetected species decay exponentially, whereas real detection rates vary temporally with survey effort and real extinction rates vary with the waxing and waning of threatening processes. We devised a new, nonparametric method for estimating undetected extinctions. As inputs, the method requires only the first and last date at which each species in an ensemble was recorded. As outputs, the method provides estimates of the proportion of species that have gone extinct, detected, or undetected and, in the special case where the number of undetected extant species in the present day is assumed close to zero, of the absolute number of undetected extinct species. The main assumption of the method is that the per-species extinction rate is independent of whether a species has been detected or not. We applied the method to the resident native bird fauna of Singapore. Of 195 recorded species, 58 (29.7%) have gone extinct in the last 200 years. Our method projected that an additional 9.6 species (95% CI 3.4, 19.8) have gone extinct without first being recorded, implying a true extinction rate of 33.0% (95% CI 31.0%, 36.2%). We provide R code for implementing our method. Because our method does not depend on strong assumptions, we expect it to be broadly useful for quantifying undetected extinctions. © 2016 Society for Conservation Biology.

  4. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    Science.gov (United States)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  5. Quantifying changes and influences on mottled duck density in Texas

    Science.gov (United States)

    Ross, Beth; Haukos, David A.; Walther, Patrick

    2018-01-01

    Understanding the relative influence of environmental and intrinsic effects on populations is important for managing and conserving harvested species, especially those species inhabiting changing environments. Additionally, climate change can increase the uncertainty associated with management of species in these changing environments, making understanding factors affecting their populations even more important. Coastal ecosystems are particularly threatened by climate change; the combined effects of increasing severe weather events, sea level rise, and drought will likely have non-linear effects on coastal marsh wildlife species and their associated habitats. A species of conservation concern that persists in these coastal areas is the mottled duck (Anas fulvigula). Mottled ducks in the western Gulf Coast are approximately 50% below target abundance numbers established by the Gulf Coast Joint Venture for Texas and Louisiana, USA. Although evidence for declines in mottled duck abundance is apparent, specific causes of the decrease remain unknown. Our goals were to determine where the largest declines in mottled duck population were occurring along the system of Texas Gulf Coast National Wildlife Refuges and quantify the relative contribution of environmental and intrinsic effects on changes to relative population density. We modeled aerial survey data of mottled duck density along the Texas Gulf Coast from 1986–2015 to quantify effects of extreme weather events on an index to mottled duck density using the United States Climate Extremes Index and Palmer Drought Severity Index. Our results indicate that decreases in abundance are best described by an increase in days with extreme 1-day precipitation from June to November (hurricane season) and an increase in drought severity. Better understanding those portions of the life cycle affected by environmental conditions, and how to manage mottled duck habitat in conjunction with these events will likely be key to

  6. Quantifying the emissions reduction effectiveness and costs of oxygenated gasoline

    International Nuclear Information System (INIS)

    Lyons, C.E.

    1993-01-01

    During the fall, winter, and spring of 1991-1992, a measurement program was conducted in Denver, Colorado to quantify the technical and economic effectiveness of oxygenated gasoline in reducing automobile carbon monoxide (CO) emissions. Emissions from 80,000 vehicles under a variety of operating conditions were measured before, during, and after the seasonal introduction of oxygenated gasoline into the region. Gasoline samples were taken from several hundred vehicles to confirm the actual oxygen content of the fuel in use. Vehicle operating conditions, such as cold starts and warm operations, and ambient conditions were characterized. The variations in emissions attributable to fuel type and to operating conditions were then quantified. This paper describes the measurement program and its results. The 1991-1992 Colorado oxygenated gasoline program contributed to a reduction in carbon monoxide (CO) emissions from gasoline-powered vehicles. The measurement program demonstrated that most of the reduction is concentrated in a small percentage of the vehicles that use oxygenated gasoline. The remainder experience little or no reduction in emissions. The oxygenated gasoline program outlays are approximately $25 to $30 million per year in Colorado. These are directly measurable costs, incurred through increased government expenditures, higher costs to private industry, and losses in fuel economy. The measurement program determined the total costs of oxygenated gasoline as an air pollution control strategy for the region. Costs measured included government administration and enforcement, industry production and distribution, and consumer and other user costs. This paper describes the ability of the oxygenated gasoline program to reduce pollution; the overall cost of the program to government, industry, and consumers; and the effectiveness of the program in reducing pollution compared to its costs

  7. Quantifying Climatological Ranges and Anomalies for Pacific Coral Reef Ecosystems

    Science.gov (United States)

    Gove, Jamison M.; Williams, Gareth J.; McManus, Margaret A.; Heron, Scott F.; Sandin, Stuart A.; Vetter, Oliver J.; Foley, David G.

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic–biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will

  8. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    Science.gov (United States)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMCMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such inversion allows us to quantify the uncertainties of the inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data due to their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of doing joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.

  9. Quantified social and aesthetic values in environmental decision making

    International Nuclear Information System (INIS)

    Burnham, J.B.; Maynard, W.S.; Jones, G.R.

    1975-01-01

    A method has been devised for quantifying the social criteria to be considered when selecting a nuclear design and/or site option. Community judgement of social values is measured directly and indirectly on eight siting factors. These same criteria are independently analysed by experts using techno-economic methods. The combination of societal and technical indices yields a weighted score for each alternative. The aesthetic impact was selected as the first to be quantified. A visual quality index was developed to measure the change in the visual quality of a viewscape caused by construction of a facility. Visual quality was measured by reducing it to its component parts - intactness, vividness and unity - and rating each part with and without the facility. Urban planners and landscape architects used the technique to analyse three viewscapes, testing three different methods on each viewscape. The three methods used the same aesthetic elements but varied in detail and depth. As expected, the technique with the greatest analytical detail (and least subjective judgement) was the most reliable method. Social value judgements were measured by social psychologists applying a questionnaire technique, using a number of design and site options to illustrate the range of criteria. Three groups of predictably different respondents - environmentalists, high-school students and businessmen - were selected. The three groups' response patterns were remarkably similar, though businessmen were consistently more biased towards nuclear power than were environmentalists. Correlational and multiple regression analyses provided indirect estimates of the relative importance of each impact category. Only the environmentalists showed a high correlation between the two methods. This is partially explained by their interest and knowledge. Also, the regression analysis encounters problems when small samples are used, and the environmental sample was considerably larger than the other two

  10. Quantifying induced effects of subsurface renewable energy storage

    Science.gov (United States)

    Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas

    2015-04-01

    New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substance. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. For achieving this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model applications. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry

  11. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    Science.gov (United States)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.

  12. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    Directory of Open Access Journals (Sweden)

    Jamison M Gove

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations.

  13. Automated clustering procedure for TJ-II experimental signals

    International Nuclear Information System (INIS)

    Duro, N.; Vega, J.; Dormido, R.; Farias, G.; Dormido-Canto, S.; Sanchez, J.; Santos, M.; Pajares, G.

    2006-01-01

    Databases in fusion experiments are made up of thousands of signals. For this reason, data analysis must be simplified by developing automatic mechanisms for fast search and retrieval of specific data in the waveform database. In particular, a method for finding similar waveforms would be very helpful. The term 'similar' implies the use of proximity measurements in order to quantify how close two signals are. In this way, it would be possible to define several categories (clusters) and to classify the waveforms according to them, where this classification can be a starting point for exploratory data analysis in large databases. The clustering process is divided into two stages. The first is feature extraction, i.e., choosing the set of properties that encode as much information as possible about a signal. The second establishes the number of clusters according to a proximity measure
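
    A minimal version of the two-stage procedure (feature extraction followed by clustering with a proximity measure) might look like the sketch below, which encodes each waveform by a handful of summary features and groups them with k-means; the features, test signals and cluster count are illustrative assumptions, not the TJ-II implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def waveform_features(sig, fs):
    """Encode a raw waveform as a short, fixed-length feature vector."""
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    dominant_freq = np.fft.rfftfreq(len(sig), 1.0 / fs)[np.argmax(spec)]
    return np.array([sig.mean(), sig.std(), sig.max() - sig.min(), dominant_freq])

# Hypothetical waveform database: equal-length test signals with two distinct regimes.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
signals = [np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=t.size) for f in (5.0, 5.2, 40.0, 41.0)]

X = np.array([waveform_features(s, fs=1024) for s in signals])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # waveforms with similar features end up in the same cluster
```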

  14. Predator and prey perception in copepods due to hydromechanical signals

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; Visser, Andre

    1999-01-01

    of the different components of the fluid disturbance. We use this model to argue that prey perception depends on the absolute magnitude of the fluid velocity generated by the moving prey, while predator perception depends on the magnitude of one or several of the components of the fluid velocity gradients...... (deformation rate, vorticity, acceleration) generated by the predator. On the assumption that hydrodynamic disturbances are perceived through the mechanical bending of sensory setae, we estimate the magnitude of the signal strength due to each of the fluid disturbance components. We then derive equations...... for reaction distances as a function of threshold signal strength and the size and velocity of the prey or predator. We provide a conceptual framework for quantifying threshold signal strengths and, hence, perception distances. The model is illustrated by several examples, and we demonstrate, for example, (1...

  15. Impact of Improper Gaussian Signaling on Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salam S.; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we accurately model the hardware impairments (HWI) as improper Gaussian signaling (IGS), which can characterize the asymmetric characteristics of different HWI sources. The proposed model encourages us to adopt the IGS scheme for the transmitted signal, which represents a more general design than the conventional proper Gaussian signaling (PGS) scheme. First, we express the achievable rate of HWI systems under both the PGS and IGS schemes when the aggregate effect of HWI is modeled as IGS. Moreover, we tune the IGS statistical characteristics to maximize the achievable rate. Then, we analyze the outage probability for both schemes and derive closed-form expressions. Finally, we validate the analytic expressions through numerical and simulation results. In addition, we quantify through the numerical results the performance degradation in the absence of ideal transceivers and the gain reaped from adopting the IGS scheme compared with the PGS scheme.
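
    The defining property of IGS is a nonzero pseudo-variance (the signal is correlated with its own complex conjugate). The sketch below generates zero-mean, unit-power complex Gaussian samples with a chosen real circularity coefficient and verifies the second-order statistics empirically; setting the coefficient to zero recovers PGS. It is only an illustration of the signal model, not the rate or outage analysis of the paper.

```python
import numpy as np

def improper_gaussian(n, circularity, rng=None):
    """Zero-mean complex Gaussian samples with unit power E[|x|^2] = 1 and a real
    circularity coefficient c = E[x^2] in [0, 1]; c = 0 recovers proper signaling (PGS)."""
    rng = rng or np.random.default_rng()
    re = np.sqrt((1 + circularity) / 2) * rng.normal(size=n)
    im = np.sqrt((1 - circularity) / 2) * rng.normal(size=n)
    return re + 1j * im

x = improper_gaussian(200_000, circularity=0.6, rng=np.random.default_rng(3))
print(np.mean(np.abs(x) ** 2))   # ~ 1.0 : power
print(np.mean(x ** 2).real)      # ~ 0.6 : pseudo-variance (zero for PGS)
```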

  16. Impact of Improper Gaussian Signaling on Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah

    2016-12-18

    In this paper, we accurately model the hardware impairments (HWI) as improper Gaussian signaling (IGS), which can characterize the asymmetric characteristics of different HWI sources. The proposed model encourages us to adopt the IGS scheme for the transmitted signal, which represents a more general design than the conventional proper Gaussian signaling (PGS) scheme. First, we express the achievable rate of HWI systems under both the PGS and IGS schemes when the aggregate effect of HWI is modeled as IGS. Moreover, we tune the IGS statistical characteristics to maximize the achievable rate. Then, we analyze the outage probability for both schemes and derive closed-form expressions. Finally, we validate the analytic expressions through numerical and simulation results. In addition, we quantify through the numerical results the performance degradation in the absence of ideal transceivers and the gain reaped from adopting the IGS scheme compared with the PGS scheme.

  17. Pathophysiology of Glucocorticoid Signaling.

    Science.gov (United States)

    Vitellius, Géraldine; Trabado, Séverine; Bouligand, Jérôme; Delemer, Brigitte; Lombès, Marc

    2018-06-01

    Glucocorticoids (GC), such as cortisol or dexamethasone, control various physiological functions, notably those involved in development, metabolism, inflammatory processes and stress, and exert most of their effects upon binding to the glucocorticoid receptor (GR, encoded by the NR3C1 gene). GC signaling follows several consecutive steps leading to target gene transactivation, including ligand binding, nuclear translocation of ligand-activated GR complexes, DNA binding, coactivator interaction and recruitment of functional transcriptional machinery. Any step may be impaired and may account for altered GC signaling. Partial or generalized glucocorticoid resistance syndrome may result in a reduced level of functional GR, a decreased hormone affinity and binding, a defect in nuclear GR translocation, a decrease or lack of DNA binding and/or post-transcriptional GR modifications. To date, 26 loss-of-function NR3C1 mutations have been reported in the context of hypertension, hirsutism, adrenal hyperplasia or metabolic disorders. These clinical signs are generally associated with biological features including hypercortisolism without a negative regulatory feedback loop on the hypothalamic-pituitary-adrenal axis. Patients often had low plasma aldosterone and renin levels despite hypertension. Only one GR gain-of-function mutation has been described, associating a Cushing's syndrome phenotype with normal urinary-free cortisol. Some GR polymorphisms (ER22/23EK, GR-9β) have been linked to glucocorticoid resistance and a healthier metabolic profile, whereas some others seemed to be associated with GC hypersensitivity (N363S, BclI), increasing cardiovascular risk (diabetes type 2, visceral obesity). This review focuses on the earlier findings on the pathophysiology of GR signaling and presents criteria facilitating identification of novel NR3C1 mutations in selected patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  18. Purinergic Signalling: Therapeutic Developments

    Directory of Open Access Journals (Sweden)

    Geoffrey Burnstock

    2017-09-01

    Purinergic signalling, i.e., the role of nucleotides as extracellular signalling molecules, was proposed in 1972. However, this concept was not well accepted until the early 1990s, when receptor subtypes for purines and pyrimidines were cloned and characterised; these include four subtypes of the P1 (adenosine) receptor, seven subtypes of P2X ion channel receptors and eight subtypes of the P2Y G protein-coupled receptor. Early studies were largely concerned with the physiology, pharmacology and biochemistry of purinergic signalling. More recently, the focus has been on the pathophysiology and therapeutic potential. There was early recognition of the use of P1 receptor agonists for the treatment of supraventricular tachycardia, and A2A receptor antagonists are promising for the treatment of Parkinson's disease. Clopidogrel, a P2Y12 antagonist, is widely used for the treatment of thrombosis and stroke, blocking P2Y12 receptor-mediated platelet aggregation. Diquafosol, a long acting P2Y2 receptor agonist, is being used for the treatment of dry eye. P2X3 receptor antagonists have been developed that are orally bioavailable and stable in vivo and are currently in clinical trials for the treatment of chronic cough, bladder incontinence, visceral pain and hypertension. Antagonists to P2X7 receptors are being investigated for the treatment of inflammatory disorders, including neurodegenerative diseases. Other investigations are in progress for the use of purinergic agents for the treatment of osteoporosis, myocardial infarction, irritable bowel syndrome, epilepsy, atherosclerosis, depression, autism, diabetes, and cancer.

  19. Active voltammetric microsensors with neural signal processing.

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, M. C.

    1998-12-11

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical "signatures" from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can detect, recognize, and quantify the components.
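
    The record above describes a general workflow: a full voltammogram is treated as a single chemical "signature", a neural model is calibrated once against a multi-component standard, and unknown samples are then identified and quantified from their whole signatures. The short Python sketch below illustrates that idea in hypothetical form only; the synthetic two-peak signature model, the analyte concentrations and the small scikit-learn network are illustrative assumptions, not the processing chain of the Argonne instrument.

```python
# Minimal sketch of "whole-signature" neural quantification of a voltammogram.
# Everything here (peak positions, noise level, network size) is an assumption
# made for illustration, not the published sensor's actual calibration data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
potentials = np.linspace(-1.0, 1.0, 200)    # swept cell potential, in volts

def voltammogram(conc_a, conc_b):
    """Synthesize a two-analyte signature: each analyte contributes one redox peak."""
    peak_a = conc_a * np.exp(-((potentials - 0.25) / 0.08) ** 2)
    peak_b = conc_b * np.exp(-((potentials + 0.40) / 0.10) ** 2)
    baseline = 0.05 * potentials             # crude background current
    noise = 0.01 * rng.standard_normal(potentials.size)
    return peak_a + peak_b + baseline + noise

# "Calibrate once" against mixtures spanning the expected concentration range,
# rather than against each gas individually.
train_conc = rng.uniform(0.0, 1.0, size=(300, 2))
train_sigs = np.array([voltammogram(a, b) for a, b in train_conc])

# The whole voltammogram is the input feature vector; the outputs are the
# concentrations of both components.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
model.fit(train_sigs, train_conc)

# An "unknown" mixture is identified and quantified from its full signature.
unknown = voltammogram(0.7, 0.2).reshape(1, -1)
print(model.predict(unknown))                # should be roughly [[0.7, 0.2]]
```

    In this sketch the "calibrate once" step is a single fit on mixture standards; no per-gas calibration curves are constructed, which mirrors the sampling strategy the abstract describes.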

  20. Active voltammetric microsensors with neural signal processing

    Science.gov (United States)

    Vogt, Michael C.; Skubal, Laura R.

    1999-02-01

    Many industrial and environmental processes, including bioremediation, would benefit from the feedback and control information provided by a local multi-analyte chemical sensor. For most processes, such a sensor would need to be rugged enough to be placed in situ for long-term remote monitoring, and inexpensive enough to be fielded in useful numbers. The multi-analyte capability is difficult to obtain from common passive sensors, but can be provided by an active device that produces a spectrum-type response. Such new active gas microsensor technology has been developed at Argonne National Laboratory. The technology couples an electrocatalytic ceramic-metallic (cermet) microsensor with a voltammetric measurement technique and advanced neural signal processing. It has been demonstrated to be flexible, rugged, and very economical to produce and deploy. Both narrow interest detectors and wide spectrum instruments have been developed around this technology. Much of this technology's strength lies in the active measurement technique employed. The technique involves applying voltammetry to a miniature electrocatalytic cell to produce unique chemical 'signatures' from the analytes. These signatures are processed with neural pattern recognition algorithms to identify and quantify the components in the analyte. The neural signal processing allows for innovative sampling and analysis strategies to be employed with the microsensor. In most situations, the whole response signature from the voltammogram can be used to identify, classify, and quantify an analyte, without dissecting it into component parts. This allows an instrument to be calibrated once for a specific gas or mixture of gases by simple exposure to a multi-component standard rather than by a series of individual gases. The sampled unknown analytes can vary in composition or in concentration; the calibration, sensing, and processing methods of these active voltammetric microsensors can detect, recognize, and quantify the components.