WorldWideScience

Sample records for quantify dynamic similarity

  1. Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories

    Directory of Open Access Journals (Sweden)

    Jenny Farmer

    2017-11-01

    Full Text Available Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα atom pairs. Comparisons across several implementations for quantitatively comparing a pair of MD trajectories are made based on fluctuations in single-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins, as evident from extraordinarily low p-values.
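The per-residue measures described above (RMSF, plus divergences between estimated fluctuation distributions) can be sketched in a few lines. This is a minimal illustration on synthetic stand-in data, not the paper's trajectories or its maximum-entropy density estimator; histogram-based densities are substituted for simplicity.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
# Hypothetical stand-ins for two MD trajectories of Calpha positions,
# shape (n_frames, n_residues, 3); real input would come from trajectory files.
traj_a = rng.normal(0.0, 1.0, size=(1000, 50, 3))
traj_b = rng.normal(0.0, 1.2, size=(1000, 50, 3))

def rmsf(traj):
    """Root mean square fluctuation of each residue about its mean position."""
    disp = traj - traj.mean(axis=0)                        # (frames, residues, 3)
    return np.sqrt((disp ** 2).sum(axis=2).mean(axis=0))   # (residues,)

def per_residue_jsd(ta, tb, residue, bins=50):
    """Jensen-Shannon divergence between the fluctuation-magnitude
    distributions of one residue in two trajectories."""
    fa = np.linalg.norm(ta[:, residue] - ta[:, residue].mean(axis=0), axis=1)
    fb = np.linalg.norm(tb[:, residue] - tb[:, residue].mean(axis=0), axis=1)
    lo, hi = min(fa.min(), fb.min()), max(fa.max(), fb.max())
    pa, _ = np.histogram(fa, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(fb, bins=bins, range=(lo, hi), density=True)
    return jensenshannon(pa, pb) ** 2   # scipy returns the distance; square it

print(rmsf(traj_a)[:3])
print(per_residue_jsd(traj_a, traj_b, residue=0))
```

Note that `scipy.spatial.distance.jensenshannon` returns the square root of the divergence, hence the squaring.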

  2. Dynamics based alignment of proteins: an alternative approach to quantify dynamic similarity

    Directory of Open Access Journals (Sweden)

    Lyngsø Rune

    2010-04-01

    Full Text Available Abstract Background The dynamic motions of many proteins are central to their function. It therefore follows that the dynamic requirements of a protein are evolutionary constrained. In order to assess and quantify this, one needs to compare the dynamic motions of different proteins. Comparing the dynamics of distinct proteins may also provide insight into how protein motions are modified by variations in sequence and, consequently, by structure. The optimal way of comparing complex molecular motions is, however, far from trivial. The majority of comparative molecular dynamics studies performed to date relied upon prior sequence or structural alignment to define which residues were equivalent in 3-dimensional space. Results Here we discuss an alternative methodology for comparative molecular dynamics that does not require any prior alignment information. We show it is possible to align proteins based solely on their dynamics and that we can use these dynamics-based alignments to quantify the dynamic similarity of proteins. Our method was tested on 10 representative members of the PDZ domain family. Conclusions As a result of creating pair-wise dynamics-based alignments of PDZ domains, we have found evolutionarily conserved patterns in their backbone dynamics. The dynamic similarity of PDZ domains is highly correlated with their structural similarity as calculated with Dali. However, significant differences in their dynamics can be detected indicating that sequence has a more refined role to play in protein dynamics than just dictating the overall fold. We suggest that the method should be generally applicable.

  3. Dynamic similarity in erosional processes

    Science.gov (United States)

    Scheidegger, A.E.

    1963-01-01

    A study is made of the dynamic similarity conditions obtaining in a variety of erosional processes. The pertinent equations for each type of process are written in dimensionless form; the similarity conditions can then easily be deduced. The processes treated are: raindrop action, slope evolution and river erosion. © 1963 Istituto Geofisico Italiano.

  4. Dynamical similarity of geomagnetic field reversals.

    Science.gov (United States)

    Valet, Jean-Pierre; Fournier, Alexandre; Courtillot, Vincent; Herrero-Bervera, Emilio

    2012-10-04

    No consensus has been reached so far on the properties of the geomagnetic field during reversals or on the main features that might reveal its dynamics. A main characteristic of the reversing field is a large decrease in the axial dipole and the dominant role of non-dipole components. Other features strongly depend on whether they are derived from sedimentary or volcanic records. Only thermal remanent magnetization of lava flows can capture faithful records of a rapidly varying non-dipole field, but, because of episodic volcanic activity, sequences of overlying flows yield incomplete records. Here we show that the ten most detailed volcanic records of reversals can be matched in a very satisfactory way, under the assumption of a common duration, revealing common dynamical characteristics. We infer that the reversal process has remained unchanged, with the same time constants and durations, at least since 180 million years ago. We propose that the reversing field is characterized by three successive phases: a precursory event, a 180° polarity switch and a rebound. The first and third phases reflect the emergence of the non-dipole field with large-amplitude secular variation. They are rarely both recorded at the same site owing to the rapidly changing field geometry and last for less than 2,500 years. The actual transit between the two polarities does not last longer than 1,000 years and might therefore result from mechanisms other than those governing normal secular variation. Such changes are too brief to be accurately recorded by most sediments.

  5. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance.

    Directory of Open Access Journals (Sweden)

    Guillaume Chevereau

    Full Text Available The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the "morbidostat", a device for evolving bacteria in well-controlled drug environments. 
Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations, an almost paradoxical behavior since this drug causes DNA damage and increases the mutation

  6. Quantifying Differences and Similarities in Whole-Brain White Matter Architecture Using Local Connectome Fingerprints.

    Directory of Open Access Journals (Sweden)

    Fang-Cheng Yeh

    2016-11-01

    Full Text Available Quantifying differences or similarities in connectomes has been a challenge due to the immense complexity of global brain networks. Here we introduce a noninvasive method that uses diffusion MRI to characterize whole-brain white matter architecture as a single local connectome fingerprint that allows for a direct comparison between structural connectomes. In four independently acquired data sets with repeated scans (total N = 213), we show that the local connectome fingerprint is highly specific to an individual, allowing for an accurate self-versus-others classification that achieved 100% accuracy across 17,398 identification tests. The estimated classification error was approximately one thousand times smaller than fingerprints derived from diffusivity-based measures or region-to-region connectivity patterns for repeat scans acquired within 3 months. The local connectome fingerprint also revealed neuroplasticity within an individual reflected as a decreasing trend in self-similarity across time, whereas this change was not observed in the diffusivity measures. Moreover, the local connectome fingerprint can be used as a phenotypic marker, revealing 12.51% similarity between monozygotic twins, 5.14% between dizygotic twins, and 4.51% between non-twin siblings, relative to differences between unrelated subjects. This novel approach opens a new door for probing the influence of pathological, genetic, social, or environmental factors on the unique configuration of the human connectome.
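The self-versus-others classification described above amounts to a nearest-neighbor test between high-dimensional fingerprint vectors. A minimal sketch on synthetic fingerprints (the real ones are vectors of local white-matter density values derived from diffusion MRI; the noise levels here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fingerprints: each subject is a high-dimensional vector;
# repeat scans of the same subject differ only by small measurement noise.
n_subjects, dim = 20, 500
base = rng.normal(size=(n_subjects, dim))
scan1 = base + 0.1 * rng.normal(size=base.shape)
scan2 = base + 0.1 * rng.normal(size=base.shape)

# Self-versus-others test: is each subject's repeat scan closer to their
# own first scan than to any other subject's scan?
dists = np.linalg.norm(scan2[:, None, :] - scan1[None, :, :], axis=2)
correct = (dists.argmin(axis=1) == np.arange(n_subjects)).mean()
print(f"identification accuracy: {correct:.2%}")
```

With within-subject noise far smaller than between-subject differences, the nearest fingerprint is essentially always the subject's own, which is the regime the paper reports.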

  7. Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    2011-03-01

    Full Text Available Meta-analyses play an important role in synthesizing evidence from diverse studies and datasets that address similar questions. A major obstacle for meta-analyses arises from biases in reporting. In particular, it is speculated that findings which do not achieve formal statistical significance are less likely reported than statistically significant findings. Moreover, the patterns of bias can be complex and may also depend on the timing of the research results and their relationship with previously published work. In this paper, we present an approach that is specifically designed to analyze large-scale datasets on published results. Such datasets are currently emerging in diverse research fields, particularly in molecular medicine. We use our approach to investigate a dataset on Alzheimer's disease (AD) that covers 1167 results from case-control studies on 102 genetic markers. We observe that initial studies on a genetic marker tend to be substantially more biased than subsequent replications. The chances for initial, statistically non-significant results to be published are estimated to be about 44% (95% CI, 32% to 63%) relative to statistically significant results, while statistically non-significant replications have almost the same chance to be published as statistically significant replications (84%; 95% CI, 66% to 107%). Early replications tend to be biased against initial findings, an observation previously termed the Proteus phenomenon: the chances for non-significant studies going in the same direction as the initial result are estimated to be lower than the chances for non-significant studies opposing the initial result (73%; 95% CI, 55% to 96%). Such dynamic patterns in bias are difficult to capture by conventional methods, where typically simple publication bias is assumed to operate. 
Our approach captures and corrects for complex dynamic patterns of bias, and thereby helps generate conclusions from published results that are more robust.

  8. Quantifying chaotic dynamics from integrate-and-fire processes

    Energy Technology Data Exchange (ETDEWEB)

    Pavlov, A. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Saratov State Technical University, Politehnicheskaya Str. 77, 410054 Saratov (Russian Federation); Pavlova, O. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Mohammad, Y. K. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Tikrit University Salahudin, Tikrit Qadisiyah, University Str. P.O. Box 42, Tikrit (Iraq); Kurths, J. [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easy at high firing rates. When the firing rate is low, correctly estimating the Lyapunov exponents (LEs) that describe dynamical features of the complex oscillations reflected in IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimation of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  9. Musical structure analysis using similarity matrix and dynamic programming

    Science.gov (United States)

    Shiu, Yu; Jeong, Hong; Kuo, C.-C. Jay

    2005-10-01

    Automatic music segmentation and structure analysis from audio waveforms based on a three-level hierarchy is examined in this research, where the three-level hierarchy includes notes, measures and parts. The pitch class profile (PCP) feature is first extracted at the note level. Then, a similarity matrix is constructed at the measure level, where a dynamic time warping (DTW) technique is used to enhance the similarity computation by taking the temporal distortion of similar audio segments into account. By processing the similarity matrix, we can obtain a coarse-grain music segmentation result. Finally, dynamic programming is applied to the coarse-grain segments so that a song can be decomposed into several major parts such as intro, verse, chorus, bridge and outro. The performance of the proposed music structure analysis system is demonstrated for pop and rock music.
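The core machinery described above, a similarity matrix over measure-level features with DTW handling temporal distortion, can be sketched compactly. The features below are random stand-ins for pitch class profiles, not real audio; the DTW is the textbook dynamic-programming recurrence.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two feature sequences
    (rows are frames, columns are e.g. 12 pitch-class-profile bins)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(2)
# Hypothetical PCP features: 8 "measures", each 16 frames x 12 pitch classes.
measures = [rng.random((16, 12)) for _ in range(8)]
measures[5] = measures[1].copy()  # pretend measure 5 repeats measure 1

# Measure-level similarity matrix; low entries mark repeated material.
S = np.array([[dtw_distance(a, b) for b in measures] for a in measures])
print(S.round(1))
```

Repeated sections show up as low-distance off-diagonal entries (here S[1, 5] = 0), which is what the coarse-grain segmentation stage exploits.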

  10. Quantifying unsteadiness and dynamics of pulsatory volcanic activity

    Science.gov (United States)

    Dominguez, L.; Pioli, L.; Bonadonna, C.; Connor, C. B.; Andronico, D.; Harris, A. J. L.; Ripepe, M.

    2016-06-01

    Pulsatory eruptions are marked by a sequence of explosions which can be separated by time intervals ranging from a few seconds to several hours. The quantification of the periodicities associated with these eruptions is essential not only for the comprehension of the mechanisms controlling explosivity, but also for classification purposes. We focus on the dynamics of pulsatory activity and quantify unsteadiness based on the distribution of the repose time intervals between single explosive events in relation to magma properties and eruptive styles. A broad range of pulsatory eruption styles are considered, including Strombolian, violent Strombolian and Vulcanian explosions. We find a general relationship between the median of the observed repose times in eruptive sequences and the viscosity of magma given by η ≈ 100·t_median. This relationship applies to the complete range of magma viscosities considered in our study (10² to 10⁹ Pa s) regardless of the eruption length, eruptive style and associated plume heights, suggesting that viscosity is the main magma property controlling eruption periodicity. Furthermore, the analysis of the explosive sequences in terms of failure time through statistical survival analysis provides further information: dynamics of pulsatory activity can be successfully described in terms of frequency and regularity of the explosions, quantified based on the log-logistic distribution. A linear relationship is identified between the log-logistic parameters, μ and s. This relationship is useful for quantifying differences among eruptive styles from very frequent and regular mafic events (Strombolian activity) to more sporadic and irregular Vulcanian explosions in silicic systems. The time scale controlled by the parameter μ, as a function of the median of the distribution, can therefore be correlated with the viscosity of magmas; while the complexity of the erupting system, including magma rise rate, degassing and fragmentation efficiency
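The scaling η ≈ 100·t_median and the log-logistic description of repose times can be illustrated numerically. A minimal sketch with synthetic repose times (the parameter values and the moment-based fit are illustrative assumptions, not the paper's fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical repose times (s) between explosions, log-logistically
# distributed: log(t) follows a logistic(mu, s) law, so median(t) = exp(mu).
mu, s = np.log(20.0), 0.3
u = rng.random(5000)
repose = np.exp(mu + s * np.log(u / (1 - u)))  # inverse-CDF sampling

t_median = np.median(repose)
eta = 100.0 * t_median  # empirical scaling eta ≈ 100 · t_median (Pa s)
print(f"median repose {t_median:.1f} s  ->  viscosity ~ {eta:.0f} Pa s")

# Moment-based recovery of the log-logistic parameters from the sample:
log_t = np.log(repose)
mu_hat = np.median(log_t)                    # location parameter
s_hat = np.std(log_t) * np.sqrt(3) / np.pi   # logistic std = s*pi/sqrt(3)
print(f"mu ~ {mu_hat:.2f} (true {mu:.2f}), s ~ {s_hat:.2f} (true {s:.2f})")
```

A ~20 s median repose thus maps to a viscosity of order 10³ Pa s under the quoted scaling, consistent with mafic, Strombolian-style activity.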

  11. On the Categorial Ambivalence of un montón and Other Similar Quantifiers

    Directory of Open Access Journals (Sweden)

    Javier San Julián Solana

    2016-12-01

    Full Text Available Owing to their ability to express indefinite (superlative quantification, units like montón, porrón or barbaridad (~ bestialidad ~ burrada ~ brutalidad are often included among quantifying nouns. But along with a series of clearly nominal features, they have other features which are typical of adverbs. The aim of this paper is precisely to provide a reasonable explanation for this categorial hybridism. Applying the theoretical and methodological principles of the Functional Grammar of Spanish, we try to demonstrate that they are not “amphibious” units. On the contrary, we argue that, from a synchronic point of view, two sets of signs should be distinguished, which are functionally and lexically different but have “clonal” signifiers: a nouns –with designative meaning– montón/es, porrón/es, barbaridad/es, and b adverbial phrases un montón, un porrón, una barbaridad, which are pure quantifiers, according to their lexeme.

  12. Identifying a Superfluid Reynolds Number via Dynamical Similarity.

    Science.gov (United States)

    Reeves, M T; Billam, T P; Anderson, B P; Bradley, A S

    2015-04-17

    The Reynolds number provides a characterization of the transition to turbulent flow, with wide application in classical fluid dynamics. Identifying such a parameter in superfluid systems is challenging due to their fundamentally inviscid nature. Performing a systematic study of superfluid cylinder wakes in two dimensions, we observe dynamical similarity of the frequency of vortex shedding by a cylindrical obstacle. The universality of the turbulent wake dynamics is revealed by expressing shedding frequencies in terms of an appropriately defined superfluid Reynolds number, Re(s), that accounts for the breakdown of superfluid flow through quantum vortex shedding. For large obstacles, the dimensionless shedding frequency exhibits a universal form that is well-fitted by a classical empirical relation. In this regime the transition to turbulence occurs at Re(s)≈0.7, irrespective of obstacle width.
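The superfluid Reynolds number referred to above is built from a critical velocity for vortex shedding and the quantum of circulation κ = h/m rather than a viscosity. A minimal sketch with hypothetical flow numbers for a ⁸⁷Rb condensate (the definition Re_s = (v − v_c)d/κ follows the abstract's description; the specific values are illustrative):

```python
h = 6.62607015e-34   # Planck constant, J s
m_rb87 = 1.44316e-25 # mass of a 87Rb atom, kg (a typical BEC species)
kappa = h / m_rb87   # quantum of circulation, m^2/s

def superfluid_reynolds(v, v_c, d):
    """Re_s = (v - v_c) d / kappa: flow speed past an obstacle of width d,
    measured from the critical speed v_c for quantum vortex shedding."""
    return (v - v_c) * d / kappa

# Hypothetical numbers for a 2D BEC wake experiment:
v, v_c, d = 1.0e-3, 0.4e-3, 10e-6   # m/s, m/s, m
print(f"Re_s = {superfluid_reynolds(v, v_c, d):.2f}")
```

Values of order unity, as here, sit near the reported transition at Re_s ≈ 0.7.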

  13. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

    Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering similarity and popularity of people, also assuming that each person has his/her proper perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)

  14. A Tensor Statistical Model for Quantifying Dynamic Functional Connectivity.

    Science.gov (United States)

    Zhu, Yingying; Zhu, Xiaofeng; Kim, Minjeong; Yan, Jin; Wu, Guorong

    2017-06-01

    Functional connectivity (FC) has been widely investigated in many imaging-based neuroscience and clinical studies. Since the functional Magnetic Resonance Imaging (MRI) signal is just an indirect reflection of brain activity, it is difficult to accurately quantify the FC strength only based on signal correlation. To address this limitation, we propose a learning-based tensor model to derive high sensitivity and specificity connectome biomarkers at the individual level from resting-state fMRI images. First, we propose a learning-based approach to estimate the intrinsic functional connectivity. In addition to the low level region-to-region signal correlation, latent module-to-module connection is also estimated and used to provide high level heuristics for measuring connectivity strength. Furthermore, a sparsity constraint is employed to automatically remove spurious connections, thus alleviating the issue of searching for an optimal threshold. Second, we integrate our learning-based approach with the sliding-window technique to further reveal the dynamics of functional connectivity. Specifically, we stack the functional connectivity matrices within each sliding window and form a 3D tensor where the third dimension denotes time. Then we obtain dynamic functional connectivity (dFC) for each individual subject by simultaneously estimating the within-sliding-window functional connectivity and characterizing the across-sliding-window temporal dynamics. Third, in order to enhance the robustness of the connectome patterns extracted from dFC, we extend the individual-based 3D tensors to a population-based 4D tensor (with the fourth dimension standing for the training subjects) and learn the statistics of connectome patterns via 4D tensor analysis. Since our 4D tensor model jointly (1) optimizes dFC for each training subject and (2) captures the principle connectome patterns, our statistical model gains more statistical power in representing a new subject than current state
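The sliding-window construction of the 3D dFC tensor is straightforward to sketch: compute a region-by-region correlation matrix per window and stack along a third (time) axis. This is only the plain-correlation baseline on synthetic signals; the paper's learning-based connectivity estimate and sparsity constraint are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical resting-state signals: 90 regions x 200 time points.
n_regions, n_time = 90, 200
bold = rng.normal(size=(n_regions, n_time))

window, step = 40, 10
fc_stack = []
for start in range(0, n_time - window + 1, step):
    seg = bold[:, start:start + window]
    fc_stack.append(np.corrcoef(seg))   # region x region correlation
dfc = np.stack(fc_stack, axis=2)        # 3D tensor: region x region x window
print(dfc.shape)
```

Stacking such per-subject tensors over training subjects then yields the 4D population tensor the abstract describes.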

  15. POSTFUNDOPLICATION DYSPHAGIA CAUSES SIMILAR WATER INGESTION DYNAMICS AS ACHALASIA.

    Science.gov (United States)

    Dantas, Roberto Oliveira; Santos, Carla Manfredi; Cassiani, Rachel Aguiar; Alves, Leda Maria Tavares; Nascimento, Weslania Viviane

    2016-01-01

    Background - After surgical treatment of gastroesophageal reflux disease dysphagia is a symptom in the majority of patients, with decrease in intensity over time. However, some patients may have persistent dysphagia. Objective - The objective of this investigation was to evaluate the dynamics of water ingestion in patients with postfundoplication dysphagia compared with patients with dysphagia caused by achalasia, idiopathic or consequent to Chagas' disease, and controls. Methods - Thirty-three patients with postfundoplication dysphagia, assessed more than one year after surgery, together with 50 patients with Chagas' disease, 27 patients with idiopathic achalasia and 88 controls were all evaluated by the water swallow test. They drank, in triplicate, 50 mL of water without breaks while being precisely timed and the number of swallows counted. Also measured were: (a) inter-swallows interval - the time to complete the task, divided by the number of swallows during the task; (b) swallowing flow - volume drunk divided by the time taken; (c) volume of each swallow - volume drunk divided by the number of swallows. Results - Patients with postfundoplication dysphagia, Chagas' disease and idiopathic achalasia took longer to ingest all the volume, had an increased number of swallows, an increase in interval between swallows, a decrease in swallowing flow and a decrease in water volume of each swallow compared with the controls. There was no difference between the three groups of patients. There was no correlation between postfundoplication time and the results. Conclusion - It was concluded that patients with postfundoplication dysphagia have similar water ingestion dynamics as patients with achalasia.
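The three derived measures (a)-(c) are simple ratios; a worked example with hypothetical numbers makes the definitions concrete:

```python
# Worked example of the three water-swallow measures, with hypothetical
# numbers: 50 mL drunk in 8.0 s using 5 swallows.
volume_ml, time_s, n_swallows = 50.0, 8.0, 5

inter_swallow_interval = time_s / n_swallows   # (a) s per swallow
swallowing_flow = volume_ml / time_s           # (b) mL/s
volume_per_swallow = volume_ml / n_swallows    # (c) mL

print(inter_swallow_interval, swallowing_flow, volume_per_swallow)
# → 1.6 6.25 10.0
```

Dysphagia shows up in these numbers as a longer interval, lower flow, and smaller volume per swallow relative to controls.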

  16. Vertical-axis wind turbine experiments at full dynamic similarity

    Science.gov (United States)

    Duvvuri, Subrahmanyam; Miller, Mark; Brownstein, Ian; Dabiri, John; Hultmark, Marcus

    2017-11-01

    This study presents results from pressurized (up to 200 atm) wind tunnel tests of a self-spinning 5-blade model Vertical-Axis Wind Turbine (VAWT). The model is geometrically similar (scale ratio 1:22) to a commercially available VAWT, which has a rotor diameter of 2.17 meters and blade span of 3.66 meters, and is used at the Stanford University field lab. The use of pressurized air as working fluid allows for the unique ability to obtain full dynamic similarity with field conditions in terms of matched Reynolds numbers (Re), tip-speed ratios (λ), and Mach number (M). Tests were performed across a wide range of Re and λ, with the highest Re exceeding the maximum operational field Reynolds number (Remax) by a factor of 3. With an extended range of accessible Re conditions, the peak turbine power efficiency was seen to occur roughly at Re = 2 Remax and λ = 1. Beyond Re = 2 Remax the turbine performance is invariant in Re for all λ. A clear demonstration of Reynolds number invariance for an actual full-scale wind turbine lends novelty to this study, and overall the results show the viability of the present experimental technique in testing turbines at field conditions.
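Why pressurization buys Reynolds matching at small scale is simple gas-law arithmetic: Re = ρVD/μ, density rises roughly in proportion to pressure, and air viscosity is nearly pressure-independent. A back-of-the-envelope check with hypothetical field numbers (wind speed and properties are illustrative, not the study's data):

```python
# Reynolds matching by pressurization: Re = rho * V * D / mu, with
# ideal-gas density scaling and pressure-independent viscosity assumed.
rho_1atm = 1.2     # kg/m^3, air at 1 atm
mu_air = 1.8e-5    # Pa s
scale = 1 / 22     # model-to-field geometric scale (from the abstract)
D_field, V_field = 2.17, 10.0   # field rotor diameter (m), assumed wind (m/s)

Re_field = rho_1atm * V_field * D_field / mu_air
# Model at 200 atm: density up ~200x, diameter down 22x.
D_model, p_atm = D_field * scale, 200.0
V_model = Re_field * mu_air / (rho_1atm * p_atm * D_model)
print(f"Re_field = {Re_field:.2e}, model speed needed = {V_model:.1f} m/s")
```

Because 200× density headroom exceeds the 22× size reduction, the model matches field Re at a lower speed than the field wind, which also keeps Mach number and tip-speed ratio matchable.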

  17. POSTFUNDOPLICATION DYSPHAGIA CAUSES SIMILAR WATER INGESTION DYNAMICS AS ACHALASIA

    Directory of Open Access Journals (Sweden)

    Roberto Oliveira DANTAS

    Full Text Available ABSTRACT Background - After surgical treatment of gastroesophageal reflux disease dysphagia is a symptom in the majority of patients, with decrease in intensity over time. However, some patients may have persistent dysphagia. Objective - The objective of this investigation was to evaluate the dynamics of water ingestion in patients with postfundoplication dysphagia compared with patients with dysphagia caused by achalasia, idiopathic or consequent to Chagas' disease, and controls. Methods - Thirty-three patients with postfundoplication dysphagia, assessed more than one year after surgery, together with 50 patients with Chagas' disease, 27 patients with idiopathic achalasia and 88 controls were all evaluated by the water swallow test. They drank, in triplicate, 50 mL of water without breaks while being precisely timed and the number of swallows counted. Also measured were: (a) inter-swallows interval - the time to complete the task, divided by the number of swallows during the task; (b) swallowing flow - volume drunk divided by the time taken; (c) volume of each swallow - volume drunk divided by the number of swallows. Results - Patients with postfundoplication dysphagia, Chagas' disease and idiopathic achalasia took longer to ingest all the volume, had an increased number of swallows, an increase in interval between swallows, a decrease in swallowing flow and a decrease in water volume of each swallow compared with the controls. There was no difference between the three groups of patients. There was no correlation between postfundoplication time and the results. Conclusion - It was concluded that patients with postfundoplication dysphagia have similar water ingestion dynamics as patients with achalasia.

  18. Series distance – an intuitive metric to quantify hydrograph similarity in terms of occurrence, amplitude and timing of hydrological events

    Directory of Open Access Journals (Sweden)

    U. Ehret

    2011-03-01

    Full Text Available Applying metrics to quantify the similarity or dissimilarity of hydrographs is a central task in hydrological modelling, used both in model calibration and the evaluation of simulations or forecasts. Motivated by the shortcomings of standard objective metrics such as the Root Mean Square Error (RMSE) or the Mean Absolute Peak Time Error (MAPTE) and the advantages of visual inspection as a powerful tool for simultaneous, case-specific and multi-criteria (yet subjective) evaluation, we propose a new objective metric termed Series Distance, which is in close accordance with visual evaluation. The Series Distance quantifies the similarity of two hydrographs neither in a time-aggregated nor in a point-by-point manner, but on the scale of hydrological events. It consists of three parts, namely a Threat Score which evaluates overall agreement of event occurrence, and the overall distance of matching observed and simulated events with respect to amplitude and timing. The novelty of the latter two is the way in which matching point pairs on the observed and simulated hydrographs are identified: not by equality in time (as is the case with the RMSE), but by the same relative position in matching segments (rise or recession) of the event, indicating the same underlying hydrological process. Thus, amplitude and timing errors are calculated simultaneously but separately, from point pairs that also match visually, considering complete events rather than only individual points (as is the case with MAPTE). Relative weights can freely be assigned to each component of the Series Distance, which allows (subjective) customization of the metric to various fields of application, but in a traceable way. Each of the three components of the Series Distance can be used in an aggregated or non-aggregated way, which makes the Series Distance a suitable tool for differentiated, process-based model diagnostics.

    After discussing the applicability of established time series
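The Threat Score component of the Series Distance is the standard critical success index over event occurrence. A minimal sketch, using naive set matching of event identifiers (the paper's actual event matching on hydrograph segments is more involved):

```python
def threat_score(obs_events, sim_events):
    """Threat score (critical success index) over matched event sets:
    hits / (hits + misses + false alarms). Events are given as hashable
    ids; matching here is naive set intersection."""
    obs, sim = set(obs_events), set(sim_events)
    hits = len(obs & sim)
    misses = len(obs - sim)
    false_alarms = len(sim - obs)
    return hits / (hits + misses + false_alarms)

# Hypothetical event labels (e.g. flood-event windows) in observed and
# simulated hydrographs: 2 hits, 2 misses, 1 false alarm.
print(threat_score({"e1", "e2", "e3", "e4"}, {"e2", "e3", "e5"}))  # → 0.4
```

The amplitude and timing components are then computed only over the matched (hit) events, from point pairs at the same relative position within each rise or recession.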

  19. Quantifying the dynamics of coupled networks of switches and oscillators.

    Directory of Open Access Journals (Sweden)

    Matthew R Francis

    Full Text Available Complex network dynamics have been analyzed with models of systems of coupled switches or systems of coupled oscillators. However, many complex systems are composed of components with diverse dynamics whose interactions drive the system's evolution. We, therefore, introduce a new modeling framework that describes the dynamics of networks composed of both oscillators and switches. Both oscillator synchronization and switch stability are preserved in these heterogeneous, coupled networks. Furthermore, this model recapitulates the qualitative dynamics for the yeast cell cycle consistent with the hypothesized dynamics resulting from decomposition of the regulatory network into dynamic motifs. Introducing feedback into the cell-cycle network induces qualitative dynamics analogous to limitless replicative potential that is a hallmark of cancer. As a result, the proposed model of switch and oscillator coupling provides the ability to incorporate mechanisms that underlie the synchronized stimulus response ubiquitous in biochemical systems.
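A toy version of such a heterogeneous network can be built from Kuramoto phase oscillators coupled to binary switches. The coupling form below (switch states bias oscillator frequencies; switches flip on population coherence) is an illustrative assumption, not the paper's model equations:

```python
import numpy as np

rng = np.random.default_rng(5)
# Heterogeneous network sketch: 10 phase oscillators and 5 binary switches.
n_osc, n_sw = 10, 5
theta = rng.uniform(0, 2 * np.pi, n_osc)
omega = rng.normal(1.0, 0.05, n_osc)   # natural frequencies
switches = rng.integers(0, 2, n_sw)

K, dt = 2.0, 0.01
for _ in range(4000):
    # Kuramoto coupling: dtheta_i/dt = omega_i + eps*mean(switches)
    #                                  + K * mean_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + 0.1 * switches.mean() + K * coupling)
    # Switches flip on when the oscillator population is coherent enough:
    r = abs(np.exp(1j * theta).mean())
    switches[:] = 1 if r > 0.5 else 0

r = abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.3f}, switches = {switches}")
```

With coupling strength well above the frequency spread, the oscillators synchronize (r → 1) and the switches settle into a stable on state, mirroring the coexistence of oscillator synchronization and switch stability noted in the abstract.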

  20. Geometrical Similarity Transformations in Dynamic Geometry Environment Geogebra

    Science.gov (United States)

    Andraphanova, Natalia V.

    2015-01-01

    The article examines the use of modern computer technologies, exemplified by the interactive geometry environment Geogebra, as an innovative technology for representing and studying geometrical material, offering such didactical opportunities as visualization, simulation and dynamics. The article presents a classification of geometric…

  1. Similar impact of topological and dynamic noise on complex patterns

    International Nuclear Information System (INIS)

    Marr, Carsten; Huett, Marc-Thorsten

    2006-01-01

    Shortcuts in a regular architecture affect the information transport through the system due to the severe decrease in average path length. A fundamental new perspective in terms of pattern formation is the destabilizing effect of topological perturbations by processing distant uncorrelated information, similarly to stochastic noise. We study the functional coincidence of rewiring and noisy communication on patterns of binary cellular automata

  2. Schumpeterian economic dynamics as a quantifiable model of evolution

    Science.gov (United States)

    Thurner, Stefan; Klimek, Peter; Hanel, Rudolf

    2010-07-01

    We propose a simple quantitative model of Schumpeterian economic dynamics. New goods and services are endogenously produced through combinations of existing goods. As soon as new goods enter the market, they may compete against already existing goods. In other words, new products can have destructive effects on existing goods. As a result of this competition mechanism, existing goods may be driven out from the market—often causing cascades of secondary defects (Schumpeterian gales of destruction). The model leads to generic dynamics characterized by phases of relative economic stability followed by phases of massive restructuring of markets—which could be interpreted as Schumpeterian business 'cycles'. Model time series of product diversity and productivity reproduce several stylized facts of economic time series on long timescales, such as GDP or business failures, including non-Gaussian fat-tailed distributions and volatility clustering. The model is phrased in an open, non-equilibrium setup which can be understood as a self-organized critical system. Its diversity dynamics can be understood by the time-varying topology of the active production networks.
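The combination-and-destruction mechanism described above can be sketched as a toy simulation. This is an illustrative reconstruction, not the authors' model: the initial diversity, the destruction probability `p_destroy`, and the function name are all assumptions.

```python
import random

def schumpeter_toy(steps=200, p_destroy=0.4, seed=1):
    """Toy sketch of combinatorial innovation with creative destruction:
    each step a new good (a combination of two existing ones) enters the
    market; with probability p_destroy it drives one incumbent out.
    Returns the diversity (number of active goods) over time."""
    rng = random.Random(seed)
    active = set(range(5))      # a few initial goods
    next_id = 5
    diversity = [len(active)]
    for _ in range(steps):
        if len(active) >= 2:
            newcomer = next_id  # new good enters the market
            active.add(newcomer)
            next_id += 1
            if rng.random() < p_destroy:
                # the newcomer drives a randomly chosen incumbent out
                incumbents = sorted(active - {newcomer})
                active.discard(rng.choice(incumbents))
        diversity.append(len(active))
    return diversity
```

With rarer or cascading destruction events, the same skeleton produces the punctuated phases of stability and restructuring the abstract describes.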

  3. Quantifying evolutionary dynamics from variant-frequency time series

    Science.gov (United States)

    Khatri, Bhavin S.

    2016-09-01

    From Kimura’s neutral theory of protein evolution to Hubbell’s neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher’s angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.
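Fisher's angular transformation y = arcsin(√p) stabilizes the sampling variance of a frequency estimate: by the delta method, Var[y] ≈ 1/(4n) regardless of p, which is what makes it convenient for frequency time series. A minimal check (function name and sample values are illustrative):

```python
import math

def delta_method_variance(p, n):
    """Approximate Var[arcsin(sqrt(p_hat))] for a binomial frequency
    estimate p_hat with Var[p_hat] = p(1-p)/n, via the delta method:
    Var[f(p_hat)] ~ f'(p)^2 * Var[p_hat], where f(p) = arcsin(sqrt(p))
    and f'(p) = 1 / (2*sqrt(p*(1-p)))."""
    fprime = 1.0 / (2.0 * math.sqrt(p * (1.0 - p)))
    return fprime ** 2 * p * (1.0 - p) / n

# The stabilized variance is 1/(4n), independent of p:
variances = [delta_method_variance(p, n=100) for p in (0.1, 0.3, 0.5, 0.9)]
```

Because f'(p)² = 1/(4p(1−p)) exactly cancels the p(1−p) factor of the binomial variance, every entry of `variances` equals 1/(4·100).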

  4. Low latitude ionospheric TEC responses to dynamical complexity quantifiers during transient events over Nigeria

    Science.gov (United States)

    Ogunsua, Babalola

    2018-04-01

    In this study, the values of chaoticity and dynamical complexity parameters for selected storm periods in the years 2011 and 2012 have been computed. This was done using detrended TEC data sets measured from the Birnin-Kebbi, Torro and Enugu global positioning system (GPS) receiver stations in Nigeria. It was observed that the significance of difference (SD) values were mostly greater than 1.96, but surprisingly lower than 1.96 on September 29, 2011. The computed SD values were also found to be reduced in most cases just after the geomagnetic storm, with immediate recovery a day after the main phase of the storm, while the values of the Lyapunov exponent and Tsallis entropy remained reduced due to the influence of geomagnetic storms. It was also observed that the Lyapunov exponent and Tsallis entropy reveal similar variation patterns during storm periods in most cases. Surprisingly, lower values of these dynamical quantifiers were also recorded during the solar flare events of August 8 and 9, 2011. The possible mechanisms responsible for these observations are discussed in this work. However, our observations show that the ionospheric effects of some transient events other than geomagnetic storms can also be revealed by variations in chaoticity and dynamical complexity.

  5. Dynamics of barite growth in porous media quantified by in situ synchrotron X-ray tomography

    Science.gov (United States)

    Godinho, Jose; Gerke, Kirill

    2016-04-01

    Current models used to formulate mineral sequestration strategies for dissolved contaminants in the bedrock often neglect the effect of confinement and the variation of reactive surface area with time. In this work, in situ synchrotron X-ray micro-tomography is used to quantify barite growth rates in a micro-porous structure as a function of time over 13.5 hours with a resolution of 1 μm. Additionally, the 3D porous networks at different time frames are used to simulate the flow velocities and calculate the permeability evolution during the experiment. The kinetics of barite growth under porous confinement is compared with the kinetics of barite growth on free surfaces in the same fluid composition. Results are discussed in terms of surface area normalization and the evolution of flow velocities as crystals fill the porous structure. During the initial hours, the growth rate measured in porous media is similar to the growth rate on free surfaces. However, as the thinner flow paths clog, the growth rate progressively decreases, which correlates with a decrease in local flow velocity. The largest pores remain open, enabling growth to continue throughout the structure. Quantifying the dynamics of mineral precipitation kinetics in situ in 4D has revealed the importance of using a time-dependent reactive surface area and accounting for the local properties of the porous network when formulating predictive models of mineral precipitation in porous media.

  6. Quantifying the dynamic wing morphing of hovering hummingbird.

    Science.gov (United States)

    Maeda, Masateru; Nakata, Toshiyuki; Kitamura, Ikuo; Tanaka, Hiroto; Liu, Hao

    2017-09-01

    Animal wings are lightweight and flexible; hence, during flapping flight their shapes change. It has been known that such dynamic wing morphing reduces aerodynamic cost in insects, but the consequences in vertebrate flyers, particularly birds, are not well understood. We have developed a method to reconstruct a three-dimensional wing model of a bird from the wing outline and the feather shafts (rachides). The morphological and kinematic parameters can be obtained using the wing model, and numerical or mechanical simulations may also be carried out. To test the effectiveness of the method, we recorded the hovering flight of a hummingbird (Amazilia amazilia) using high-speed cameras and reconstructed the right wing. The wing shape varied substantially within a stroke cycle. Specifically, the maximum and minimum wing areas differed by 18%, presumably due to feather sliding; the wing was bent near the wrist joint, towards the upward direction and opposite to the stroke direction; positive upward camber and the 'washout' twist (monotonic decrease in the angle of incidence from the proximal to distal wing) were observed during both half-strokes; the spanwise distribution of the twist was uniform during downstroke, but an abrupt increase near the wrist joint was found during upstroke.

  7. A combinatorial framework to quantify peak/pit asymmetries in complex dynamics

    NARCIS (Netherlands)

    Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, E.; Laufs, Helmut; Lacasa, Lucas

    2018-01-01

    We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic

  8. Quantifying sleep architecture dynamics and individual differences using big data and Bayesian networks.

    Science.gov (United States)

    Yetton, Benjamin D; McDevitt, Elizabeth A; Cellini, Nicola; Shelton, Christian; Mednick, Sara C

    2018-01-01

    The pattern of sleep stages across a night (sleep architecture) is influenced by biological, behavioral, and clinical variables. However, traditional measures of sleep architecture, such as stage proportions, fail to capture sleep dynamics. Here we quantify the impact of individual differences on the dynamics of sleep architecture and determine which factors, or set of factors, best predict the next sleep stage from current stage information. We investigated the influence of age, sex, body mass index, time of day, and sleep time on static (e.g. minutes in stage, sleep efficiency) and dynamic measures of sleep architecture (e.g. transition probabilities and stage duration distributions) using a large dataset of 3202 nights from a non-clinical population. Multi-level regressions show that sex affects the duration of all Non-Rapid Eye Movement (NREM) stages, and that age has a curvilinear relationship with Wake After Sleep Onset (WASO) and slow wave sleep (SWS) minutes. Bayesian network modeling reveals that sleep architecture depends on time of day, total sleep time, age and sex, but not BMI. Older adults, particularly males, have shorter bouts (more fragmentation) of Stage 2 and SWS, and they transition less frequently to these stages. Additionally, we showed that the next sleep stage and its duration can be optimally predicted by the prior 2 stages and age. Our results demonstrate the potential benefit of big data and Bayesian network approaches in quantifying the static and dynamic architecture of normal sleep.
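The dynamic measures mentioned above (stage-transition probabilities) can be estimated from a scored hypnogram by simple counting. A minimal first-order sketch; the stage labels and the example epoch sequence are illustrative, not the study's data:

```python
from collections import Counter, defaultdict

def transition_probabilities(hypnogram):
    """Estimate first-order stage-transition probabilities from a
    sequence of scored epochs, e.g. ['W', 'N1', 'N2', ...]: count each
    current-stage -> next-stage transition, then normalize per stage."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(hypnogram, hypnogram[1:]):
        counts[cur][nxt] += 1
    return {stage: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for stage, ctr in counts.items()}

# Illustrative hypnogram (one label per epoch)
stages = ['W', 'N1', 'N2', 'N2', 'SWS', 'N2', 'REM', 'N1', 'N2', 'N2']
probs = transition_probabilities(stages)
```

Conditioning on the prior two stages instead of one, as the study's best predictor does, only requires counting pairs `(prev, cur)` as the keys.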

  9. Dynamic stability of self-similar solutions for a plasma pinch

    International Nuclear Information System (INIS)

    Ma, Sifeng.

    1988-01-01

    Linear magnetohydrodynamic (MHD) stability theory is applied to a class of self-similar solutions which describe implosion, expansion and oscillation of an infinitely conducting plasma column. The equations of perturbation are derived in the Lagrangian coordinate system. Numerical procedures via the finite-element method are formulated, and general aspects of dynamic stability are discussed. The dynamic stability of the column when it is oscillatory is studied in detail using Floquet theory, and the characteristic exponent is calculated numerically. A pinch configuration is examined. It is found that self-similar oscillations in general destabilize the continua in the MHD spectrum, and parametric instability results.
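The Floquet machinery referred to above decides the stability of a periodically driven linear system from its monodromy matrix. As a generic stand-in for the paper's pinch equations (which are not reproduced here), a sketch for Hill's equation x'' + (a + b·cos t)·x = 0: a monodromy-matrix trace of magnitude above 2 signals parametric instability.

```python
import math

def monodromy_trace(a, b, steps=4000):
    """Integrate Hill's equation x'' + (a + b*cos(t)) x = 0 over one
    driving period (2*pi) with RK4 for two independent initial
    conditions, and return the trace of the monodromy matrix.
    |trace| <= 2: bounded (stable) solutions; |trace| > 2: parametric
    instability (a characteristic multiplier leaves the unit circle)."""
    T = 2.0 * math.pi
    h = T / steps

    def deriv(t, x, v):
        return v, -(a + b * math.cos(t)) * x

    def propagate(x, v):
        t = 0.0
        for _ in range(steps):
            k1x, k1v = deriv(t, x, v)
            k2x, k2v = deriv(t + h/2, x + h/2*k1x, v + h/2*k1v)
            k3x, k3v = deriv(t + h/2, x + h/2*k2x, v + h/2*k2v)
            k4x, k4v = deriv(t + h, x + h*k3x, v + h*k3v)
            x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
            v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
            t += h
        return x, v

    x1, v1 = propagate(1.0, 0.0)   # image of (1, 0): first column
    x2, v2 = propagate(0.0, 1.0)   # image of (0, 1): second column
    return x1 + v2
```

For b = 0 the trace is analytically 2·cos(2π√a), and a = 1/4 with small b > 0 sits inside the first Mathieu instability tongue, so the two regimes are easy to probe.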

  10. Quantifying the interplay between environmental and social effects on aggregated-fish dynamics.

    Directory of Open Access Journals (Sweden)

    Manuela Capello

    Full Text Available Demonstrating and quantifying the respective roles of social interactions and external stimuli in governing fish dynamics is key to understanding fish spatial distribution. While seminal studies have contributed to our understanding of fish spatial organization in schools, little experimental information is available on fish in their natural environment, where aggregations often occur in the presence of spatial heterogeneities. Here, we applied novel modeling approaches coupled to accurate acoustic tracking to study the dynamics of a group of gregarious fish in a heterogeneous environment. To this end, we acoustically tracked with submeter resolution the positions of twelve small pelagic fish (Selar crumenophthalmus) in the presence of an anchored floating object, constituting a point of attraction for several fish species. We constructed a field-based model for aggregated-fish dynamics, deriving effective interactions for both social and external stimuli from experiments. We tuned the model parameters that best fit the experimental data and quantified the importance of social interactions in the aggregation, providing an explanation for the spatial structure of fish aggregations found around floating objects. Our results can be generalized to other gregarious species and contexts as long as it is possible to observe the fine-scale movements of a subset of individuals.

  11. Dynamic measurements of reflux for quantifying gastroesophageal reflux in patients with prolonged esophageal transit time

    International Nuclear Information System (INIS)

    Gratz, K.F.; Creutzig, H.; Schmiedt, W.; Oelert, H.; Hundeshagen, H.; Medizinische Hochschule Hannover

    1985-01-01

    A combination of a radionuclide transit test and a dynamic gastroesophageal scan was evaluated in normal volunteers, in patients with achalasia treated by pneumatic dilatation (n=34) or Heller myotomy (n=21). Interpretation of 31 of 57 examinations done with usual scintiscan was not possible because of too high esophageal tracer retention. Only one case could not be interpreted with the modified technique. Gastroesophageal reflux was detected and quantified in this manner in 8 patients, 6 more than with the usual scintiscan. 7 of these 8 patients have had Heller procedure, 1 patient even combined with fundoplasty. (orig.)

  12. Dynamic measurements of reflux for quantifying gastroesophageal reflux in patients with prolonged esophageal transit time

    Energy Technology Data Exchange (ETDEWEB)

    Gratz, K.F.; Creutzig, H.; Schmiedt, W.; Oelert, H.; Hundeshagen, H.

    1985-05-01

    A combination of a radionuclide transit test and a dynamic gastroesophageal scan was evaluated in normal volunteers, in patients with achalasia treated by pneumatic dilatation (n=34) or Heller myotomy (n=21). Interpretation of 31 of 57 examinations done with usual scintiscan was not possible because of too high esophageal tracer retention. Only one case could not be interpreted with the modified technique. Gastroesophageal reflux was detected and quantified in this manner in 8 patients, 6 more than with the usual scintiscan. 7 of these 8 patients have had Heller procedure, 1 patient even combined with fundoplasty.

  13. Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field

    Directory of Open Access Journals (Sweden)

    Yingmin Li

    2010-01-01

    Full Text Available The reasonableness of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement to the similarity analysis of simulated multi-point ground motions and actual seismic array records. Analysis results show that the DTW distance method not only quantitatively reflects the similarity of a simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition of actual multi-point ground motion fields.
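The DTW distance underlying the method can be written in a few lines. A minimal sketch with the absolute difference as the local cost; practical applications often add a warping window and path normalization, which are omitted here:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences,
    via the classic O(len(a)*len(b)) dynamic program with |x - y| as
    the local cost. Cell D[i][j] holds the cheapest alignment of the
    first i points of `a` with the first j points of `b`."""
    inf = float('inf')
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # match, insertion, or deletion step on the warping path
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

Identical sequences score 0, and a one-step time shift ([0, 0, 1, 2] vs. [0, 1, 2, 2]) also scores 0, whereas a pointwise Euclidean comparison would not; this tolerance to local time shifts is what makes DTW suited to comparing ground-motion records.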

  14. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  15. PCB Food Web Dynamics Quantify Nutrient and Energy Flow in Aquatic Ecosystems.

    Science.gov (United States)

    McLeod, Anne M; Paterson, Gordon; Drouillard, Ken G; Haffner, G Douglas

    2015-11-03

    Measuring in situ nutrient and energy flows in spatially and temporally complex aquatic ecosystems represents a major ecological challenge. Food web structure and energy and nutrient budgets are difficult to measure, and it is becoming more important to quantify both energy and nutrient flow to determine how food web processes and structure are being modified by multiple stressors. We propose that polychlorinated biphenyl (PCB) congeners represent an ideal tracer to quantify in situ energy and nutrient flow between trophic levels. Here, we demonstrate how an understanding of PCB congener bioaccumulation dynamics provides multiple direct measurements of energy and nutrient flow in aquatic food webs. To demonstrate this novel approach, we quantified nitrogen (N), phosphorus (P) and caloric turnover rates for Lake Huron lake trout, and reveal how these processes are regulated by both growth rate and fish life history. Although minimal nutrient recycling was observed in young growing fish, slow-growing, older lake trout (>5 yr) recycled an average of 482 tonnes·yr⁻¹ of N and 45 tonnes·yr⁻¹ of P, and assimilated 22 TJ·yr⁻¹ of energy. Compared to total P loading rates of 590 tonnes·yr⁻¹, the recycling of primarily bioavailable nutrients by fish plays an important role in regulating the nutrient states of oligotrophic lakes.

  16. A comparative analysis of alternative approaches for quantifying nonlinear dynamics in cardiovascular system.

    Science.gov (United States)

    Chen, Yun; Yang, Hui

    2013-01-01

    Heart rate variability (HRV) analysis has emerged as an important research topic for evaluating autonomic cardiac function. However, traditional time- and frequency-domain analyses characterize and quantify only linear and stationary phenomena. In the present investigation, we made a comparative analysis of three alternative approaches (i.e., wavelet multifractal analysis, Lyapunov exponents and multiscale entropy analysis) for quantifying nonlinear dynamics in heart rate time series. Note that these extracted nonlinear features provide information about nonlinear scaling behaviors and the complexity of cardiac systems. To evaluate the performance, we used 24-hour HRV recordings from 54 healthy subjects and 29 heart failure patients, available in PhysioNet. The three nonlinear methods are evaluated not only individually but also in combination using three classification algorithms, i.e., linear discriminant analysis, quadratic discriminant analysis and k-nearest neighbors. Experimental results show that the three nonlinear methods capture nonlinear dynamics from different perspectives and that the combined feature set achieves the best performance, i.e., sensitivity 97.7% and specificity 91.5%. Collectively, nonlinear HRV features show promise for identifying disorders in autonomic cardiovascular function.
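Of the three features, multiscale entropy is built on sample entropy, which can be sketched directly. This is a common variant (implementations differ slightly in how many templates they count), with an absolute tolerance `r` rather than the usual fraction of the signal's standard deviation:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    m-length templates matching within Chebyshev distance r, and A
    counts pairs of (m+1)-length templates; self-matches are excluded.
    Lower values indicate more regular (more self-similar) series."""
    def count_matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(u - v) for u, v in
                       zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    B = count_matches(m)
    A = count_matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float('inf')
```

A strictly periodic series like [0, 1, 0, 1, …] yields a value near zero, because almost every m-length match extends to an (m+1)-length match; multiscale entropy then repeats this computation on coarse-grained copies of the series.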

  17. Quantifying terrestrial ecosystem carbon dynamics in the Jinsha watershed, Upper Yangtze, China from 1975 to 2000

    Science.gov (United States)

    Zhao, Shuqing; Liu, Shuguang; Yin, Runsheng; Li, Zhengpeng; Deng, Yulin; Tan, Kun; Deng, Xiangzheng; Rothstein, David; Qi, Jiaguo

    2010-01-01

    Quantifying the spatial and temporal dynamics of carbon stocks in terrestrial ecosystems and carbon fluxes between the terrestrial biosphere and the atmosphere is critical to our understanding of regional patterns of carbon budgets. Here we use the General Ensemble biogeochemical Modeling System to simulate the terrestrial ecosystem carbon dynamics in the Jinsha watershed of China’s upper Yangtze basin from 1975 to 2000, based on unique combinations of spatial and temporal dynamics of major driving forces, such as climate, soil properties, nitrogen deposition, and land use and land cover changes. Our analysis demonstrates that the Jinsha watershed ecosystems acted as a carbon sink during the period of 1975–2000, with an average rate of 0.36 Mg/ha/yr, primarily resulting from regional climate variation and local land use and land cover change. Vegetation biomass accumulation accounted for 90.6% of the sink, while soil organic carbon loss before 1992 led to a lower net gain of carbon in the watershed, and after that soils became a small sink. Ecosystem carbon sink/source patterns showed a high degree of spatial heterogeneity. Carbon sinks were associated with forest areas without disturbances, whereas carbon sources were primarily caused by stand-replacing disturbances. It is critical to adequately represent the detailed fast-changing dynamics of land use activities in regional biogeochemical models to determine the spatial and temporal evolution of regional carbon sink/source patterns.

  18. Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction

    Directory of Open Access Journals (Sweden)

    Geoff Boeing

    2016-11-01

    Full Text Available Nearly all nontrivial real-world systems are nonlinear dynamical systems. Chaos describes certain nonlinear dynamical systems that have a very sensitive dependence on initial conditions. Chaotic systems are always deterministic and may be very simple, yet they produce completely unpredictable and divergent behavior. Systems of nonlinear equations are difficult to solve analytically, and scientists have relied heavily on visual and qualitative approaches to discover and analyze the dynamics of nonlinearity. Indeed, few fields have drawn as heavily from visualization methods for their seminal innovations: from strange attractors, to bifurcation diagrams, to cobweb plots, to phase diagrams and embedding. Although the social sciences are increasingly studying these types of systems, seminal concepts remain murky or loosely adopted. This article has three aims. First, it argues for several visualization methods to critically analyze and understand the behavior of nonlinear dynamical systems. Second, it uses these visualizations to introduce the foundations of nonlinear dynamics, chaos, fractals, self-similarity and the limits of prediction. Finally, it presents Pynamical, an open-source Python package to easily visualize and explore nonlinear dynamical systems’ behavior.

  19. Quantifying changes in spatial patterns of surface air temperature dynamics over several decades

    Science.gov (United States)

    Zappalà, Dario A.; Barreiro, Marcelo; Masoller, Cristina

    2018-04-01

    We study daily surface air temperature (SAT) reanalysis in a grid over the Earth's surface to identify and quantify changes in SAT dynamics during the period 1979-2016. By analysing the Hilbert amplitude and frequency we identify the regions where relative variations are most pronounced (larger than ±50 % for the amplitude and ±100 % for the frequency). Amplitude variations are interpreted as due to changes in precipitation or ice melting, while frequency variations are interpreted as due to a northward shift of the inter-tropical convergence zone (ITCZ) and to a widening of the rainfall band in the western Pacific Ocean. The ITCZ is the ascending branch of the Hadley cell, and thus by affecting the tropical atmospheric circulation, ITCZ migration has far-reaching climatic consequences. As the methodology proposed here can be applied to many other geophysical time series, our work will stimulate new research that will advance the understanding of climate change impacts.
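The Hilbert amplitude analyzed above is the modulus of the analytic signal, which can be computed with an FFT. A minimal sketch using NumPy; the sampling choices below (daily samples, a pure annual cycle) are illustrative, not the study's reanalysis data:

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT (Hilbert transform):
    zero out negative frequencies and double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 365.0                            # e.g. daily samples
t = np.arange(0, 4, 1 / fs)           # four "years" of data
x = 0.8 * np.cos(2 * np.pi * 1.0 * t) # annual cycle, amplitude 0.8
amp = np.abs(analytic_signal(x))      # instantaneous Hilbert amplitude
```

The instantaneous frequency follows from the unwrapped phase of the same analytic signal (`np.unwrap(np.angle(...))` differentiated in time), which is the quantity whose relative variations the study maps.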

  20. A high-throughput assay for quantifying appetite and digestive dynamics

    Science.gov (United States)

    Guggiana-Nilo, Drago; Soucy, Edward; Song, Erin Yue; Lei Wee, Caroline; Engert, Florian

    2015-01-01

    Food intake and digestion are vital functions, and their dysregulation is fundamental to many human diseases. Current methods do not support their dynamic quantification on large scales in unrestrained vertebrates. Here, we combine an infrared macroscope with fluorescently labeled food to quantify feeding behavior and intestinal nutrient metabolism with high temporal resolution, sensitivity, and throughput in naturally behaving zebrafish larvae. Using this method and rate-based modeling, we demonstrate that zebrafish larvae match nutrient intake to their bodily demand and adjust their digestion rate according to the ingested meal size. Such adaptive feedback mechanisms make this model system amenable to identifying potential chemical modulators. As proof of concept, we demonstrate that nicotine, l-lysine, ghrelin, and insulin have an impact on food intake analogous to that in mammals. Consequently, the method presented here will promote large-scale translational research of food intake and digestive function in a naturally behaving vertebrate. PMID:26108871
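The rate-based modeling mentioned above can be illustrated with the simplest case: first-order decay of gut content after an ingestion pulse, with the rate constant recovered by a log-linear fit. All names and numbers here are illustrative, not the study's model:

```python
import math

def gut_content(meal, k, times):
    """First-order digestion sketch: after an ingestion pulse `meal`
    (e.g. fluorescence units), gut content decays as G(t) = meal*e^(-kt)."""
    return [meal * math.exp(-k * t) for t in times]

def estimate_rate(times, g):
    """Recover the digestion rate k from a log-linear least-squares fit
    of ln G(t) against t (the fitted slope is -k)."""
    logs = [math.log(v) for v in g]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

times = [0.0, 0.5, 1.0, 1.5, 2.0]          # hours after the meal
g = gut_content(100.0, k=0.7, times=times)  # synthetic measurements
k_hat = estimate_rate(times, g)
```

Comparing `k_hat` across meal sizes is the kind of readout that reveals the meal-size-dependent digestion rates reported in the abstract.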

  1. Quantifying nonergodicity in nonautonomous dissipative dynamical systems: An application to climate change

    Science.gov (United States)

    Drótos, Gábor; Bódai, Tamás; Tél, Tamás

    2016-08-01

    In nonautonomous dynamical systems, like in climate dynamics, an ensemble of trajectories initiated in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time, but this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory would differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero and typically nonzero, respectively. A main conclusion is that the difference of the average from zero, which we call the bias, is a useful measure of nonergodicity, for any window length. In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases, and this implies that temporal and ensemble averages differ in dynamical systems with finite window lengths. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.
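The bias (the ensemble mean of the nonergodic mismatch) can be illustrated on a toy drifting ensemble, in which the snapshot ensemble average moves in time so that finite-window temporal averages systematically lag it. This is a deliberately trivial stand-in for the paper's snapshot-attractor construction; all parameters and names are illustrative:

```python
import random

def nonergodic_bias(n_traj=500, window=50, drift=0.1, seed=0):
    """Toy illustration of the nonergodic mismatch. Trajectories are
    x_i(t) = drift*t + r_i, so the ensemble average at time T is
    drift*T + mean(r). The temporal average of x_i over the last
    `window` steps lags behind it by drift*(window-1)/2; averaging the
    mismatch over the ensemble gives the bias, nonzero iff drift != 0."""
    rng = random.Random(seed)
    T = 200
    offsets = [rng.gauss(0.0, 1.0) for _ in range(n_traj)]
    ens_avg = drift * T + sum(offsets) / n_traj   # snapshot average at T
    mismatches = []
    for r in offsets:
        traj = [drift * t + r for t in range(T - window + 1, T + 1)]
        temporal_avg = sum(traj) / window
        mismatches.append(temporal_avg - ens_avg)
    return sum(mismatches) / len(mismatches)
```

With zero drift the same computation returns a bias of zero while individual mismatches still scatter, mirroring the paper's distinction between the bias and the spread (standard deviation) of the mismatch.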

  2. Cholera and shigellosis in Bangladesh: similarities and differences in population dynamics under climate forcing

    Science.gov (United States)

    Pascual, M.; Cash, B.; Reiner, R.; King, A.; Emch, M.; Yunus, M.; Faruque, A. S.

    2012-12-01

    The influence of climate variability on the population dynamics of infectious diseases is considered a large scale, regional, phenomenon, and as such, has been previously addressed for cholera with temporal models that do not incorporate fine-scale spatial structure. In our previous work, evidence for a role of ENSO (El Niño Southern Oscillation) on cholera in Bangladesh was elucidated, and shown to influence the regional climate through precipitation. With a probabilistic spatial model for cholera dynamics in the megacity of Dhaka, we found that the action of climate variability (ENSO and flooding) is localized: there is a climate-sensitive urban core that acts to propagate risk to the rest of the city. Here, we consider long-term surveillance data for shigellosis, another diarrheal disease that coexists with cholera in Bangladesh. We compare the patterns of association with climate variables for these two diseases in a rural setting, as well as the spatial structure in their spatio-temporal dynamics in an urban one. Evidence for similar patterns is presented, and discussed in the context of the differences in the routes of transmission of the two diseases and the proposed role of an environmental reservoir in cholera. The similarities provide evidence for a more general influence of hydrology and of socio-economic factors underlying human susceptibility and sanitary conditions.

  3. Self-similar dynamic converging shocks - I. An isothermal gas sphere with self-gravity

    Science.gov (United States)

    Lou, Yu-Qing; Shi, Chun-Hui

    2014-07-01

    We explore novel self-similar dynamic evolution of converging spherical shocks in a self-gravitating isothermal gas under conceivable astrophysical situations. The construction of such converging shocks involves a time-reversal operation on feasible flow profiles in self-similar expansion with proper care for the increasing direction of the specific entropy. Pioneered by Guderley in 1942 but without self-gravity so far, self-similar converging shocks are important for implosion processes in aerodynamics, combustion, and inertial fusion. Self-gravity necessarily plays a key role for grossly spherical structures in very broad contexts of astrophysics and cosmology, such as planets, stars, molecular clouds (cores), compact objects, planetary nebulae, supernovae, gamma-ray bursts, supernova remnants, globular clusters, galactic bulges, elliptical galaxies, clusters of galaxies as well as relatively hollow cavity or bubble structures on diverse spatial and temporal scales. Large-scale dynamic flows associated with such quasi-spherical systems (including collapses, accretions, fall-backs, winds and outflows, explosions, etc.) in their initiation, formation, and evolution are likely to encounter converging spherical shocks at times. Our formalism lays an important theoretical basis for pertinent astrophysical and cosmological applications of various converging shock solutions and for developing and calibrating numerical codes. As examples, we describe converging shock triggered star formation, supernova explosions, and void collapses.

  4. Electroencephalogram Similarity Analysis Using Temporal and Spectral Dynamics Analysis for Propofol and Desflurane Induced Unconsciousness

    Directory of Open Access Journals (Sweden)

    Quan Liu

    2018-01-01

    Full Text Available Important information about the state dynamics of the brain during anesthesia is unraveled by electroencephalogram (EEG) approaches. EEG patterns related to neural circuit mechanisms under anesthetics with different molecular targets have recently attracted much attention. Propofol, which acts on gamma-aminobutyric acid (GABA) receptors, is known for a pronounced increase in alpha oscillation. Desflurane shares the same receptor action and should be similar to propofol. To explore their dynamics, EEG at routine surgical levels of anesthetic depth is analyzed using the multitaper spectral method in two groups: propofol (n = 28) and desflurane (n = 23). A time-varying spectrum comparison was undertaken to characterize their properties. Results show that both agents are dominated by slow and alpha waves. In particular, for the increased alpha band feature, propofol unconsciousness shows maximum power at about 10 Hz (mean ± SD; frequency: 10.2 ± 1.4 Hz; peak power, −14.0 ± 1.6 dB), while it is approximately 8 Hz (mean ± SD; frequency: 8.3 ± 1.3 Hz; peak power, −13.8 ± 1.6 dB) for desflurane, with significantly lower frequency-resolved spectra for this band. In addition, the mean power of propofol is much higher than that of desflurane from the alpha to the gamma band, including the slow oscillation. These patterns might provide an EEG biomarker for specific anesthetics. This study suggests that both anesthetics exhibit similar spectral dynamics, which could provide insight into common neural circuit mechanisms. However, differences between them also indicate their uniqueness where relevant.
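The peak-frequency comparison above boils down to locating the maximum of a spectral estimate. A minimal sketch that substitutes a plain DFT periodogram for the multitaper estimator; the signal, rates, and frequencies below are synthetic, not the study's EEG:

```python
import math

def peak_frequency(x, fs):
    """Locate the dominant spectral peak of a real signal via a plain
    DFT periodogram (a simple stand-in for a multitaper estimate).
    Returns the frequency in Hz of the strongest non-DC bin."""
    n = len(x)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):  # positive frequencies, DC excluded
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_p:
            best_k, best_p = k, power
    return best_k * fs / n

fs = 100.0
t = [i / fs for i in range(200)]  # 2 s of data -> 0.5 Hz resolution
# synthetic 'alpha' rhythm at 10 Hz riding on a weaker 1 Hz slow wave
x = [math.sin(2 * math.pi * 10 * ti) + 0.3 * math.sin(2 * math.pi * 1 * ti)
     for ti in t]
```

In practice a multitaper estimate is preferred over a raw periodogram because its variance and spectral leakage are far lower, but the peak-localization step is the same.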

  5. Quantifying Infra-slow Dynamics of Spectral Power and Heart Rate in Sleeping Mice.

    Science.gov (United States)

    Fernandez, Laura M J; Lecci, Sandro; Cardis, Romain; Vantomme, Gil; Béard, Elidie; Lüthi, Anita

    2017-08-02

    Three vigilance states dominate mammalian life: wakefulness, non-rapid eye movement (non-REM) sleep, and REM sleep. As more neural correlates of behavior are identified in freely moving animals, this three-fold subdivision becomes too simplistic. During wakefulness, ensembles of global and local cortical activities, together with peripheral parameters such as pupillary diameter and sympathovagal balance, define various degrees of arousal. It remains unclear to what extent sleep also forms a continuum of brain states, within which the degree of resilience to sensory stimuli and arousability, and perhaps other sleep functions, vary gradually, and how peripheral physiological states co-vary. Research that advances methods to monitor multiple parameters during sleep, and that attributes functions to constellations of these parameters, is central to refining our understanding of sleep as a multifunctional process during which many beneficial effects must be executed. Identifying novel parameters characterizing sleep states will open opportunities for novel diagnostic avenues in sleep disorders. We present a procedure to describe dynamic variations of mouse non-REM sleep states via the combined monitoring and analysis of electroencephalogram (EEG)/electrocorticogram (ECoG), electromyogram (EMG), and electrocardiogram (ECG) signals using standard polysomnographic recording techniques. Using this approach, we found that mouse non-REM sleep is organized into cycles of coordinated neural and cardiac oscillations that generate successive 25-s intervals of high and low fragility to external stimuli. Central and autonomic nervous systems are thus coordinated to form behaviorally distinct sleep states during consolidated non-REM sleep.
We present surgical manipulations for polysomnographic (i.e., EEG/EMG combined with ECG) monitoring to track these cycles in the freely sleeping mouse, the analysis to quantify their dynamics, and the acoustic stimulation protocols to

  6. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    Science.gov (United States)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid-scale ocean vertical mixing processes. These parameters are typically estimated using Earth System Models of Intermediate Complexity (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to arbitrary parameter settings, and a Markov chain Monte Carlo (MCMC) method to estimate the posterior probability density function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
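    The estimation chain described here (model surrogate, Bayesian likelihood, MCMC) can be illustrated on a toy problem. This is a hedged sketch with a one-parameter linear stand-in for the emulated climate model and a plain Metropolis sampler; the forcing series, noise level, and flat prior are invented for illustration.

```python
import math
import random

random.seed(1)

# hypothetical stand-in "emulator": temperature anomaly linear in the
# climate sensitivity parameter S (the real study emulates a 3D EMIC)
forcing = [0.1 * k for k in range(20)]
def emulate(S):
    return [S * f for f in forcing]

# synthetic "observations" generated with a known true S
true_S, sigma = 3.0, 0.2
obs = [true_S * f + random.gauss(0, sigma) for f in forcing]

def log_like(S):
    """Gaussian log-likelihood of the observations given parameter S."""
    return -0.5 * sum((o - m) ** 2 for o, m in zip(obs, emulate(S))) / sigma ** 2

# Metropolis sampler with a flat prior on S in (0, 10)
S, ll, samples = 2.0, log_like(2.0), []
for _ in range(10000):
    prop = S + random.gauss(0, 0.1)
    if 0.0 < prop < 10.0:
        llp = log_like(prop)
        if math.log(random.random()) < llp - ll:
            S, ll = prop, llp
    samples.append(S)

post = samples[2000:]            # drop burn-in
mean_S = sum(post) / len(post)
print(round(mean_S, 1))
```

    The posterior mean recovers the true parameter; in the actual study the likelihood compares emulated fields against surface air temperature and ocean heat content observations.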

  7. Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests

    Energy Technology Data Exchange (ETDEWEB)

    Krummel, J.R.; Su, Haiping [Argonne National Lab., IL (United States); Fox, J. [East-West Center, Honolulu, HI (United States); Yarnasan, S.; Ekasingh, M. [Chiang Mai Univ. (Thailand)

    1995-06-01

    Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions with measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

  8. Quantifying heterogeneity of lesion uptake in dynamic contrast enhanced MRI for breast cancer diagnosis

    International Nuclear Information System (INIS)

    Karahaliou, A; Skiadopoulos, S; Yiakoumelos, A; Costaridou, L; Vassiou, K; Kanavou, T

    2009-01-01

    The current study investigates whether texture features extracted from lesion kinetics feature maps can be used for breast cancer diagnosis. Fifty-five women with 57 breast lesions (27 benign, 30 malignant) were subjected to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) on a 1.5 T system. A linear-slope model was fitted pixel-wise to a representative lesion slice time series, and the fitted parameters were used to create three kinetic maps (washout, time to peak enhancement and peak enhancement). Twenty-eight grey-level co-occurrence matrix features were extracted from each lesion kinetic map. The ability of the texture features of each map to discriminate malignant from benign lesions was investigated using a Probabilistic Neural Network classifier. Additional classification was performed by combining the classification outputs of the most discriminating feature subsets from the three maps via majority voting. The combined scheme outperformed classification based on individual maps, achieving an area under the receiver operating characteristic curve of 0.960 ± 0.029. Results suggest that the heterogeneity of breast lesion kinetics, as quantified by texture analysis, may contribute to computer-assisted tissue characterization in DCE-MRI.
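    Grey-level co-occurrence matrix (GLCM) features of the kind used here can be computed directly. A minimal sketch for one offset (horizontal neighbours) and three classic features follows; the tiny example maps are invented, and a real pipeline would quantize the kinetic maps to more grey levels and pool several offsets.

```python
def glcm_features(img, levels):
    """Normalized co-occurrence matrix for horizontal neighbour pairs,
    reduced to three classic texture features."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    p = [[c / total for c in row] for row in counts]
    pairs = [(i, j) for i in range(levels) for j in range(levels)]
    contrast = sum(p[i][j] * (i - j) ** 2 for i, j in pairs)
    energy = sum(p[i][j] ** 2 for i, j in pairs)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs)
    return contrast, energy, homogeneity

flat = [[1, 1, 1, 1]] * 4                      # homogeneous kinetic map
checker = [[0, 1, 0, 1], [1, 0, 1, 0]] * 2    # maximally varying map
print(glcm_features(flat, 2))
print(glcm_features(checker, 2))
```

    A homogeneous map gives zero contrast and maximal energy/homogeneity, while the checkerboard gives the opposite, which is exactly the kind of heterogeneity signal the classifier exploits.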

  9. Quantifying the impact of the Three Gorges Dam on the thermal dynamics of the Yangtze River

    Science.gov (United States)

    Cai, Huayang; Piccolroaz, Sebastiano; Huang, Jingzheng; Liu, Zhiyong; Liu, Feng; Toffolon, Marco

    2018-05-01

    This study examines the impact of the world’s largest dam, the Three Gorges Dam (TGD), on the thermal dynamics of the Yangtze River (China). The analysis uses long-term observations of river water temperature (RWT) at four stations and reconstructs the RWT that would have occurred in the absence of the TGD. Relative to pre-TGD conditions, RWT consistently warmed in the region due to the increase in air temperature (AT). In addition, the analysis demonstrates that the TGD significantly affected RWT in the downstream reach. At the downstream station closest to the TGD (Yichang), the annual cycle of RWT experienced a damped response to AT and a marked seasonal alteration: warming during all seasons except for spring and early summer, which were characterized by cooling. Both effects were a direct consequence of the larger thermal inertia of the massive water volume stored in the TGD reservoir, causing the downstream reach to be more thermally resilient. The approach used here to quantify the separate contributions of climate and human interventions on RWT can be used to set scientific guidelines for river management and conservation planning strategies.
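    The reconstruction step (predicting the RWT that would have occurred without the dam) can be illustrated with a simple scheme: calibrate an RWT-AT relation on pre-dam data, apply it to post-dam AT, and read the dam effect off the residual. The study's actual reconstruction model is more elaborate; the linear stand-in and all numbers below are invented for illustration.

```python
import random

random.seed(2)

# synthetic record: pre-dam river water temperature (RWT) tracks air
# temperature (AT) linearly; post-dam, the reservoir cools RWT by 1.5 C
at_pre = [random.uniform(5, 30) for _ in range(120)]
rwt_pre = [0.8 * a + 3.0 + random.gauss(0, 0.3) for a in at_pre]
at_post = [random.uniform(5, 30) for _ in range(120)]
rwt_post = [0.8 * a + 3.0 - 1.5 + random.gauss(0, 0.3) for a in at_post]

# calibrate RWT = k*AT + b on the pre-dam period (ordinary least squares)
n = len(at_pre)
ma, mr = sum(at_pre) / n, sum(rwt_pre) / n
k = (sum((a - ma) * (r - mr) for a, r in zip(at_pre, rwt_pre))
     / sum((a - ma) ** 2 for a in at_pre))
b = mr - k * ma

# dam effect = observed post-dam RWT minus its no-dam reconstruction
effect = sum(r - (k * a + b) for a, r in zip(at_post, rwt_post)) / len(at_post)
print(round(effect, 1))
```

    The recovered effect matches the injected cooling, separating the anthropogenic signal from the climate-driven AT trend.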

  10. Quantifying microstructural dynamics and electrochemical activity of graphite and silicon-graphite lithium ion battery anodes

    Science.gov (United States)

    Pietsch, Patrick; Westhoff, Daniel; Feinauer, Julian; Eller, Jens; Marone, Federica; Stampanoni, Marco; Schmidt, Volker; Wood, Vanessa

    2016-09-01

    Despite numerous studies presenting advances in tomographic imaging and analysis of lithium ion batteries, graphite-based anodes have received little attention. Weak X-ray attenuation of graphite and, as a result, poor contrast between graphite and the other carbon-based components in an electrode pore space renders data analysis challenging. Here we demonstrate operando tomography of weakly attenuating electrodes during electrochemical (de)lithiation. We use propagation-based phase contrast tomography to facilitate the differentiation between weakly attenuating materials and apply digital volume correlation to capture the dynamics of the electrodes during operation. After validating that we can quantify the local electrochemical activity and microstructural changes throughout graphite electrodes, we apply our technique to graphite-silicon composite electrodes. We show that microstructural changes that occur during (de)lithiation of a pure graphite electrode are of the same order of magnitude as spatial inhomogeneities within it, while strain in composite electrodes is locally pronounced and introduces significant microstructural changes.

  11. Quantifying the inflammatory activity in Crohn's disease using CE dynamic MRI

    International Nuclear Information System (INIS)

    Pauls, S.; Schmidt, S.A.; Brambs, H.J.; Gabelmann, A.; Kratzer, W.; Mittrach, C.; Adler, G.; Rieber, A.

    2003-01-01

    Purpose: Evaluation of dynamic contrast-enhanced MRI in patients with Crohn's disease to assess local inflammatory activity. Material and Methods: Prospective study of 13 patients with histologically proven Crohn's disease. Axial and coronal slices were acquired on a 1.5 T MR system (Magnetom Vision, Siemens, Germany): T1 flash 2D (TR 72.5 ms, TE 4.1 ms), T2 (TR 2730 ms, TE 138 ms), and fat-saturated post-contrast T1 turbo-flash sequences (TR 94.2 ms, TE 4.1 ms; Magnevist®, 0.2 ml/kg, flow 4 ml/s). In the area of maximal thickening of the terminal ileal wall, axial dynamic T1 sequences (TR 11 ms, TE 4.2 ms) were acquired every 1.5 s after contrast medium application for a total duration of 1 min. Contrast uptake was measured by a subjective semiquantitative score and computer-assisted ROI evaluation. MR parameters were correlated with the CDAI (Crohn's disease activity index) and SAI (severe activity index). Results: Contrast uptake in the intestinal wall occurred after 18.5 s (range: 3.0-28.0), and the contrast upslope until the plateau phase lasted 16.1 s (range: 8.0-50.0). Maximum contrast enhancement in the bowel wall was 266% (105-450%) of baseline. After maximum contrast uptake, we observed a plateau phase in all cases for the total duration of the measurement. A significant correlation existed for maximum contrast uptake with the CDAI (r = 0.591; p = 0.033), for the beginning of the contrast upslope with the time until the plateau phase (r = 0.822; p = 0.001), and for the time until the plateau phase with the CDAI (r = 0.562; p = 0.046). The CDAI averaged 108 (median 106); the SAI averaged 114 (median 115). The SAI correlated significantly with the CDAI (r = 0.874). Maximum contrast uptake, beginning of contrast upslope, and time until plateau phase were independent of creeping fat, local lymphadenitis, laboratory parameters, temperature, body mass index, heart rate and systolic blood pressure. Conclusion: Dynamic MRI enables quantification of the local inflammatory activity of the bowel wall in patients with Crohn's disease.

  12. Universal self-similar dynamics of relativistic and nonrelativistic field theories near nonthermal fixed points

    Science.gov (United States)

    Piñeiro Orioli, Asier; Boguslavski, Kirill; Berges, Jürgen

    2015-07-01

    We investigate universal behavior of isolated many-body systems far from equilibrium, which is relevant for a wide range of applications from ultracold quantum gases to high-energy particle physics. The universality is based on the existence of nonthermal fixed points, which represent nonequilibrium attractor solutions with self-similar scaling behavior. The corresponding dynamic universality classes turn out to be remarkably large, encompassing both relativistic as well as nonrelativistic quantum and classical systems. For the examples of nonrelativistic (Gross-Pitaevskii) and relativistic scalar field theory with quartic self-interactions, we demonstrate that infrared scaling exponents as well as scaling functions agree. We perform two independent nonperturbative calculations, first by using classical-statistical lattice simulation techniques and second by applying a vertex-resummed kinetic theory. The latter extends kinetic descriptions to the nonperturbative regime of overoccupied modes. Our results open new perspectives to learn from experiments with cold atoms aspects about the dynamics during the early stages of our universe.

  13. Intelligent method of plant dynamics behavior estimation by effectively applying similar cases

    International Nuclear Information System (INIS)

    Gofuku, Akio; Numoto, Atsushi; Yoshikawa, Hidekazu

    1994-01-01

    For the efficient execution of dynamic simulations of engineering systems, it is important to construct suitable mathematical models. In constructing these models, one must estimate the system's behavior in order to anticipate the phenomena that need to be modeled. Case-based reasoning is considered a powerful tool for estimating the outline of a system's behavior, because we often estimate behavior from similar cases stored as experience or in the literature. In this study, a technique based on similar cases is investigated for estimating the outline of the time responses of several important variables of pressurized water reactor (PWR) plants during a small-break loss-of-coolant accident (SBLOCA). The cases registered in the case base are gathered from various reports and the authors' numerical simulations of SBLOCAs in PWR plants. The functions used in case retrieval are formed from the characteristic features of SBLOCAs in PWR plants, while the rules used in case refinement are obtained from qualitative and quantitative consideration of the plant behaviors of the cases in the case base. The applicability of the technique is discussed through two simple trials estimating plant behavior. (author)
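    The case-retrieval step can be sketched as nearest-neighbour matching on case feature vectors. Everything below (the feature choices, the cosine similarity metric, and the case values) is a hypothetical stand-in for the paper's retrieval functions.

```python
import math

# toy case base of SBLOCA scenarios; the feature choices (break size,
# primary pressure in MPa, relative ECCS capacity) are hypothetical
cases = {
    "case-A": [0.02, 15.5, 1.0],
    "case-B": [0.05, 15.5, 0.5],
    "case-C": [0.10, 7.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query):
    """Return the name of the stored case most similar to the query."""
    return max(cases, key=lambda name: cosine(cases[name], query))

print(retrieve([0.04, 15.0, 0.6]))
```

    A production retrieval function would normalize features to comparable scales before matching; the refinement rules would then adjust the retrieved time responses to the query conditions.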

  14. Inferring Characteristics of Sensorimotor Behavior by Quantifying Dynamics of Animal Locomotion

    Science.gov (United States)

    Leung, KaWai

    Locomotion is one of the most well-studied topics in animal behavior. Much fundamental and clinical research makes use of the locomotion of an animal model to explore various aspects of sensorimotor behavior. In the past, most of these studies focused on the population average of a specific trait due to limitations in data collection and processing power. With recent advances in computer vision and statistical modeling techniques, it is now possible to track and analyze large amounts of behavioral data. In this thesis, I present two projects that aim to infer the characteristics of sensorimotor behavior by quantifying the dynamics of locomotion of the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster, shedding light on the statistical dependence between sensing and behavior. In the first project, I investigate the possibility of inferring noxious sensory information from the behavior of Caenorhabditis elegans. I develop a statistical model to infer the heat stimulus level perceived by individual animals from their stereotyped escape responses after stimulation by an IR laser. The model allows quantification of analgesic-like effects of chemical agents or genetic mutations in the worm, while also differentiating perturbations of locomotion behavior that go beyond affecting the sensory system. With this model, I propose experimental designs that allow statistically significant identification of analgesic-like effects. In the second project, I investigate the respective roles of energy budget and stability of locomotion in determining the walking speed distribution of Drosophila melanogaster during aging. The locomotion stability at different ages is estimated from video recordings using Floquet theory, and the power consumption at different locomotion speeds is calculated using a biomechanics model. In conclusion, power consumption, not stability, predicts the locomotion speed distribution at different ages.

  15. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar aircraft model equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer, and servo-actuators. The model, which could be rotated freely about its center of gravity by the aerodynamic moments, together with the flow field, the operator, and the real-time control system, made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, yet it can fly stably in the wind tunnel with the control augmentation of the flight control laws. The experimental results indicate that the model responds well to the operator’s instructions, and its response shows reasonable agreement with simulation results: the difference in the angle-of-attack response is less than 0.5°. The effects of the stability augmentation and attitude control laws were validated in the test, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  16. Quantify Water Extraction by TBP/Dodecane via Molecular Dynamics Simulations

    International Nuclear Information System (INIS)

    Khomami, Bamin; Cui, Shengting; De Almeida, Valmor F.

    2013-01-01

    The purpose of this project is to quantify the interfacial transport of water into the most prevalent nuclear reprocessing solvent extractant mixture, namely tri-butyl-phosphate (TBP) and dodecane, via massively parallel molecular dynamics simulations on the most powerful machines available for open research. Specifically, we will accomplish this objective by evolving the water/TBP/dodecane system up to 1 ms elapsed time, and validate the simulation results by direct comparison with experimentally measured water solubility in the organic phase. The significance of this effort is to demonstrate for the first time that the combination of emerging simulation tools and state-of-the-art supercomputers can provide quantitative information on par with experimental measurements for solvent extraction systems of relevance to the nuclear fuel cycle. Results: Initially, the isolated single-component and single-phase systems were studied, followed by the two-phase, multicomponent counterpart. Specifically, the systems we studied were: pure TBP; pure n-dodecane; the TBP/n-dodecane mixture; and the complete extraction system, the water-TBP/n-dodecane two-phase system, to gain deep insight into the water extraction process. We have completely achieved our goal of simulating the molecular extraction of water molecules into the TBP/n-dodecane mixture up to the saturation point, and obtained favorable comparison with experimental data. Many insights into fundamental molecular level processes and physics were obtained from the process. Most importantly, we found that the dipole moment of the extracting agent is crucially important in affecting the interface roughness and the extraction rate of water molecules into the organic phase. In addition, we have identified shortcomings in the existing OPLS-AA force field potential for long-chain alkanes. The significance of this force field is that it is supposed to be optimized for molecular liquid simulations. 
We found that it failed for dodecane and

  17. Quantifying protein dynamics in the ps–ns time regime by NMR relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Hernández, Griselda; LeMaster, David M., E-mail: david.lemaster@health.ny.gov [University at Albany - SUNY, Wadsworth Center, New York State Department of Health and Department of Biomedical Sciences, School of Public Health (United States)

    2016-11-15

    Both ¹⁵N chemical shift anisotropy (CSA) and sufficiently rapid exchange linebroadening transitions exhibit relaxation contributions that are proportional to the square of the magnetic field. Deconvoluting these contributions is further complicated by residue-dependent variations in protein amide ¹⁵N CSA values, which have proven difficult to measure accurately. Exploiting recently reported improvements in the implementation of T₁ and T₁ρ experiments, field strength-dependent studies have been carried out on the B3 domain of protein G (GB3) as well as on the immunophilin FKBP12 and an H87V variant of that protein in which the major conformational exchange linebroadening transition is suppressed. By applying a zero-frequency spectral density rescaling analysis to the relaxation data collected at magnetic fields from 500 to 900 MHz ¹H, differential residue-specific ¹⁵N CSA values have been obtained for GB3 which correlate with those derived from solid-state and liquid-crystalline NMR measurements to a level similar to the correlation among those previously reported studies. Application of this analysis protocol to FKBP12 demonstrated an efficient quantitation of both weak exchange linebroadening contributions and differential residue-specific ¹⁵N CSA values. Experimental access to such differential residue-specific ¹⁵N CSA values should significantly facilitate more accurate comparisons with molecular dynamics simulations of protein motion that occurs within the timeframe of global molecular tumbling.

  18. Multimodel inference to quantify the relative importance of abiotic factors in the population dynamics of marine zooplankton

    Science.gov (United States)

    Everaert, Gert; Deschutter, Yana; De Troch, Marleen; Janssen, Colin R.; De Schamphelaere, Karel

    2018-05-01

    The effect of multiple stressors on marine ecosystems remains poorly understood and most of the knowledge available is related to phytoplankton. To partly address this knowledge gap, we tested if combining multimodel inference with generalized additive modelling could quantify the relative contribution of environmental variables on the population dynamics of a zooplankton species in the Belgian part of the North Sea. Hence, we have quantified the relative contribution of oceanographic variables (e.g. water temperature, salinity, nutrient concentrations, and chlorophyll a concentrations) and anthropogenic chemicals (i.e. polychlorinated biphenyls) to the density of Acartia clausi. We found that models with water temperature and chlorophyll a concentration explained ca. 73% of the population density of the marine copepod. Multimodel inference in combination with regression-based models are a generic way to disentangle and quantify multiple stressor-induced changes in marine ecosystems. Future-oriented simulations of copepod densities suggested increased copepod densities under predicted environmental changes.
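    Multimodel inference of this kind typically ranks candidate models by an information criterion and converts the scores into Akaike weights, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), which quantify the relative support for each model. A minimal sketch with invented AIC scores for hypothetical copepod-density models:

```python
import math

# invented AIC scores for four candidate models of copepod density
aic = {"temp+chla": 412.3, "temp": 418.9, "salinity": 431.2, "null": 440.0}

# Akaike weights: exp(-delta/2) normalized over the candidate set
best = min(aic.values())
rel = {m: math.exp(-0.5 * (a - best)) for m, a in aic.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}

for m, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(m, round(w, 3))
```

    The weights sum to one, so they can be read as the probability that each candidate is the best model in the set; variable importance is then obtained by summing weights over all models containing a given predictor.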

  19. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  20. Quantifying mercury isotope dynamics in captive Pacific bluefin tuna (Thunnus orientalis)

    Directory of Open Access Journals (Sweden)

    Sae Yun Kwon

    2016-02-01

    Full Text Available Abstract Analyses of mercury (Hg) isotope ratios in fish tissues are used increasingly to infer sources and biogeochemical processes of Hg in natural aquatic ecosystems. Controlled experiments that can couple internal Hg isotope behavior with traditional isotope tracers (δ13C, δ15N) can improve the applicability of Hg isotopes as natural ecological tracers. In this study, we investigated changes in Hg isotope ratios (δ202Hg, Δ199Hg) during bioaccumulation of natural diets in the pelagic Pacific bluefin tuna (Thunnus orientalis; PBFT). Juvenile PBFT were fed a mixture of natural prey and a dietary supplement (60% Loligo opalescens, 31% Sardinops sagax, 9% gel supplement) in captivity for 2914 days, and white muscle tissues were analyzed for Hg isotope ratios and compared to time in captivity and internal turnover of δ13C and δ15N. PBFT muscle tissues equilibrated to Hg isotope ratios of the dietary mixture within ∼700 days, after which we observed a cessation in further shifts in Δ199Hg, and small but significant negative δ202Hg shifts from the dietary mixture. The internal behavior of Δ199Hg is consistent with previous fish studies, which showed an absence of Δ199Hg fractionation during Hg bioaccumulation. The negative δ202Hg shifts can be attributed to either preferential excretion of Hg with higher δ202Hg values or individual variability in captive PBFT feeding preferences and/or consumption rates. The overall internal behavior of Hg isotopes is similar to that described for δ13C and δ15N, though observed Hg turnover was slower compared to carbon and nitrogen. This improved understanding of internal dynamics of Hg isotopes in relation to δ13C and δ15N enhances the applicability of Hg isotope ratios in fish tissues for tracing Hg sources in natural ecosystems.

  1. Levy Stable Processes. From Stationary to Self-Similar Dynamics and Back. An Application to Finance

    International Nuclear Information System (INIS)

    Burnecki, K.; Weron, A.

    2004-01-01

    We employ an ergodic theory argument to demonstrate the foundations of the ubiquity of Levy stable self-similar processes in physics and present a class of models for anomalous and nonextensive diffusion. A relationship between stationary and self-similar models is clarified. The presented stochastic integral description of all Levy stable processes could provide new insights into the mechanism underlying a range of self-similar natural phenomena. Finally, this effect is illustrated by a self-similar approach to financial modelling. (author)

  2. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers.

    Directory of Open Access Journals (Sweden)

    Sebastian Sippel

    Full Text Available Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective
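    One widely used Information Theory Quantifier is the Bandt-Pompe permutation entropy, which measures how evenly the ordinal patterns of a time series are distributed. A self-contained sketch follows; the series lengths and embedding order are illustrative, and the ITQ family also includes statistical complexity measures built on top of such entropies.

```python
import math
import random
from itertools import permutations

def permutation_entropy(x, order):
    """Normalized Bandt-Pompe permutation entropy in [0, 1]."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        # ordinal pattern: the ranking of values inside the window
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values() if c > 0)
    return h / math.log(math.factorial(order))

random.seed(3)
trend = list(range(100))                       # perfectly ordered series
noise = [random.random() for _ in range(100)]  # structureless series
print(round(permutation_entropy(trend, 3), 2))
print(round(permutation_entropy(noise, 3), 2))
```

    A fully ordered series uses a single ordinal pattern (entropy 0), while noise spreads probability across all patterns (entropy near 1); GPP time series fall in between, which is what makes the quantifier discriminative.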

  3. Quantifying the Effect of Open-Mindedness on Opinion Dynamics and Advertising Optimization

    OpenAIRE

    Innes, Clinton R

    2014-01-01

    Group opinion dynamics shape our world in innumerable ways. Societal aspects ranging from the political parties we support to the economic decisions we make in our daily lives are all directly affected in some way by group opinion dynamics. This makes understanding and potentially being able to predict the complex inter-relationships between individuals’ opinions and group opinion dynamics invaluable both scientifically and economically. We propose an aggregation model incorporating ingro...

  4. The effects of gravity on human walking: a new test of the dynamic similarity hypothesis using a predictive model.

    Science.gov (United States)

    Raichlen, David A

    2008-09-01

    The dynamic similarity hypothesis (DSH) suggests that differences in animal locomotor biomechanics are due mostly to differences in size. According to the DSH, when the ratios of inertial to gravitational forces are equal between two animals that differ in size [e.g. at equal Froude numbers, where Froude = velocity²/(gravity × hip height)], their movements can be made similar by multiplying all time durations by one constant, all forces by a second constant and all linear distances by a third constant. The DSH has been generally supported by numerous comparative studies showing that as inertial forces differ (i.e. differences in the centripetal force acting on the animal due to variation in hip heights), animals walk with dynamic similarity. However, humans walking in simulated reduced gravity do not walk with dynamically similar kinematics. The simulated gravity experiments did not completely account for the effects of gravity on all body segments, and the importance of gravity in the DSH requires further examination. This study uses a kinematic model to predict the effects of gravity on human locomotion, taking into account both the effects of gravitational forces on the upper body and on the limbs. Results show that dynamic similarity is maintained in altered gravitational environments. Thus, the DSH does account for differences in the inertial forces governing locomotion (e.g. differences in hip height) as well as differences in the gravitational forces governing locomotion.
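    The Froude-number bookkeeping behind the DSH is easy to make concrete: Fr = v²/(g·h), and dynamic similarity requires matching Fr across conditions. A small sketch, with the hip height, walking speed, and Mars-gravity scenario chosen purely for illustration:

```python
def froude(v, g, h):
    """Froude number Fr = v^2 / (g * h) for speed v, gravity g, hip height h."""
    return v ** 2 / (g * h)

g_earth, g_mars = 9.81, 3.71   # gravitational accelerations, m/s^2
h = 0.9                        # hip height in m (illustrative)
v_earth = 1.4                  # comfortable walking speed on Earth, m/s
fr = froude(v_earth, g_earth, h)

# the speed that preserves Fr, and hence dynamic similarity, under Mars gravity
v_mars = (fr * g_mars * h) ** 0.5
print(round(v_mars, 2))
```

    Because g appears in the denominator, equal-Fr walking is slower in reduced gravity, which is exactly the prediction the reduced-gravity experiments and the kinematic model test.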

  5. Panarchy: discontinuities reveal similarities in the dynamic system structure of ecological and social systems

    Science.gov (United States)

    Debates on the organization, structure and dynamics of ecosystems across scales of space and time have waxed and waned in the literature for a century. From successional theory to ecosystem theories of resilience and robustness, from hierarchy to ascendency to panarchy theory, e...

  6. A Statistical Physics Characterization of the Complex Systems Dynamics: Quantifying Complexity from Spatio-Temporal Interactions

    Science.gov (United States)

    Koorehdavoudi, Hana; Bogdan, Paul

    2016-06-01

    Biological systems are frequently categorized as complex systems due to their capabilities of generating spatio-temporal structures from apparently random decisions. In spite of research on analyzing biological systems, we lack a quantifiable framework for measuring their complexity. To fill this gap, in this paper, we develop a new paradigm to study a collective group of N agents moving and interacting in a three-dimensional space. Our paradigm helps to identify the spatio-temporal states of the motion of the group and their associated transition probabilities. This framework enables the estimation of the free energy landscape corresponding to the identified states. Based on the energy landscape, we quantify missing information, emergence, self-organization and complexity for a collective motion. We show that the collective motion of the group of agents evolves to reach the most probable state, which has the lowest energy level and the least missing information among the possible states. Our analysis demonstrates that natural groups of animals exhibit a higher degree of emergence, self-organization and complexity over time. Consequently, this algorithm can be integrated into new frameworks to engineer collective motions to achieve certain degrees of emergence, self-organization and complexity.
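
The information-theoretic quantities named in the abstract can be illustrated with a minimal sketch. Assuming state-occupation probabilities have already been estimated from the motion data, "missing information" is the Shannon entropy of that distribution; the function names and example distributions below are hypothetical, not the paper's algorithm:

```python
import math

def missing_information(probs):
    """Shannon entropy (in bits) of a distribution over identified
    spatio-temporal states: the 'missing information' of the motion."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def self_organization(probs):
    """Entropy reduction relative to the maximally disordered
    (uniform) distribution over the same number of states."""
    h_max = math.log2(len(probs))
    return h_max - missing_information(probs)

# A group concentrated in one low-energy state carries little missing
# information; a uniformly spread group carries the maximum.
ordered = [0.85, 0.05, 0.05, 0.05]
disordered = [0.25, 0.25, 0.25, 0.25]
```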

  7. Real-Time G-Protein-Coupled Receptor Imaging to Understand and Quantify Receptor Dynamics

    Directory of Open Access Journals (Sweden)

    María S. Aymerich

    2011-01-01

    Full Text Available Understanding the trafficking of G-protein-coupled receptors (GPCRs) and their regulation by agonists and antagonists is fundamental to develop more effective drugs. Optical methods using fluorescent-tagged receptors and spinning disk confocal microscopy are useful tools to investigate membrane receptor dynamics in living cells. The aim of this study was to develop a method to characterize receptor dynamics using this system, which offers the advantage of very fast image acquisition with minimal cell perturbation. However, in short-term assays photobleaching was still a problem. Thus, we developed a procedure to perform a photobleaching-corrected image analysis. A study of short-term dynamics of the long isoform of the dopamine type 2 receptor revealed an agonist-induced increase in the mobile fraction of receptors with a rate of movement of 0.08 μm/s. For long-term assays, the ratio between the relative fluorescence intensity at the cell surface versus that in the intracellular compartment indicated that receptor internalization only occurred in cells co-expressing G protein-coupled receptor kinase 2. These results indicate that the lateral movement of receptors and receptor internalization are not directly coupled. Thus, we believe that live imaging of GPCRs using spinning disk confocal image analysis constitutes a powerful tool to study receptor dynamics.
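
One simple way to implement a photobleaching-corrected analysis, assuming mono-exponential bleaching, is to estimate the decay constant from a reference intensity trace and divide it out of each region-of-interest trace. This is a hedged sketch of the general idea, not the authors' exact procedure:

```python
import math

def bleaching_rate(times, reference_intensity):
    """Least-squares slope of log(intensity) vs time, assuming
    mono-exponential photobleaching I(t) = I0 * exp(-k t)."""
    n = len(times)
    logs = [math.log(i) for i in reference_intensity]
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    num = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return -num / den  # decay constant k

def correct_trace(times, roi_intensity, k):
    """Divide out the estimated bleaching envelope."""
    return [i / math.exp(-k * t) for t, i in zip(times, roi_intensity)]

# Synthetic trace: a constant true signal of 100 bleached with k = 0.02 /s
ts = list(range(0, 50, 2))
raw = [100 * math.exp(-0.02 * t) for t in ts]
k_hat = bleaching_rate(ts, raw)
corrected = correct_trace(ts, raw, k_hat)
```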

  8. Coordinated approaches to quantify long-term ecosystem dynamics in response to global change

    Science.gov (United States)

    Yiqi Luo; Jerry Melillo; Shuli Niu; Claus Beier; James S. Clark; Aime E.T. Classen; Eric Dividson; Jeffrey S. Dukes; R. Dave Evans; Christopher B. Field; Claudia I. Czimczik; Michael Keller; Bruce A. Kimball; Lara M. Kueppers; Richard J. Norby; Shannon L. Pelini; Elise Pendall; Edward Rastetter; Johan Six; Melinda Smith; Mark G. Tjoelker; Margaret S. Torn

    2011-01-01

    Many serious ecosystem consequences of climate change will take decades or even centuries to emerge. Long-term ecological responses to global change are strongly regulated by slow processes, such as changes in species composition, carbon dynamics in soil and by long-lived plants, and accumulation of nutrient capitals. Understanding and predicting these processes...

  9. Detecting and quantifying land use/land cover dynamics in Wadla ...

    African Journals Online (AJOL)

    A study was conducted in Wadla Delanta Massif to investigate land use/cover dynamics over the last four decades (1973-2014) using satellite images (1973 MSS, 1995 TM and 2014 ETM+). Global positioning system ... in the study area. Keywords: GIS, Image classification, Remote sensing, Supervised classification ...

  10. Agent-Based Model to Study and Quantify the Evolution Dynamics of Android Malware Infection

    Directory of Open Access Journals (Sweden)

    Juan Alegre-Sanahuja

    2014-01-01

    Full Text Available In recent years, the number of malware Apps that users download to their devices has risen. In this paper, we propose an agent-based model to quantify the Android malware infection evolution, modeling the behavior of the users and the different markets where the users may download Apps. The model predicts the number of infected smartphones depending on the type of malware. Additionally, we estimate the cost that users incur when malware is on their devices. We are able to analyze which part is more critical: the users, giving indiscriminate permissions to the Apps or not protecting their devices with antivirus software, or the Android platform, due to the vulnerabilities of Android devices that permit them to be rooted. We focus on the community of Valencia, Spain, although the obtained results can be extrapolated to other places where the number of Android smartphones remains fairly stable.
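
A toy version of such an agent-based infection model can be sketched as follows. All parameter values and the update rule are illustrative assumptions, not the calibrated model from the paper:

```python
import random

def simulate_infection(n_users=1000, days=30, p_download=0.3,
                       p_malicious=0.02, p_antivirus=0.4, seed=1):
    """Toy agent-based sketch: each day a user may download an app;
    with probability p_malicious the app carries malware, and users
    running antivirus (fraction p_antivirus) block the infection.
    Returns the cumulative infected count per day."""
    rng = random.Random(seed)
    protected = [rng.random() < p_antivirus for _ in range(n_users)]
    infected = [False] * n_users
    history = []
    for _ in range(days):
        for u in range(n_users):
            if not infected[u] and rng.random() < p_download:
                if rng.random() < p_malicious and not protected[u]:
                    infected[u] = True
        history.append(sum(infected))
    return history

history = simulate_infection()
```

Varying `p_antivirus` versus `p_malicious` lets one probe which side, user protection or platform exposure, dominates the infection curve.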

  11. Quantifying hyporheic exchange dynamics in a highly regulated large river reach.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Zhou, T; Huang, M; Hou, Z; Bao, J; Arntzen, E; Mackley, R; Harding, S; Titzler, S; Murray, C; Perkins, W; Chen, X; Stegen, J; Thorne, P; Zachara, J

    2017-03-01

    Hyporheic exchange is an important mechanism taking place in riverbanks and riverbed sediments, where river water and shallow groundwater mix and interact with each other. The direction, magnitude, and residence time of the hyporheic flux that penetrates the river bed are critical for biogeochemical processes such as carbon and nitrogen cycling, and biodegradation of organic contaminants. Many approaches including field measurements and numerical methods have been developed to quantify the hyporheic exchanges in relatively small rivers. However, the spatial and temporal distributions of hyporheic exchanges in a large, regulated river reach remain less explored due to the large spatial domains, complexity of geomorphologic features and subsurface properties, and the great pressure gradient variations at the riverbed created by dam operations.

  12. Coordinated approaches to quantify long-term ecosystem dynamics in response to global change

    DEFF Research Database (Denmark)

    Liu, Y.; Melillo, J.; Niu, S.

    2011-01-01

    Many serious ecosystem consequences of climate change will take decades or even centuries to emerge. Long-term ecological responses to global change are strongly regulated by slow processes, such as changes in species composition, carbon dynamics in soil and by long-lived plants, and accumulation ... a coordinated approach that combines long-term, large-scale global change experiments with process studies and modeling. Long-term global change manipulative experiments, especially in high-priority ecosystems such as tropical forests and high-latitude regions, are essential to maximize information gain ... to be the most effective strategy to gain the best information on long-term ecosystem dynamics in response to global change.

  13. Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach

    Science.gov (United States)

    Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan

    2017-05-01

    Dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type and governed by a set of differential-algebraic equations (DAEs) is studied. The classical approach of the multiple scales method (MSM) in time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one- and two degrees-of-freedom. The approximate analytical solutions have been verified by numerical simulations.

  14. Quantifying non-ergodic dynamics of force-free granular gases.

    Science.gov (United States)

    Bodrova, Anna; Chechkin, Aleksei V; Cherstvy, Andrey G; Metzler, Ralf

    2015-09-14

    Brownian motion is ergodic in the Boltzmann-Khinchin sense that long time averages of physical observables such as the mean squared displacement provide the same information as the corresponding ensemble average, even at out-of-equilibrium conditions. This property is the fundamental prerequisite for single particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a violation of ergodicity in this Boltzmann-Khinchin sense as well as distinct ageing of the system. Such granular gases comprise materials such as dilute gases of stones, sand, various types of powders, or large molecules, and their mixtures are ubiquitous in Nature and technology, in particular in Space. We treat, depending on the physical-chemical properties of the inter-particle interaction upon pair collisions, both a constant and a velocity-dependent (viscoelastic) restitution coefficient ε. Moreover, we compare the granular gas dynamics with an effective single particle stochastic model based on an underdamped Langevin equation with time dependent diffusivity. We find that both models share the same behaviour of the ensemble mean squared displacement (MSD) and the velocity correlations in the limit of weak dissipation. Qualitatively, the reported non-ergodic behaviour is generic for granular gases with any realistic dependence of ε on the impact velocity of particles.
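
The Boltzmann-Khinchin notion of ergodicity can be checked numerically by comparing ensemble-averaged and time-averaged mean squared displacements. For ordinary Brownian motion, sketched here with a plain Gaussian random walk (an illustration, not the granular-gas model itself), the two averages agree:

```python
import random

def trajectory(steps, rng):
    """1-D unbiased Gaussian random walk: a discrete stand-in for
    Brownian motion with unit diffusion per step."""
    x, path = 0.0, [0.0]
    for _ in range(steps):
        x += rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def ensemble_msd(paths, lag):
    """Average squared displacement at a fixed lag over many
    independent trajectories."""
    return sum(p[lag] ** 2 for p in paths) / len(paths)

def time_averaged_msd(path, lag):
    """Sliding-window average along one long trajectory, as used in
    single particle tracking."""
    n = len(path) - lag
    return sum((path[t + lag] - path[t]) ** 2 for t in range(n)) / n

rng = random.Random(7)
paths = [trajectory(1000, rng) for _ in range(200)]
lag = 10
e_msd = ensemble_msd(paths, lag)          # both should be close to lag = 10
t_msd = time_averaged_msd(paths[0], lag)  # for this ergodic process
```

For the granular gases of the paper, the analogous time average and ensemble average would disagree; that gap is the ergodicity breaking being quantified.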

  15. Developing stochastic epidemiological models to quantify the dynamics of infectious diseases in domestic livestock.

    Science.gov (United States)

    MacKenzie, K; Bishop, S C

    2001-08-01

    A stochastic model describing disease transmission dynamics for a microparasitic infection in a structured domestic animal population is developed and applied to hypothetical epidemics on a pig farm. Rational decision making regarding appropriate control strategies for infectious diseases in domestic livestock requires an understanding of the disease dynamics and risk profiles for different groups of animals. This is best achieved by means of stochastic epidemic models. Methodologies are presented for 1) estimating the probability of an epidemic, given the presence of an infected animal, whether this epidemic is major (requires intervention) or minor (dies out without intervention), and how the location of the infected animal on the farm influences the epidemic probabilities; 2) estimating the basic reproductive ratio, R0 (i.e., the expected number of secondary cases on the introduction of a single infected animal) and the variability of the estimate of this parameter; and 3) estimating the total proportion of animals infected during an epidemic and the total proportion infected at any point in time. The model can be used for assessing impact of altering farm structure on disease dynamics, as well as disease control strategies, including altering farm structure, vaccination, culling, and genetic selection.
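
Methodology 1 above, the probability that an introduced infection sparks a major epidemic, can be illustrated with a standard branching-process approximation. Assuming Poisson-distributed secondary cases with mean R0 (an assumption for illustration, not the paper's farm-structured model), the extinction probability is a fixed point of the offspring generating function:

```python
import math

def major_epidemic_probability(r0, tol=1e-12):
    """Branching-process sketch: with Poisson-distributed secondary
    cases of mean R0, the minor-epidemic (extinction) probability q
    solves q = exp(R0 * (q - 1)); a major epidemic occurs with
    probability 1 - q. Below threshold (R0 <= 1) extinction is certain."""
    if r0 <= 1.0:
        return 0.0
    q = 0.5
    while True:
        q_next = math.exp(r0 * (q - 1.0))
        if abs(q_next - q) < tol:
            return 1.0 - q_next
        q = q_next
```

For R0 = 2 this yields a major-epidemic probability of roughly 0.8, showing why even a well-above-threshold introduction can still die out without intervention.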

  16. Quantifying suspended sediment dynamics in mega deltas using remote sensing data: A case study of the Mekong floodplains

    Science.gov (United States)

    Dang, Thanh Duc; Cochrane, Thomas A.; Arias, Mauricio E.

    2018-06-01

    Temporal and spatial concentrations of suspended sediment in floodplains are difficult to quantify because in situ measurements can be logistically complex, time consuming and costly. In this research, satellite imagery with long temporal and large spatial coverage (Landsat TM/ETM+) was used to complement in situ suspended sediment measurements to reflect sediment dynamics in a large (70,000 km²) floodplain. Instead of using a single spectral band from Landsat, a Principal Component Analysis was applied to obtain uncorrelated reflectance values for five bands of Landsat TM/ETM+. Significant correlations between the scores of the 1st principal component and the values of continuously gauged suspended sediment concentration, shown via high coefficients of determination of sediment rating curves (R² ranging from 0.66 to 0.92), permit the application of satellite images to quantify spatial and temporal sediment variation in the Mekong floodplains. Estimated suspended sediment maps show that hydraulic regimes at Chaktomuk (Cambodia), where the Mekong, Bassac, and Tonle Sap rivers diverge, determine the amount of seasonal sediment supplies to the Mekong Delta. The development of flood prevention systems to allow for three rice crops a year in the Vietnam Mekong Delta significantly reduces localized flooding, but also prevents sediment (source of nutrients) from entering fields. A direct consequence of this is the need to apply more artificial fertilizers to boost agricultural productivity, which may trigger environmental problems. Overall, remote sensing is shown to be an effective tool to understand temporal and spatial sediment dynamics in large floodplains.
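
A sediment rating curve of the kind described, regressing gauged suspended sediment concentration on a principal-component score, reduces to ordinary least squares with an R² diagnostic. The sketch below uses entirely hypothetical PC1 scores and concentrations:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x, returning (a, b, R^2),
    as used for a rating curve of gauged concentration vs PC1 score."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    sxx = sum((xi - xm) ** 2 for xi in x)
    b = sxy / sxx
    a = ym - b * xm
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical PC1 scores vs gauged suspended sediment concentration (mg/L):
pc1 = [0.1, 0.4, 0.9, 1.3, 1.8, 2.2]
ssc = [12.0, 19.0, 33.0, 41.0, 55.0, 64.0]
a, b, r2 = linear_fit(pc1, ssc)
```

With a calibrated (a, b), the same relation applied pixel-by-pixel to PC1 scores of an image yields the estimated suspended sediment maps described in the abstract.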

  17. Quantifying temporal trends in fisheries abundance using Bayesian dynamic linear models: A case study of riverine Smallmouth Bass populations

    Science.gov (United States)

    Schall, Megan K.; Blazer, Vicki S.; Lorantas, Robert M.; Smith, Geoffrey; Mullican, John E.; Keplinger, Brandon J.; Wagner, Tyler

    2018-01-01

    Detecting temporal changes in fish abundance is an essential component of fisheries management. Because of the need to understand short‐term and nonlinear changes in fish abundance, traditional linear models may not provide adequate information for management decisions. This study highlights the utility of Bayesian dynamic linear models (DLMs) as a tool for quantifying temporal dynamics in fish abundance. To achieve this goal, we quantified temporal trends of Smallmouth Bass Micropterus dolomieu catch per effort (CPE) from rivers in the mid‐Atlantic states, and we calculated annual probabilities of decline from the posterior distributions of annual rates of change in CPE. We were interested in annual declines because of recent concerns about fish health in portions of the study area. In general, periods of decline were greatest within the Susquehanna River basin, Pennsylvania. The declines in CPE began in the late 1990s—prior to observations of fish health problems—and began to stabilize toward the end of the time series (2011). In contrast, many of the other rivers investigated did not have the same magnitude or duration of decline in CPE. Bayesian DLMs provide information about annual changes in abundance that can inform management and are easily communicated with managers and stakeholders.
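
The simplest Bayesian DLM, a local-level model, can be filtered with a scalar Kalman recursion; year-to-year changes in the filtered level play the role of the annual rates of change whose posteriors the authors summarize. This is a generic sketch with made-up CPE values and fixed variances, not the authors' fitted model:

```python
def local_level_filter(y, obs_var=1.0, state_var=0.1):
    """Kalman filter for the local-level DLM:
        level_t = level_{t-1} + w_t,   y_t = level_t + v_t,
    with w_t ~ N(0, state_var) and v_t ~ N(0, obs_var).
    Returns the filtered level means."""
    m, c = y[0], obs_var  # initialize from the first observation
    levels = [m]
    for obs in y[1:]:
        r = c + state_var          # prior variance after the walk step
        k = r / (r + obs_var)      # Kalman gain
        m = m + k * (obs - m)      # updated level mean
        c = (1.0 - k) * r          # updated level variance
        levels.append(m)
    return levels

# Hypothetical declining catch-per-effort series:
cpe = [10.0, 9.6, 9.1, 8.0, 7.2, 6.8, 6.9, 7.1]
trend = local_level_filter(cpe)
```

In a fully Bayesian treatment the posterior of each annual change, rather than a point estimate, gives the annual probability of decline.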

  18. Critical Zone Co-dynamics: Quantifying Interactions between Subsurface, Land Surface, and Vegetation Properties Using UAV and Geophysical Approaches

    Science.gov (United States)

    Dafflon, B.; Leger, E.; Peterson, J.; Falco, N.; Wainwright, H. M.; Wu, Y.; Tran, A. P.; Brodie, E.; Williams, K. H.; Versteeg, R.; Hubbard, S. S.

    2017-12-01

    Improving understanding and modelling of terrestrial systems requires advances in measuring and quantifying interactions among subsurface, land surface and vegetation processes over relevant spatiotemporal scales. Such advances are important to quantify natural and managed ecosystem behaviors, as well as to predict how watershed systems respond to increasingly frequent hydrological perturbations, such as droughts, floods and early snowmelt. Our study focuses on the joint use of UAV-based multi-spectral aerial imaging, ground-based geophysical tomographic monitoring (incl., electrical and electromagnetic imaging) and point-scale sensing (soil moisture sensors and soil sampling) to quantify interactions between above and below ground compartments of the East River Watershed in the Upper Colorado River Basin. We evaluate linkages between physical properties (incl. soil composition, soil electrical conductivity, soil water content), metrics extracted from digital surface and terrain elevation models (incl., slope, wetness index) and vegetation properties (incl., greenness, plant type) in a 500 x 500 m hillslope-floodplain subsystem of the watershed. Data integration and analysis is supported by numerical approaches that simulate the control of soil and geomorphic characteristic on hydrological processes. Results provide an unprecedented window into critical zone interactions, revealing significant below- and above-ground co-dynamics. Baseline geophysical datasets provide lithological structure along the hillslope, which includes a surface soil horizon, underlain by a saprolite layer and the fractured Mancos shale. Time-lapse geophysical data show very different moisture dynamics in various compartments and locations during the winter and growing season. Integration with aerial imaging reveals a significant linkage between plant growth and the subsurface wetness, soil characteristics and the topographic gradient. The obtained information about the organization and

  19. Quantifying Network Dynamics and Information Flow Across Chinese Social Media During the African Ebola Outbreak.

    Science.gov (United States)

    Feng, Shihui; Hossain, Liaquat; Crawford, John W; Bossomaier, Terry

    2018-02-01

    Social media provides us with a new platform on which to explore how the public responds to disasters and, of particular importance, how they respond to the emergence of infectious diseases such as Ebola. Provided it is appropriately informed, social media offers a potentially powerful means of supporting both early detection and effective containment of communicable diseases, which is essential for improving disaster medicine and public health preparedness. The 2014 West African Ebola outbreak is a particularly relevant contemporary case study on account of the large number of annual arrivals from Africa, including Chinese employees engaged in projects in Africa. Weibo (Weibo Corp, Beijing, China) is China's most popular social media platform, with more than 2 billion users and over 300 million daily posts, and offers great opportunity to monitor early detection and promotion of public health awareness. We present a proof-of-concept study of a subset of Weibo posts during the outbreak demonstrating potential and identifying priorities for improving the efficacy and accuracy of information dissemination. We quantify the evolution of the social network topology within Weibo relating to the efficacy of information sharing. We show how relatively few nodes in the network can have a dominant influence over both the quality and quantity of the information shared. These findings make an important contribution to disaster medicine and public health preparedness from theoretical and methodological perspectives for dealing with epidemics. (Disaster Med Public Health Preparedness. 2018;12:26-37).
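
The dominance of a few nodes over information flow can be quantified crudely from an edge list alone. The sketch below, using a small hypothetical repost network, measures the fraction of links that touch the highest-degree accounts:

```python
def top_node_share(edges, top_n=2):
    """Fraction of all repost links that touch the top_n highest-degree
    nodes: a crude measure of how few accounts dominate sharing."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    top = set(sorted(degree, key=degree.get, reverse=True)[:top_n])
    touched = sum(1 for u, v in edges if u in top or v in top)
    return touched / len(edges)

# Hypothetical repost network in which one account dominates:
edges = [("hub", x) for x in "abcdefgh"] + [("a", "b"), ("c", "d")]
share = top_node_share(edges, top_n=1)
```

Here a single node touches 80% of all links, the kind of concentration the abstract reports for Weibo's Ebola-related posts.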

  20. Can segmental model reductions quantify whole-body balance accurately during dynamic activities?

    Science.gov (United States)

    Jamkrajang, Parunchaya; Robinson, Mark A; Limroongreungrat, Weerawat; Vanrenterghem, Jos

    2017-07-01

    When investigating whole-body balance in dynamic tasks, adequately tracking the whole-body centre of mass (CoM) or derivatives such as the extrapolated centre of mass (XCoM) can be crucial but add considerable measurement efforts. The aim of this study was to investigate whether reduced kinematic models can still provide adequate CoM and XCoM representations during dynamic sporting tasks. Seventeen healthy recreationally active subjects (14 males and 3 females; age, 24.9 ± 3.2 years; height, 177.3 ± 6.9 cm; body mass, 72.6 ± 7.0 kg) participated in this study. Participants completed three dynamic movements, jumping, kicking, and overarm throwing. Marker-based kinematic data were collected with 10 optoelectronic cameras at 250 Hz (Oqus Qualisys, Gothenburg, Sweden). The differences between (X)CoM from a full-body model (gold standard) and (X)CoM representations based on six selected model reductions were evaluated using a Bland-Altman approach. A threshold difference was set at ±2 cm to help the reader interpret which model can still provide an acceptable (X)CoM representation. Antero-posterior and medio-lateral displacement profiles of the CoM representation based on lower limbs, trunk and upper limbs showed strong agreement, slightly reduced for lower limbs and trunk only. Representations based on lower limbs only showed less strong agreement, particularly for XCoM in kicking. Overall, our results provide justification of the use of certain model reductions for specific needs, saving measurement effort whilst limiting the error of tracking (X)CoM trajectories in the context of whole-body balance investigation. Copyright © 2017 Elsevier B.V. All rights reserved.
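
A whole-body CoM representation is just a mass-weighted mean of segment positions, so the effect of dropping segments can be checked directly. The mass fractions and coordinates below are invented for illustration and are not anthropometric values from the study:

```python
def centre_of_mass(segments):
    """Whole-body CoM as the mass-weighted mean of segment CoM positions.
    segments: list of (mass_fraction, (x, y, z)) tuples."""
    total = sum(m for m, _ in segments)
    return tuple(sum(m * p[i] for m, p in segments) / total
                 for i in range(3))

# Hypothetical full model vs a reduced (trunk + legs) model:
full = [(0.50, (0.0, 0.0, 1.0)),   # trunk + head
        (0.10, (0.1, 0.2, 1.2)),   # arms
        (0.40, (0.0, 0.0, 0.5))]   # legs
reduced = [s for s in full if s[1] != (0.1, 0.2, 1.2)]  # drop the arms
com_full = centre_of_mass(full)
com_red = centre_of_mass(reduced)
error = max(abs(a - b) for a, b in zip(com_full, com_red))
```

Comparing `error` across movements and reductions against a tolerance threshold mirrors the Bland-Altman evaluation described in the abstract.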

  1. Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.

    Science.gov (United States)

    Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J

    2017-07-01

    Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. In this paper, we present kinematics-based metrics that can be used to identify whether a physical interaction occurred between two people. We present a simple Activity of Daily Living (ADL) task which involves a simple interaction. We show that we can use these metrics to successfully identify interactions.

  2. Quantifying collective effervescence: Heart-rate dynamics at a fire-walking ritual

    DEFF Research Database (Denmark)

    Xygalatas, Dimitris; Konvalinka, Ivana; Roepstorff, Andreas

    2011-01-01

    Collective rituals are ubiquitous and resilient features of all known human cultures. They are also functionally opaque, costly, and sometimes dangerous. Social scientists have speculated that collective rituals generate benefits in excess of their costs by reinforcing social bonding and group solidarity, yet quantitative evidence for these conjectures is scarce. Our recent study measured the physiological effects of a highly arousing Spanish fire-walking ritual, revealing shared patterns in heart-rate dynamics between participants and related spectators. We briefly describe our results ...

  3. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2010-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  5. Dynamic scaling, data-collapse and self-similarity in Barabasi-Albert networks

    Energy Technology Data Exchange (ETDEWEB)

    Hassan, M Kamrul; Pavel, Neeaj I [Theoretical Physics Group, Department of Physics, University of Dhaka, Dhaka 1000 (Bangladesh); Hassan, M Zahedul, E-mail: khassan@univdhaka.edu [Institute of Computer Science, Bangladesh Atomic Energy Commission, Dhaka 1000 (Bangladesh)

    2011-04-29

    In this paper, we show that if each node of the Barabasi-Albert (BA) network is characterized by the generalized degree q, i.e. the product of its degree k and the square root of its birth time, then the distribution function F(q, t) exhibits dynamic scaling F(q, t → ∞) ~ t^(-1/2) φ(q/t^(1/2)), where φ(x) is the scaling function. We verified this by showing that a series of distinct F(q, t) versus q curves for different network sizes N collapse onto a single universal curve if we instead plot t^(1/2) F(q, t) versus q/t^(1/2). Finally, we show that the BA network falls into two universality classes depending on whether new nodes arrive with a single edge (m = 1) or with multiple edges (m > 1).
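
The data-collapse test described in the abstract can be demonstrated on synthetic data. Using an assumed scaling function φ(x) = exp(−x) (purely illustrative; the paper obtains φ from BA-network simulations), curves taken at very different times land on the same point after rescaling:

```python
import math

def f(q, t):
    """Synthetic distribution obeying the dynamic scaling form
    F(q, t) = t^(-1/2) * phi(q / t^(1/2)), with phi(x) = exp(-x)
    chosen here only for illustration."""
    return t ** -0.5 * math.exp(-q / t ** 0.5)

def rescale(q, t):
    """Plot coordinates (q / t^(1/2), t^(1/2) F(q, t)) under which
    all curves should collapse onto the single function phi."""
    return q / t ** 0.5, t ** 0.5 * f(q, t)

# Two very different 'network ages' collapse onto the same point of phi:
x1, y1 = rescale(5.0, 100.0)
x2, y2 = rescale(50.0, 10000.0)
```

Plotting many (q, t) pairs this way reproduces the single universal curve the authors describe.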

  6. EpiTools: An Open-Source Image Analysis Toolkit for Quantifying Epithelial Growth Dynamics.

    Science.gov (United States)

    Heller, Davide; Hoppe, Andreas; Restrepo, Simon; Gatti, Lorenzo; Tournier, Alexander L; Tapon, Nicolas; Basler, Konrad; Mao, Yanlan

    2016-01-11

    Epithelia grow and undergo extensive rearrangements to achieve their final size and shape. Imaging the dynamics of tissue growth and morphogenesis is now possible with advances in time-lapse microscopy, but a true understanding of their complexities is limited by the automated image analysis tools available to extract quantitative data. To overcome such limitations, we have designed a new open-source image analysis toolkit called EpiTools. It provides user-friendly graphical user interfaces for accurately segmenting and tracking the contours of cell membrane signals obtained from 4D confocal imaging. It is designed for a broad audience, especially biologists with no computer-science background. Quantitative data extraction is integrated into a larger bioimaging platform, Icy, to increase the visibility and usability of our tools. We demonstrate the usefulness of EpiTools by analyzing Drosophila wing imaginal disc growth, revealing previously overlooked properties of this dynamic tissue, such as the patterns of cellular rearrangements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Aging dynamics at the martensitic phase transition of Au-Cd quantified by XPCS

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, L.; Waldorf, M.; Klemradt, U. [II. Physik. Inst., RWTH Aachen Univ. (Germany); Gutt, C.; Gruebel, G. [HASYLAB, DESY, Hamburg (Germany); Madsen, A. [ESRF, Grenoble (France); Finlayson, T.R. [School of Physics, Univ. of Melbourne (Australia)

    2009-07-01

    Aging phenomena of martensites have been discussed controversially for decades. Although they were successfully associated with defect-related diffusion processes in the low temperature phase (Ren and Otsuka, Nature 389, 579 (1997)), so far no experiments have directly addressed the characteristic time scales associated with nanoscopic structural changes. Using a Au50.5Cd49.5 single crystal, X-ray photon correlation spectroscopy (XPCS) measurements in diffraction geometry were carried out at ESRF beamline ID10A. High temperature resolution (0.1 K) and stability (±4 mK) were employed to resolve potential slow dynamics in the vicinity of the phase transition; 2D scattering data close to the (001) Bragg reflection were recorded with a sampling time into the detector of 0.2 s at 1.4 s intervals. For each temperature, one-time correlation functions show significant dynamics only near T_c, being fastest at the transition, in disagreement with any critical slowing-down scenario. Two-time correlation functions reveal a generally non-stationary behavior and also avalanches in the sample. Characteristic timescales were determined as a function of the aging time by calculating one-time correlation functions at a specific age. Fits of Kohlrausch-Williams-Watts functions reveal time constants ranging from ~400 s to over 6000 s at the largest aging times.

  8. Quantifying the behavior of price dynamics at opening time in stock market

    Science.gov (United States)

    Ochiai, Tomoshiro; Takada, Hideyuki; Nacher, Jose C.

    2014-11-01

    The availability of huge volumes of financial data has offered the possibility of understanding markets as complex systems characterized by several stylized facts. Here we first show that the time evolution of the Japan’s Nikkei stock average index (Nikkei 225) futures follows the resistance and breaking-acceleration effects when the complete time series data is analyzed. However, in stock markets there are periods where no regular trades occur between the close of the market on one day and the next day’s open. To examine these time gaps we decompose the time series data into opening time and intermediate time. Our analysis indicates that for the intermediate time, both the resistance and the breaking-acceleration effects are still observed. However, for the opening time there are almost no resistance and breaking-acceleration effects, and volatility is always constantly high. These findings highlight unique dynamic differences between stock markets and the forex market and suggest that current risk management strategies may need to be revised to address the absence of these dynamic effects at the opening time.

  9. Quantifying seasonal dynamics of canopy structure and function using inexpensive narrowband spectral radiometers

    Science.gov (United States)

    Vierling, L. A.; Garrity, S. R.; Campbell, G.; Coops, N. C.; Eitel, J.; Gamon, J. A.; Hilker, T.; Krofcheck, D. J.; Litvak, M. E.; Naupari, J. A.; Richardson, A. D.; Sonnentag, O.; van Leeuwen, M.

    2011-12-01

    Increasing the spatial and temporal density of automated environmental sensing networks is necessary to quantify shifts in plant structure (e.g., leaf area index) and function (e.g., photosynthesis). Improving detection sensitivity can facilitate a mechanistic understanding by better linking plant processes to environmental change. Spectral radiometer measurements can be highly useful for tracking plant structure and function from diurnal to seasonal time scales and calibrating and validating satellite- and aircraft-based spectral measurements. However, dense ground networks of such instruments are challenging to establish due to the cost and complexity of automated instrument deployment. We therefore developed simple to operate, lightweight and inexpensive narrowband (~10 nm bandwidth) spectral instruments capable of continuously measuring four to six discrete bands that have proven capacity to describe key physiological processes and structural features of plant canopies. These bands are centered at 530, 570, 675, 800, 880, and 970 nm to enable calculation of the physiological reflectance index (PRI), normalized difference vegetation index (NDVI), green NDVI (gNDVI), and water band index (WBI) collected above and within vegetation canopies. To date, measurements have been collected above grassland, semi-arid shrub steppe, piñon-juniper woodland, dense conifer forest, mixed deciduous-conifer forest, and cropland canopies, with additional measurements collected along vertical transects through a temperate conifer rainforest. Findings from this work indicate not only that key shifts in plant phenology, physiology, and structure can be captured using such instruments, but that the temporally dense nature of the measurements can help to disentangle heretofore unreported complexities of simultaneous phenological and structural change on canopy reflectance.
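
The band combinations listed in the abstract map directly onto the standard index formulas. The reflectance values in the example are illustrative only, and the use of the instrument's 880 nm band in place of the conventional 900 nm WBI band is an assumption:

```python
def ndvi(r800, r675):
    """Normalized difference vegetation index from the 800/675 nm bands."""
    return (r800 - r675) / (r800 + r675)

def pri(r530, r570):
    """Physiological (photochemical) reflectance index from the
    530/570 nm bands."""
    return (r530 - r570) / (r530 + r570)

def wbi(r880, r970):
    """Water band index, approximated here from the instrument's
    880/970 nm band pair."""
    return r880 / r970

# Illustrative reflectances for a healthy green canopy:
indices = {"NDVI": ndvi(0.45, 0.05),
           "PRI": pri(0.12, 0.11),
           "WBI": wbi(0.44, 0.40)}
```

Tracking these indices continuously is what lets such instruments separate fast physiological signals (PRI) from slower structural change (NDVI).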

  10. Cordilleran forest scaling dynamics and disturbance regimes quantified by aerial lidar

    Science.gov (United States)

    Swetnam, Tyson L.

    Semi-arid forests are in a period of rapid transition as a result of unprecedented landscape scale fires, insect outbreaks, drought, and anthropogenic land use practices. Understanding how historically episodic disturbances led to coherent forest structural and spatial patterns that promoted resilience and resistance is a critical part of addressing change. Here my coauthors and I apply metabolic scaling theory (MST) to examine scaling behavior and structural patterns of semi-arid conifer forests in Arizona and New Mexico. We conceptualize a linkage to mechanistic drivers of forest assembly that incorporates the effects of low-intensity disturbance, and physiologic and resource limitations as an extension of MST. We use both aerial LiDAR data and field observations to quantify changes in forest structure from the sub-meter to landscape scales. We found: (1) semi-arid forest structure exhibits MST-predicted behaviors regardless of disturbance and that MST can help to quantitatively measure the level of disturbance intensity in a forest, (2) the application of a power law to a forest overstory frequency distribution can help predict understory presence/absence, (3) local indicators of spatial association can help to define first order effects (e.g. topographic changes) and map where recent disturbances (e.g. logging and fire) have altered forest structure. Lastly, we produced a comprehensive set of above-ground biomass and carbon models for five distinct forest types and ten common species of the southwestern US that are meant for use in aerial LiDAR forest inventory projects. This dissertation presents both a conceptual framework and applications for investigating local scales (stands of trees) up to entire ecosystems for diagnosis of current carbon balances, levels of departure from historical norms, and ecological stability. These tools and models will become more important as we prepare our ecosystems for a future characterized by increased climatic variability.
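Fitting a power law to an overstory size-frequency distribution, as in finding (2) above, can be sketched with the standard maximum-likelihood (Hill) estimator for the tail exponent. This is a generic illustration of power-law fitting, not the dissertation's own fitting procedure; `xmin` (the lower cutoff) is an assumed input.

```python
import math

def powerlaw_alpha_mle(sizes, xmin):
    """Maximum-likelihood (Hill) estimate of alpha for a continuous
    power-law tail p(x) ~ x**(-alpha), restricted to x >= xmin."""
    tail = [x for x in sizes if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)
```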

  11. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Directory of Open Access Journals (Sweden)

    Assaf Zaritsky

    Full Text Available Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional

  12. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Science.gov (United States)

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single
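The core idea of patch-based separation of multi-cellular from background regions can be illustrated with a much simpler stand-in than the published method: label a patch as cellular when its local intensity variance is high (cellular regions are textured; background is flat). This is a toy sketch only; MultiCellSeg itself uses an SVM cascade with graph-cut refinement, which is not reproduced here.

```python
import numpy as np

def segment_multicellular(img, patch=8, var_thresh=1e-4):
    """Toy patch classifier for bright-field images: a patch is 'cellular'
    when its intensity variance exceeds var_thresh. Returns a boolean mask
    with one entry per non-overlapping patch."""
    h, w = img.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(h // patch):
        for j in range(w // patch):
            block = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            mask[i, j] = block.var() > var_thresh
    return mask
```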

  13. Quantifying dynamic changes in plantar pressure gradient in diabetics with peripheral neuropathy

    Directory of Open Access Journals (Sweden)

    Chi-Wen Lung

    2016-07-01

    Full Text Available Diabetic foot ulcers remain one of the most serious complications of diabetes. Peak plantar pressure (PPP) and peak pressure gradient (PPG) during walking have been shown to be associated with the development of diabetic foot ulcers. To gain further insight into the mechanical etiology of diabetic foot ulcers, examination of the pressure gradient angle (PGA) has been recently proposed. The PGA quantifies directional variation or orientation of the pressure gradient during walking, and provides a measure of whether pressure gradient patterns are concentrated or dispersed along the plantar surface. We hypothesized that diabetics at risk of foot ulceration would have smaller PGA in key plantar regions, suggesting less movement of the pressure gradient over time. A total of 27 participants were studied, including 19 diabetics with peripheral neuropathy and 8 non-diabetic control subjects. A foot pressure measurement system was used to measure plantar pressures during walking. PPP, PPG and PGA were calculated for four foot regions - 1st toe (T1), 1st metatarsal head (M1), 2nd metatarsal head (M2), and heel (HL). Consistent with prior studies, PPP and PPG were significantly larger in the diabetic group compared to non-diabetic controls in the T1 and M1 regions, but not M2 or HL. For example, PPP was 165% (P=0.02) and PPG was 214% (P<0.001) larger in T1. PGA was found to be significantly smaller in the diabetic group in T1 (46%, P=0.04), suggesting a more concentrated pressure gradient pattern under the toe. The proposed PGA may improve our understanding of the role of pressure gradient on the risk of diabetic foot ulcers.
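The three quantities can be sketched from a time series of plantar pressure maps. The PGA definition below (angular spread of the peak-gradient direction over time) is a simplified reading of the measure described above, not the authors' exact formulation.

```python
import numpy as np

def ppp_ppg_pga(frames):
    """frames: sequence of 2-D plantar pressure maps over a step.
    Returns peak plantar pressure (PPP), peak pressure gradient (PPG),
    and a simplified pressure gradient angle (PGA): the angular spread,
    in degrees, of the peak spatial-gradient direction across frames."""
    ppp = max(f.max() for f in frames)
    mags, angles = [], []
    for f in frames:
        gy, gx = np.gradient(f)          # spatial pressure gradient
        mag = np.hypot(gx, gy)
        k = np.unravel_index(np.argmax(mag), mag.shape)
        mags.append(mag[k])
        angles.append(np.arctan2(gy[k], gx[k]))
    ppg = max(mags)
    pga = np.degrees(max(angles) - min(angles))
    return ppp, ppg, pga
```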

  14. Quantifying mantle structure and dynamics using plume tracing in seismic tomography

    Science.gov (United States)

    O'Farrell, K. A.; Eakin, C. M.; Jackson, M. G.; Jones, T. D.; Lekic, V.; Lithgow-Bertelloni, C. R.

    2017-12-01

    Directly linking deep mantle processes with surface features and dynamics is a complex problem. Hotspot volcanism gives us surface observables of mantle signatures, but the depth and source of the mantle plumes feeding these hotspots are highly debated. To address these issues, it is necessary to consider the entire journey of a plume through the mantle. By analyzing the behavior of mantle plumes we can constrain the vigor of mantle convection, the net rotation of the mantle and the role of thermal versus chemical anomalies as well as the bulk physical properties such as the viscosity profile. To do this, we developed a new algorithm to trace plume-like features in shear-wave (Vs) seismic tomography models based on picking local minima in the velocity and searching for continuous features with depth. We applied this method to recent tomographic models and find 60+ continuous plume conduits that are > 750 km long. Approximately a third of these can be associated with known hotspots at the surface. We analyze the morphology of these continuous conduits and infer large scale mantle flow patterns and properties. We find the largest lateral deflections in the conduits occur near the base of the lower mantle and in the upper mantle (near the thermal boundary layers). The preferred orientation of the plume deflections show large variability at all depths and indicate no net mantle rotation. Plate by plate analysis shows little agreement in deflection below particular plates, indicating these deflected features might be long lived and not caused by plate shearing. Changes in the gradient of plume deflection are inferred to correspond with viscosity contrasts in the mantle and found below the transition zone as well as at 1000 km depth. From this inferred viscosity structure, we explore the dynamics of a plume through these viscosity jumps. We also retrieve the Vs profiles for the conduits and compare with the velocity profiles predicted for different mantle adiabat
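The "continuous feature with depth" idea behind the conduit-tracing algorithm can be sketched as greedy nearest-neighbour linking of low-velocity minima between successive depth slices. This is an illustrative reconstruction of the concept, not the published algorithm; the minima-picking step and the `max_jump` tolerance are assumptions.

```python
def trace_conduits(slices, max_jump=2.0):
    """slices: list (shallow to deep) of lists of (x, y) positions of local
    velocity minima in each tomographic depth slice. Link each growing
    conduit to the nearest unused minimum in the next slice, within
    max_jump; return only conduits continuous through every slice."""
    conduits = [[p] for p in slices[0]]
    for layer in slices[1:]:
        unused = list(layer)
        for c in conduits:
            x0, y0 = c[-1]
            best = min(unused, key=lambda p: (p[0] - x0) ** 2 + (p[1] - y0) ** 2,
                       default=None)
            if best is not None and (best[0] - x0) ** 2 + (best[1] - y0) ** 2 <= max_jump ** 2:
                c.append(best)
                unused.remove(best)
    return [c for c in conduits if len(c) == len(slices)]
```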

  15. The dynamics of single protein molecules is non-equilibrium and self-similar over thirteen decades in time

    Science.gov (United States)

    Hu, Xiaohu; Hong, Liang; Dean Smith, Micholas; Neusius, Thomas; Cheng, Xiaolin; Smith, Jeremy C.

    2016-02-01

    Internal motions of proteins are essential to their function. The time dependence of protein structural fluctuations is highly complex, manifesting subdiffusive, non-exponential behaviour with effective relaxation times existing over many decades in time, from ps up to ~10² s. Here, using molecular dynamics simulations, we show that, on timescales from 10⁻¹² to 10⁻⁵ s, motions in single proteins are self-similar, non-equilibrium and exhibit ageing. The characteristic relaxation time for a distance fluctuation, such as inter-domain motion, is observation-time-dependent, increasing in a simple, power-law fashion, arising from the fractal nature of the topology and geometry of the energy landscape explored. Diffusion over the energy landscape follows a non-ergodic continuous time random walk. Comparison with single-molecule experiments suggests that the non-equilibrium self-similar dynamical behaviour persists up to timescales approaching the in vivo lifespan of individual protein molecules.
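The ageing and subdiffusion described above can be written schematically as follows; the exponents are left unspecified, since the abstract does not give their values.

```latex
% Schematic form of the ageing behaviour: the effective relaxation time of a
% distance fluctuation grows as a power of the observation time, and the
% mean-squared displacement of the fluctuating coordinate is subdiffusive.
\tau_{\mathrm{eff}}(T_{\mathrm{obs}}) \;\propto\; T_{\mathrm{obs}}^{\,\gamma},
\qquad 0 < \gamma < 1,
\qquad
\langle \Delta x^{2}(t) \rangle \;\propto\; t^{\beta}, \qquad 0 < \beta < 1 .
```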

  16. Quantifying the chiral magnetic effect from anomalous-viscous fluid dynamics

    Science.gov (United States)

    Jiang, Yin; Shi, Shuzhe; Yin, Yi; Liao, Jinfeng

    2018-01-01

    The Chiral Magnetic Effect (CME) is a macroscopic manifestation of the fundamental chiral anomaly in a many-body system of chiral fermions, and emerges as an anomalous transport current in the fluid dynamics framework. Experimental observation of the CME is of great interest and has been reported in Dirac and Weyl semimetals. Significant efforts have also been made to look for the CME in heavy ion collisions. Critically needed for such a search is the theoretical prediction for the CME signal. In this paper we report a first quantitative modeling framework, Anomalous Viscous Fluid Dynamics (AVFD), which computes the evolution of fermion currents on top of realistic bulk evolution in heavy ion collisions and simultaneously accounts for both anomalous and normal viscous transport effects. AVFD allows a quantitative understanding of the generation and evolution of CME-induced charge separation during the hydrodynamic stage, as well as its dependence on theoretical ingredients. With reasonable estimates of key parameters, the AVFD simulations provide the first phenomenologically successful explanation of the measured signal in 200 AGeV AuAu collisions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration. The work is also supported in part by the National Science Foundation under Grant No. PHY-1352368 (SS and JL), by the National Science Foundation of China under Grant No. 11735007 (JL) and by the U.S. Department of Energy under grant Contract Number No. DE- SC0012704 (BNL)/DE-SC0011090 (MIT) (YY). JL is grateful to the Institute for Nuclear Theory for hospitality during the INT-16-3 Program. The computation of this research was performed on IU’s Big Red II cluster, supported in part by Lilly Endowment, Inc. (through its support for the Indiana University Pervasive Technology Institute) and in part by the Indiana METACyt

  17. Quantifying and Modelling Long Term Sediment Dynamics in Catchments in Western Europe

    Science.gov (United States)

    Notebaert, B.; De Brue, H.; Verstraeten, G.; Broothaerts, N.

    2015-12-01

    Quantifying sediment dynamics provides insight into the driving forces and internal dynamics of the sediment cascade system. A useful tool to achieve this is the sediment budget approach, which encompasses the quantification of different sinks and sources. A Holocene time-differentiated sediment budget has been constructed for the Belgian Dijle River catchment (720 km²), based on a large set of field data. The results show how soil erosion is driven by land use changes over longer timescales. Sediment redistribution and the relative importance of the different sinks also vary over time, mainly as a result of changing land use and related landscape connectivity. However, the coarse temporal resolution typically associated with Holocene studies complicates the understanding of sub-millennial scale processes. In a second step, the field-based sediment budget was combined with a modeling approach using Watem/Sedem, a spatially distributed model that simulates soil erosion and colluvial deposition. After validation of the model calibration against the sediment budget, the model was used in a sensitivity analysis. Results confirm the overwhelming influence of human land use on both soil erosion and landscape connectivity, whereas the climatic impact is comparatively small. In addition to catchment-wide simulations, the model also served to test the relative importance of lynchets and dry valleys in different environments. Finally, the geomorphic model was used to simulate past land use, taking into account equifinality. For this purpose, a large series of hypothetical time-independent land use maps of the Dijle catchment were modeled based on a multi-objective allocation algorithm, and applied in Watem/Sedem. Modeled soil erosion and sediment deposition outcomes for each scenario were subsequently compared with the field-based record, taking into account uncertainties. As such, the model makes it possible to evaluate and select realistic land use scenarios for the Holocene.

  18. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Danny; Pollock, Sean; Keall, Paul, E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia); Greer, Peter B. [School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW 2308, Australia and Department of Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); Kim, Taeho [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006, Australia and Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23219 (United States)

    2016-05-15

    Purpose: The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. Methods: The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets at similar displacement and phase (where phase is simply used to determine whether the breathing is inhale to exhale or exhale to inhale) respiratory bins in conjunction with central k-space datasets (keyhole) acquired. External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study is the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images in a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. Results: For constant image quality, the dynamic keyhole method, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P < 0.0001), respectively, compared to the full k-space image acquisition method. Compared to the conventional keyhole and zero-filling reconstructed images with the keyhole size utilized in the dynamic keyhole
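The general keyhole idea (fresh central k-space combined with library peripheral k-space from a matched respiratory bin) can be sketched with a simple FFT-based reconstruction. Sizes and the 22% default fraction are illustrative, taken from the constant-quality result above; this is not the authors' implementation.

```python
import numpy as np

def dynamic_keyhole_reconstruct(central_kspace, library_kspace, keyhole_frac=0.22):
    """Replace the central band of rows in a library k-space frame (matched
    respiratory bin) with freshly acquired central k-space, then inverse-FFT
    to an image. Both inputs are centred (fftshifted) 2-D k-space arrays."""
    ny = library_kspace.shape[0]
    k = library_kspace.copy()
    half = max(1, int(round(ny * keyhole_frac / 2)))
    centre = ny // 2
    k[centre - half:centre + half, :] = central_kspace[centre - half:centre + half, :]
    return np.fft.ifft2(np.fft.ifftshift(k))
```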

  19. Why are large cities faster? Universal scaling and self-similarity in urban organization and dynamics

    Science.gov (United States)

    Bettencourt, L. M. A.; Lobo, J.; West, G. B.

    2008-06-01

    Cities have existed since the beginning of civilization and have always been intimately connected with humanity's cultural and technological development. Much about the human and social dynamics that take place in cities is intuitively recognizable across time, space and culture; yet we still do not have a clear-cut answer as to why cities exist or to what factors are critical to make them thrive or collapse. Here, we construct an extensive quantitative characterization of the variation of many urban indicators with city size, using large data sets for American, European and Chinese cities. We show that social and economic quantities, characterizing the creation of wealth and new ideas, show increasing returns to population scale, which appear quantitatively as a power law of city size with an exponent β≃ 1.15 > 1. Concurrently, quantities characterizing material infrastructure typically show economies of scale, namely β≃ 0.8 < 1. Growth driven by these increasing returns is faster than exponential, which inexorably leads to crises of urban organization. To avoid them we show that growth may proceed in cycles, separated by major urban adaptations, with the unintended consequence that the duration of such cycles decreases with larger urban population size and is now estimated to be shorter than a human lifetime.
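The exponent β in the scaling relation Y = Y₀ N^β is the slope of log Y against log N. A minimal least-squares sketch (illustrative, not the authors' statistical pipeline):

```python
import math

def scaling_exponent(populations, quantities):
    """Least-squares slope of log(Y) vs log(N), i.e. beta in Y = Y0 * N**beta.
    beta > 1: increasing returns (wealth, innovation);
    beta < 1: economies of scale (material infrastructure)."""
    xs = [math.log(n) for n in populations]
    ys = [math.log(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```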

  20. Quantifying sediment dynamics over century and event timescales with Beryllium-10 and Lead-210

    Science.gov (United States)

    Belmont, P.; Willenbring, J.; Schottler, S.

    2010-12-01

    Landscape erosion is unsteady and non-uniform over human timescales. Quantifying that spatial and temporal variability is important for developing an accurate understanding of watershed erosion, as well as useful morphodynamic models that consider erosion, storage, and sediment transport pathways through watersheds. In this study, we have utilized naturally occurring meteoric 10Be and 210Pb to constrain long-term erosion rates and determine the relative importance of different sediment sources in the Le Sueur River watershed, southern Minnesota. Consistently high suspended sediment loads measured in the Le Sueur are the combined result of natural and human-induced processes. Catastrophic baselevel fall of 70 meters that occurred 13,400 years ago initiated rapid river incision with a knickpoint that has propagated 40 km up through the channel network. Over the past 150 years, agriculture has changed the vegetation cover, disturbed soils and profoundly altered watershed hydrology. Primary sediment sources include upland agricultural fields, bluffs and ravines that have resulted from Holocene river incision, and degrading banks and floodplains. Our two tracers provide complementary pieces of information to constrain erosion rates and identify sources. Both tracers exhibit high concentrations in upland soils and low concentrations in bluffs and ravines. Sediment temporarily stored in floodplains is diminished in 210Pb and enriched in 10Be concentration, which allows us to constrain the rate of channel-floodplain exchange. Results from 10Be analysis in the watershed and in the sedimentary record of Lake Pepin, a natural sediment trap downstream, suggest that agriculture has increased landscape erosion rates significantly, but that the relative magnitude of upland erosion compared to other sources has changed over time, with upland contributions being most pronounced in the mid-20th century. Suspended sediment samples analyzed for 10Be and 210Pb from different locations
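Using two tracers to apportion sediment among sources amounts to solving a linear mixing model: each tracer concentration in the suspended load is a fraction-weighted average of the source concentrations, with fractions summing to one. The sketch below is a generic unmixing illustration; the concentrations are invented, not Le Sueur data.

```python
import numpy as np

def unmix_sources(tracer_mix, tracer_sources):
    """Linear mixing model for sediment fingerprinting.
    tracer_mix: (m,) tracer concentrations in the mixture (e.g. 10Be, 210Pb).
    tracer_sources: (m, k) concentrations in each of k candidate sources.
    Returns least-squares source fractions subject to the closure row
    (fractions sum to 1); non-negativity is not enforced in this sketch."""
    m, k = tracer_sources.shape
    A = np.vstack([tracer_sources, np.ones(k)])  # m tracer rows + closure row
    b = np.append(tracer_mix, 1.0)
    frac, *_ = np.linalg.lstsq(A, b, rcond=None)
    return frac
```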

  1. 13C- and 15N-Labeling Strategies Combined with Mass Spectrometry Comprehensively Quantify Phospholipid Dynamics in C. elegans.

    Directory of Open Access Journals (Sweden)

    Blair C R Dancy

    Full Text Available Membranes define cellular and organelle boundaries, a function that is critical to all living systems. Like other biomolecules, membrane lipids are dynamically maintained, but current methods are extremely limited for monitoring lipid dynamics in living animals. We developed novel strategies in C. elegans combining 13C and 15N stable isotopes with mass spectrometry to directly quantify the replenishment rates of the individual fatty acids and intact phospholipids of the membrane. Using multiple measurements of phospholipid dynamics, we found that the phospholipid pools are replaced rapidly and at rates nearly double the turnover measured for neutral lipid populations. In fact, our analysis shows that the majority of membrane lipids are replaced each day. Furthermore, we found that stearoyl-CoA desaturases (SCDs), critical enzymes in polyunsaturated fatty acid production, play an unexpected role in influencing the overall rates of membrane maintenance as SCD depletion affected the turnover of nearly all membrane lipids. Additionally, the compromised membrane maintenance as defined by LC-MS/MS with SCD RNAi resulted in active phospholipid remodeling that we predict is critical to alleviate the impact of reduced membrane maintenance in these animals. Not only have these combined methodologies identified new facets of the impact of SCDs on the membrane, but they also have great potential to reveal many undiscovered regulators of phospholipid metabolism.
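Replenishment rates from stable-isotope incorporation are commonly computed with a first-order replacement model; the sketch below uses that textbook model, which is consistent with (but not necessarily identical to) the quantification in the study above.

```python
import math

def turnover_rate(labeled_fraction, t_days):
    """First-order lipid replacement from label incorporation:
    f(t) = 1 - exp(-k t), so k = -ln(1 - f) / t.
    Returns (rate constant k per day, half-life in days)."""
    k = -math.log(1.0 - labeled_fraction) / t_days
    return k, math.log(2.0) / k
```

For example, a pool that is half labeled after one day has k = ln 2 per day and a one-day half-life, matching the statement that the majority of membrane lipids are replaced each day.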

  2. Quantifying the Geomorphic Dynamics of the Extensively Impacted Lower Yuba River

    Science.gov (United States)

    Wyrick, J. R.; Pasternack, G. B.; Carley, J. K.; Barker, R.; Massa, D.; Bratovich, P.; Reedy, G.; Johnson, T.

    2010-12-01

    Traditionally it has been thought that rivers possess the capability of adjusting their attributes to accommodate varying flow and sediment transport regimes so that sediment in- and out-fluxes are balanced and landform conditions are “stable”. In reality, however, geomorphic drivers and boundary conditions are much more independently dynamic than classically envisioned, such that landforms may always be in a state of adjustment that is normal and appropriate. Rather than thinking of landforms as stable, it is more appropriate to think of them, and the ecosystem services with which they are associated, as resilient in response to change. Knowledge of historic, pre-human baseline conditions or regional reference conditions is limited and may not be as applicable in understanding natural geomorphic and ecosystem services as once envisioned. In light of this natural complexity, a geomorphic assessment of conditions after a large dam or other facility is built and operated may not be as simple as documenting geomorphic instability and attributing that to human impacts relative to the presumed stable baseline conditions. Rather than compare anthropogenically-impacted conditions to theoretical baseline or reference conditions, a more effective approach is to deduce the geomorphic processes in a system under different regimes and evaluate the implications for resiliency of ecosystem services. Through a mechanistic understanding of environmental systems, it may be possible to rationally rehabilitate an ecosystem to achieve resiliency in cases where it has been lost or is desirable to instill, even if it was not historically present. This analytic paradigm is being used to assess the history and on-going geomorphic dynamism of the lower Yuba River (LYR) in northern California. Despite a legacy of massive hydraulic mining waste deposition, dredger re-working of the river valley, dam construction, and flow regulation, the river has been described as lacking the

  3. Quantifying the influence of the terrestrial biosphere on glacial–interglacial climate dynamics

    Directory of Open Access Journals (Sweden)

    T. Davies-Barnard

    2017-10-01

    Full Text Available The terrestrial biosphere is thought to be a key component in the climatic variability seen in the palaeo-record. It has a direct impact on surface temperature through changes in surface albedo and evapotranspiration (so-called biogeophysical effects) and, in addition, has an important indirect effect through changes in vegetation and soil carbon storage (biogeochemical effects) and hence modulates the concentrations of greenhouse gases in the atmosphere. The biogeochemical and biogeophysical effects generally have opposite signs, meaning that the terrestrial biosphere could potentially have played only a very minor role in the dynamics of the glacial–interglacial cycles of the late Quaternary. Here we use a fully coupled dynamic atmosphere–ocean–vegetation general circulation model (GCM) to generate a set of 62 equilibrium simulations spanning the last 120 kyr. The analysis of these simulations elucidates the relative importance of the biogeophysical versus biogeochemical terrestrial biosphere interactions with climate. We find that the biogeophysical effects of vegetation account for up to an additional −0.91 °C global mean cooling, with regional cooling as large as −5 °C, but with considerable variability across the glacial–interglacial cycle. By comparison, while opposite in sign, our model estimates of the biogeochemical impacts are substantially smaller in magnitude. Offline simulations show a maximum of +0.33 °C warming due to an increase of 25 ppm above our (pre-industrial) baseline atmospheric CO2 mixing ratio. In contrast to shorter (century) timescale projections of future terrestrial biosphere response where direct and indirect responses may at times cancel out, we find that the biogeophysical effects consistently and strongly dominate the biogeochemical effect over the inter-glacial cycle. On average across the period, the terrestrial biosphere has a −0.26 °C effect on temperature, with −0.58 °C at the Last Glacial Maximum.

  4. Quantifying the influence of the terrestrial biosphere on glacial-interglacial climate dynamics

    Science.gov (United States)

    Davies-Barnard, Taraka; Ridgwell, Andy; Singarayer, Joy; Valdes, Paul

    2017-10-01

    The terrestrial biosphere is thought to be a key component in the climatic variability seen in the palaeo-record. It has a direct impact on surface temperature through changes in surface albedo and evapotranspiration (so-called biogeophysical effects) and, in addition, has an important indirect effect through changes in vegetation and soil carbon storage (biogeochemical effects) and hence modulates the concentrations of greenhouse gases in the atmosphere. The biogeochemical and biogeophysical effects generally have opposite signs, meaning that the terrestrial biosphere could potentially have played only a very minor role in the dynamics of the glacial-interglacial cycles of the late Quaternary. Here we use a fully coupled dynamic atmosphere-ocean-vegetation general circulation model (GCM) to generate a set of 62 equilibrium simulations spanning the last 120 kyr. The analysis of these simulations elucidates the relative importance of the biogeophysical versus biogeochemical terrestrial biosphere interactions with climate. We find that the biogeophysical effects of vegetation account for up to an additional -0.91 °C global mean cooling, with regional cooling as large as -5 °C, but with considerable variability across the glacial-interglacial cycle. By comparison, while opposite in sign, our model estimates of the biogeochemical impacts are substantially smaller in magnitude. Offline simulations show a maximum of +0.33 °C warming due to an increase of 25 ppm above our (pre-industrial) baseline atmospheric CO2 mixing ratio. In contrast to shorter (century) timescale projections of future terrestrial biosphere response where direct and indirect responses may at times cancel out, we find that the biogeophysical effects consistently and strongly dominate the biogeochemical effect over the inter-glacial cycle. On average across the period, the terrestrial biosphere has a -0.26 °C effect on temperature, with -0.58 °C at the Last Glacial Maximum. Depending on

  5. Syntactic computations in the language network: Characterising dynamic network properties using representational similarity analysis

    Directory of Open Access Journals (Sweden)

    Lorraine Komisarjevsky Tyler

    2013-05-01

    Full Text Available The core human capacity of syntactic analysis involves a left hemisphere network involving left inferior frontal gyrus (LIFG) and left posterior middle temporal gyrus (LpMTG), and the anatomical connections between them. Here we use MEG to determine the spatio-temporal properties of syntactic computations in this network. Listeners heard spoken sentences containing a local syntactic ambiguity (e.g. …landing planes…), at the offset of which they heard a disambiguating verb and decided whether it was an acceptable/unacceptable continuation of the sentence. We charted the time-course of processing and resolving syntactic ambiguity by measuring MEG responses from the onset of each word in the ambiguous phrase and the disambiguating word. We used representational similarity analysis (RSA) to characterize syntactic information represented in the LIFG and LpMTG over time and to investigate their relationship to each other. Testing a variety of lexico-syntactic and ambiguity models against the MEG data, our results suggest early lexico-syntactic responses in the LpMTG and later effects of ambiguity in the LIFG, pointing to a clear differentiation in the functional roles of these two regions. Our results suggest the LpMTG represents and transmits lexical information to the LIFG, which responds to and resolves the ambiguity.
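At its core, RSA compares the upper triangles of a model dissimilarity matrix and a data dissimilarity matrix. The miniature below uses Pearson correlation for brevity; rank correlations are common in the RSA literature, and this is a generic illustration, not the authors' analysis pipeline.

```python
import numpy as np

def rsa_correlation(rdm_model, rdm_data):
    """Correlate the off-diagonal upper triangles of a model representational
    dissimilarity matrix (RDM) and a data RDM (e.g. from MEG responses)."""
    iu = np.triu_indices_from(rdm_model, k=1)
    return np.corrcoef(rdm_model[iu], rdm_data[iu])[0, 1]
```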

  6. Similar temperature dependencies of glycolytic enzymes: an evolutionary adaptation to temperature dynamics?

    Directory of Open Access Journals (Sweden)

    Cruz Luisa Ana B

    2012-12-01

    Full Text Available Abstract Background Temperature strongly affects microbial growth, and many microorganisms have to deal with temperature fluctuations in their natural environment. To understand regulation strategies that underlie microbial temperature responses and adaptation, we studied glycolytic pathway kinetics in Saccharomyces cerevisiae during temperature changes. Results Saccharomyces cerevisiae was grown under different temperature regimes and glucose availability conditions. These included glucose-excess batch cultures at different temperatures and glucose-limited chemostat cultures, subjected to fast linear temperature shifts and circadian sinusoidal temperature cycles. An observed temperature-independent relation between intracellular levels of glycolytic metabolites and residual glucose concentration for all experimental conditions revealed that it is the substrate availability rather than temperature that determines intracellular metabolite profiles. This observation corresponded with predictions generated in silico with a kinetic model of yeast glycolysis, when the catalytic capacities of all glycolytic enzymes were set to share the same normalized temperature dependency. Conclusions From an evolutionary perspective, such similar temperature dependencies allow cells to adapt more rapidly to temperature changes, because they result in minimal perturbations of intracellular metabolite levels, thus circumventing the need for extensive modification of enzyme levels.
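The conclusion that a shared temperature scaling of enzyme capacities leaves metabolite levels unchanged can be illustrated with a minimal two-step pathway: if every Vmax is multiplied by the same factor f(T), f cancels from the steady-state flux balance. The Michaelis-Menten parameters below are arbitrary placeholders, not fitted yeast values:

```python
def mm(vmax, km, s):
    """Irreversible Michaelis-Menten rate."""
    return vmax * s / (km + s)

def steady_intermediate(f, s, vmax1=10.0, km1=0.5, vmax2=15.0, km2=1.0):
    """Steady-state intermediate M for a two-step pathway
    (uptake -> conversion) when both catalytic capacities are scaled
    by the same temperature factor f, at residual substrate s."""
    j = mm(f * vmax1, km1, s)     # uptake flux at residual glucose s
    v2 = f * vmax2
    # Solve f*vmax2 * M / (km2 + M) = j for M:
    return j * km2 / (v2 - j)

# The same residual glucose gives the same intermediate level at any
# temperature factor, because f cancels from the flux balance.
m_cold = steady_intermediate(f=0.5, s=0.2)
m_warm = steady_intermediate(f=1.5, s=0.2)
```

This is the in-silico prediction the abstract refers to, reduced to its simplest possible form.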

  7. Correlation between the Quantifiable Parameters of Whole Solitary Pulmonary Nodules Perfusion Imaging Derived with Dynamic CT and Nodules Size

    Directory of Open Access Journals (Sweden)

    Shiyuan LIU

    2009-05-01

    Full Text Available Background and objective The solitary pulmonary nodule (SPN) is one of the most common findings on chest radiographs. Blood flow patterns have previously been studied only at the single largest cross-section of an SPN; such an assessment samples a limited region of interest (ROI) and is unrepresentative of the SPN as a volume. Ideally, SPN volume perfusion should be measured. The aim of this study is to evaluate the correlation between the quantifiable parameters of SPN volume perfusion imaging derived with 16-slice and 64-slice spiral CT and nodule size. Methods Sixty-five patients with SPNs (diameter ≤3 cm; 42 malignant, 12 active inflammatory, 11 benign) underwent multi-location dynamic contrast material-enhanced serial CT scanning with a stable table. The mean values of valid sections were calculated as the quantifiable parameters of SPN volume perfusion imaging derived with 16-slice and 64-slice spiral CT. The correlation between these parameters and nodule size was assessed by means of linear regression analysis. Results No significant correlations were found between nodule size and any of the peak height of the SPN (PHSPN, 32.15 Hu ± 14.55 Hu), ratio of peak height of the SPN to that of the aorta (SPN-to-A ratio, 13.20% ± 6.18%), perfusion (PSPN, 29.79 ± 19.12 mL·min⁻¹·100 g⁻¹) and mean transit time (12.95 s ± 6.53 s) (r = 0.081, P = 0.419; r = 0.089, P = 0.487; r = 0.167, P = 0.077; r = 0.023, P = 0.880). Conclusion No significant correlations were found between the quantifiable parameters of SPN volume perfusion imaging derived with 16-slice and 64-slice spiral CT and nodule size.

  8. Methyl mercury dynamics in a tidal wetland quantified using in situ optical measurements

    Science.gov (United States)

    Bergamaschi, B.A.; Fleck, J.A.; Downing, B.D.; Boss, E.; Pellerin, B.; Ganju, N.K.; Schoellhamer, D.H.; Byington, A.A.; Heim, W.A.; Stephenson, M.; Fujii, R.

    2011-01-01

    We assessed monomethylmercury (MeHg) dynamics in a tidal wetland over three seasons using a novel method that employs a combination of in situ optical measurements as concentration proxies. MeHg concentrations measured over a single spring tide were extended to a concentration time series using in situ optical measurements. Tidal fluxes were calculated using modeled concentrations and bi-directional velocities obtained acoustically. The magnitude of the flux was the result of complex interactions of tides, geomorphic features, particle sorption, and random episodic events such as wind storms and precipitation. Correlation of dissolved organic matter quality measurements with timing of MeHg release suggests that MeHg is produced in areas of fluctuating redox and not limited by buildup of sulfide. The wetland was a net source of MeHg to the estuary in all seasons, with particulate flux being much higher than dissolved flux, even though dissolved concentrations were commonly higher. Estimated total MeHg yields out of the wetland were approximately 2.5 μg·m⁻²·yr⁻¹ (4–40 times previously published yields), representing a potential loading to the estuary of 80 g·yr⁻¹, equivalent to 3% of the river loading. Thus, export from tidal wetlands should be included in mass balance estimates for MeHg loading to estuaries. Also, adequate estimation of loads and the interactions between physical and biogeochemical processes in tidal wetlands might not be possible without long-term, high-frequency in situ measurements.
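The flux calculation described (proxy-derived concentration multiplied by bi-directional discharge, integrated over the tidal record) can be sketched as follows. The calibration slope, intercept, and tidal series are invented placeholders, not the study's values:

```python
import numpy as np

def mehg_from_proxy(optical, slope=0.8, intercept=0.05):
    """Hypothetical proxy calibration: MeHg (ng/L) regressed on an
    in situ optical signal; slope and intercept are placeholders."""
    return slope * optical + intercept

def net_flux_g(conc_ng_per_l, discharge_m3_s, dt_s):
    """Net constituent flux in grams over the record: sum of C*Q*dt.
    Positive discharge = ebb (export), negative = flood (import)."""
    conc_g_m3 = conc_ng_per_l * 1e-6      # ng/L -> g/m^3
    return float(np.sum(conc_g_m3 * discharge_m3_s * dt_s))

# One day at 15-min steps with a semidiurnal tidal discharge (m^3/s).
t = np.arange(0, 24 * 3600, 900.0)
q = 50.0 * np.sin(2 * np.pi * t / (12.42 * 3600))
optical = 0.3 + 0.1 * (q < 0)             # say, a higher signal on flood
flux = net_flux_g(mehg_from_proxy(optical), q, dt_s=900.0)
```

The sign of the net flux then indicates whether the wetland imports or exports MeHg over the record.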

  9. Quantifying the impact of woodpecker predation on population dynamics of the emerald ash borer (Agrilus planipennis).

    Directory of Open Access Journals (Sweden)

    David E Jennings

    Full Text Available The emerald ash borer (EAB), Agrilus planipennis, is an invasive beetle that has killed millions of ash trees (Fraxinus spp.) since it was accidentally introduced to North America in the 1990s. Understanding how predators such as woodpeckers (Picidae) affect the population dynamics of EAB should enable us to more effectively manage the spread of this beetle, and toward this end we combined two experimental approaches to elucidate the relative importance of woodpecker predation on EAB populations. First, we examined wild populations of EAB in ash trees in New York, with each tree having a section screened to exclude woodpeckers. Second, we established experimental cohorts of EAB in ash trees in Maryland, and the cohorts on half of these trees were caged to exclude woodpeckers. The following spring these trees were debarked and the fates of the EAB larvae were determined. We found that trees from which woodpeckers were excluded consistently had significantly lower levels of predation, and that woodpecker predation comprised a greater source of mortality at sites with a more established wild infestation of EAB. Additionally, there was a considerable difference between New York and Maryland in the effect that woodpecker predation had on EAB population growth, suggesting that predation alone may not be a substantial factor in controlling EAB. In our experimental cohorts we also observed that trees from which woodpeckers were excluded had a significantly higher level of parasitism. The lower level of parasitism on EAB larvae found when exposed to woodpeckers has implications for EAB biological control, suggesting that it might be prudent to exclude woodpeckers from trees when attempting to establish parasitoid populations. Future studies may include utilizing EAB larval cohorts with a range of densities to explore the functional response of woodpeckers.

  10. Quantifying the impact of woodpecker predation on population dynamics of the emerald ash borer (Agrilus planipennis).

    Science.gov (United States)

    Jennings, David E; Gould, Juli R; Vandenberg, John D; Duan, Jian J; Shrewsbury, Paula M

    2013-01-01

    The emerald ash borer (EAB), Agrilus planipennis, is an invasive beetle that has killed millions of ash trees (Fraxinus spp.) since it was accidentally introduced to North America in the 1990s. Understanding how predators such as woodpeckers (Picidae) affect the population dynamics of EAB should enable us to more effectively manage the spread of this beetle, and toward this end we combined two experimental approaches to elucidate the relative importance of woodpecker predation on EAB populations. First, we examined wild populations of EAB in ash trees in New York, with each tree having a section screened to exclude woodpeckers. Second, we established experimental cohorts of EAB in ash trees in Maryland, and the cohorts on half of these trees were caged to exclude woodpeckers. The following spring these trees were debarked and the fates of the EAB larvae were determined. We found that trees from which woodpeckers were excluded consistently had significantly lower levels of predation, and that woodpecker predation comprised a greater source of mortality at sites with a more established wild infestation of EAB. Additionally, there was a considerable difference between New York and Maryland in the effect that woodpecker predation had on EAB population growth, suggesting that predation alone may not be a substantial factor in controlling EAB. In our experimental cohorts we also observed that trees from which woodpeckers were excluded had a significantly higher level of parasitism. The lower level of parasitism on EAB larvae found when exposed to woodpeckers has implications for EAB biological control, suggesting that it might be prudent to exclude woodpeckers from trees when attempting to establish parasitoid populations. Future studies may include utilizing EAB larval cohorts with a range of densities to explore the functional response of woodpeckers.

  11. Extracting key information from historical data to quantify the transmission dynamics of smallpox

    Directory of Open Access Journals (Sweden)

    Brockmann Stefan O

    2008-08-01

    Full Text Available Abstract Background Quantification of the transmission dynamics of smallpox is crucial for optimizing intervention strategies in the event of a bioterrorist attack. This article reviews basic methods and findings in mathematical and statistical studies of smallpox which estimate key transmission parameters from historical data. Main findings First, critically important aspects in extracting key information from historical data are briefly summarized. We mention different sources of heterogeneity and potential pitfalls in utilizing historical records. Second, we discuss how smallpox spreads in the absence of interventions and how the optimal timing of quarantine and isolation measures can be determined. Case studies demonstrate the following. (1) The upper confidence limit of the 99th percentile of the incubation period is 22.2 days, suggesting that quarantine should last 23 days. (2) The highest frequency (61.8%) of secondary transmissions occurs 3–5 days after onset of fever, so that infected individuals should be isolated before the appearance of rash. (3) The U-shaped age-specific case fatality implies a vulnerability of infants and the elderly among non-immune individuals. Estimates of the transmission potential are subsequently reviewed, followed by an assessment of vaccination effects and of the expected effectiveness of interventions. Conclusion Current debates on bio-terrorism preparedness indicate that public health decision making must account for the complex interplay and balance between vaccination strategies and other public health measures (e.g. case isolation and contact tracing), taking into account the frequency of adverse events to vaccination. In this review, we summarize what has already been clarified and point out needs to analyze previous smallpox outbreaks systematically.
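The incubation-period result in (1) comes from fitting a parametric distribution to case records and taking the upper confidence limit of a high percentile. A rough bootstrap version of that calculation, using simulated rather than historical data and an assumed lognormal form, might look like:

```python
import numpy as np

def incubation_p99_upper(samples, n_boot=2000, seed=1):
    """Bootstrap upper confidence limit (97.5th bootstrap percentile)
    of the 99th percentile of a lognormal incubation period fitted by
    maximum likelihood."""
    logs = np.log(np.asarray(samples))
    rng = np.random.default_rng(seed)
    p99 = []
    for _ in range(n_boot):
        b = rng.choice(logs, size=logs.size, replace=True)
        # ML lognormal fit; 99th percentile = exp(mu + 2.3263 * sigma)
        p99.append(np.exp(b.mean() + 2.3263 * b.std()))
    return float(np.percentile(p99, 97.5))

# Simulated incubation periods in days (NOT the historical records).
rng = np.random.default_rng(0)
days = rng.lognormal(mean=np.log(12.0), sigma=0.2, size=60)
limit = incubation_p99_upper(days)
quarantine_days = int(np.ceil(limit))     # round up to whole days
```

Rounding the upper limit up to whole days mirrors how the review turns the 22.2-day limit into a 23-day quarantine recommendation.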

  12. Quantifying response to intracranial pressure normalization in idiopathic intracranial hypertension via dynamic neuroimaging.

    Science.gov (United States)

    Lublinsky, Svetlana; Kesler, Anat; Friedman, Alon; Horev, Anat; Shelef, Ilan

    2018-04-01

    Idiopathic intracranial hypertension (IIH) is characterized by elevated intracranial pressure without a clear cause. To investigate dynamic imaging findings in IIH and their relation to mechanisms underlying intracranial pressure normalization. Prospective. Eighteen IIH patients and 30 healthy controls. T1-weighted, venography, fluid-attenuated inversion recovery, and apparent diffusion coefficient sequences were acquired on a 1.5T scanner. The dural sinus was measured before and after lumbar puncture (LP). The degree of sinus occlusion was evaluated based on 95% confidence intervals of controls. We studied a number of neuroimaging biomarkers associated with IIH (sinus occlusion; optic nerve; distribution of cerebrospinal fluid into the subarachnoid space, sulci and lateral ventricles (LVs); Meckel's caves; arachnoid granulation; pituitary and choroid plexus) before and after LP, using a set of specially developed quantification techniques. Relationships among the various biomarkers were investigated (Pearson correlation coefficient) and linked to long-term disease outcomes (logistic regression). The t-test and the Wilcoxon rank test were used to compare controls with before- and after-LP data. As a result of LP, the following were found to be in good accordance with the opening pressure: relative compression of cerebrospinal fluid (R = -0.857, P < 0.001) and brain volumes (R = -0.576, P = 0.012), LV expansion (R = 0.772, P < 0.001) and venous volume (R = 0.696, P = 0.001), enlargement of the pituitary (R = 0.640, P = 0.023), and shrinkage of the subarachnoid space (R = -0.887, P < 0.001). The only parameter that had an impact on long-term prognosis was the cross-sectional size of supplemental drainage veins after LP (sensitivity of 92%, specificity of 20%, and area under the curve of 0.845, P < 0.001). We present an approach for quantitative characterization of the intracranial venous system and its implementation as a diagnostic assistance

  13. Quantifying the Impacts of Environmental Factors on Vegetation Dynamics over Climatic and Management Gradients of Central Asia

    Directory of Open Access Journals (Sweden)

    Olena Dubovyk

    2016-07-01

    Full Text Available Currently there is a lack of quantitative information regarding the driving factors of vegetation dynamics in post-Soviet Central Asia. Insufficient knowledge also exists concerning vegetation variability across sub-humid to arid climatic gradients as well as vegetation response to different land uses, from natural rangelands to intensively irrigated croplands. In this study, we analyzed the environmental drivers of vegetation dynamics in five Central Asian countries by coupling the key vegetation parameter "overall greenness", derived from Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) time series data, with its possible driving factors across various management and climatic gradients. We developed nine generalized least-squares random effects (GLS-RE) models to analyze the relative impact of environmental factors on vegetation dynamics. The obtained results quantitatively indicated the extensive control of climatic factors on managed and unmanaged vegetation cover across Central Asia. The most diverse vegetation dynamics response to climatic variables was observed for intensively managed irrigated croplands. Almost no differences in response to these variables were detected for managed non-irrigated vegetation and unmanaged (natural) vegetation across all countries. Natural vegetation and rainfed non-irrigated crop dynamics were principally associated with temperature and precipitation parameters. Variables related to temperature had the greatest relative effect on irrigated croplands and on vegetation cover within the mountainous zone. Further research should focus on incorporating the socio-economic factors discussed here in a similar analysis.

  14. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method.

    Science.gov (United States)

    Lee, Danny; Greer, Peter B; Pollock, Sean; Kim, Taeho; Keall, Paul

    2016-05-01

    The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets from respiratory bins at similar displacement and phase (where phase simply indicates whether breathing is inhale-to-exhale or exhale-to-inhale), in conjunction with newly acquired central k-space datasets (the keyhole). External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study was the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty-four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images at a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. For constant image quality, the dynamic keyhole method, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P lung tumor monitoring applications. This study demonstrates that the dynamic keyhole method is a promising technique for clinical applications such as image-guided radiation therapy requiring the MR monitoring of thoracic tumors.
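The reconstruction scheme described (library periphery plus freshly acquired central keyhole, merged per respiratory bin) can be sketched with a toy 2D k-space. The 22% keyhole fraction is the paper's reported figure; the image size and data are illustrative:

```python
import numpy as np

def dynamic_keyhole(library_kspace, current_kspace, keyhole_frac):
    """Merge the freshly acquired central band (keyhole) of k-space with
    library peripheral k-space from the matching respiratory bin, then
    reconstruct by inverse FFT. k-space is assumed fftshifted (DC at
    the center), so the central rows are the low frequencies."""
    merged = library_kspace.copy()
    ny = merged.shape[0]
    half = int(ny * keyhole_frac / 2)
    lo, hi = ny // 2 - half, ny // 2 + half
    merged[lo:hi, :] = current_kspace[lo:hi, :]   # keep library periphery
    return np.abs(np.fft.ifft2(np.fft.ifftshift(merged)))

# Toy frames: a library image and a slightly different current frame.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
lib_k = np.fft.fftshift(np.fft.fft2(img))
cur_k = np.fft.fftshift(np.fft.fft2(img + 0.01 * rng.random((64, 64))))
recon = dynamic_keyhole(lib_k, cur_k, keyhole_frac=0.22)  # 22% keyhole
```

Because only the central band must be freshly acquired each frame, acquisition is correspondingly faster than sampling the full k-space.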

  15. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging.

    Science.gov (United States)

    Hsu, Vivian M; Wes, Ari M; Tahiri, Youssef; Cornman-Homonoff, Joshua; Percec, Ivona

    2014-09-01

    The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  16. Froude number fractions to increase walking pattern dynamic similarities: application to plantar pressure study in healthy subjects.

    Science.gov (United States)

    Moretto, P; Bisiaux, M; Lafortune, M A

    2007-01-01

    The purpose of this study was to determine if using similar walking velocities obtained from fractions of the Froude number (NFr) and leg length can lead to kinematic and kinetic similarities and lower variability. Fifteen male subjects walked on a treadmill at 0.83 m·s⁻¹ (VS1) and 1.16 m·s⁻¹ (VS2), and then at two similar velocities (VSim27 and VSim37) determined from two fractions of the NFr (0.27 and 0.37), so that the average group velocity remained unchanged in both conditions (VS1 = VSim27 and VS2 = VSim37). NFr can theoretically be used to determine walking velocities proportional to leg lengths and to establish dynamic similarities between subjects. This study represents the first attempt at using this approach to examine plantar pressure. The ankle and knee joint angles were studied in the sagittal plane and the plantar pressure distribution was assessed with an in-shoe measurement device. The similarity ratios were computed from anthropometric parameters and plantar pressure peaks. Dynamically similar conditions caused a 25% reduction in leg joint angle variation and a 10% significant decrease in dimensionless pressure peak variability on average over five footprint locations. They also led to heel and under-midfoot pressure peaks proportional to body mass, and to an increase in the number of under-forefoot plantar pressure peaks proportional to body mass and/or leg length. The use of walking velocities derived from NFr allows kinematic and plantar pressure similarities between subjects to be observed and leads to lower inter-subject variability. In-shoe pressure measurements have proven valuable for the understanding of lower extremity function. Set walking velocities used for clinical assessment mask the effects of body size and individual gait mechanics. The anthropometric scaling of walking velocities (fractions of NFr) should improve identification of unique walking strategies and pathological foot functions.
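The velocity scaling underlying the study can be made concrete. Assuming the velocity form of the Froude number, Fr = v / sqrt(g·L) (with this reading, the fractions 0.27 and 0.37 give speeds close to the reported group averages of 0.83 and 1.16 m·s⁻¹ for leg lengths near 0.9 m), a dynamically similar speed follows directly from leg length:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def similar_speed(nfr_fraction, leg_length_m):
    """Dynamically similar walking speed for a given leg length, using
    the velocity form of the Froude number: v = Fr * sqrt(g * L)."""
    return nfr_fraction * math.sqrt(G * leg_length_m)

# A subject with a 0.90 m leg at the two Froude fractions of the study:
v_027 = similar_speed(0.27, 0.90)
v_037 = similar_speed(0.37, 0.90)
```

Subjects with longer legs walk proportionally faster at the same fraction, which is exactly how the protocol keeps subjects dynamically similar while the group-average speed is held constant.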

  17. Quantifying Intracranial Aneurysm Wall Permeability for Risk Assessment Using Dynamic Contrast-Enhanced MRI: A Pilot Study.

    Science.gov (United States)

    Vakil, P; Ansari, S A; Cantrell, C G; Eddleman, C S; Dehkordi, F H; Vranic, J; Hurley, M C; Batjer, H H; Bendok, B R; Carroll, T J

    2015-05-01

    Pathological changes in the intracranial aneurysm wall may lead to increases in its permeability; however the clinical significance of such changes has not been explored. The purpose of this pilot study was to quantify intracranial aneurysm wall permeability (K(trans), VL) to contrast agent as a measure of aneurysm rupture risk and compare these parameters against other established measures of rupture risk. We hypothesized K(trans) would be associated with intracranial aneurysm rupture risk as defined by various anatomic, imaging, and clinical risk factors. Twenty-seven unruptured intracranial aneurysms in 23 patients were imaged with dynamic contrast-enhanced MR imaging, and wall permeability parameters (K(trans), VL) were measured in regions adjacent to the aneurysm wall and along the paired control MCA by 2 blinded observers. K(trans) and VL were evaluated as markers of rupture risk by comparing them against established clinical (symptomatic lesions) and anatomic (size, location, morphology, multiplicity) risk metrics. Interobserver agreement was strong as shown in regression analysis (R(2) > 0.84) and intraclass correlation (intraclass correlation coefficient >0.92), indicating that the K(trans) can be reliably assessed clinically. All intracranial aneurysms had a pronounced increase in wall permeability compared with the paired healthy MCA (P risk in anatomic (P = .02) and combined anatomic/clinical (P = .03) groups independent of size. We report the first evidence of dynamic contrast-enhanced MR imaging-modeled contrast permeability in intracranial aneurysms. We found that contrast agent permeability across the aneurysm wall correlated significantly with both aneurysm size and size-independent anatomic risk factors. In addition, K(trans) was a significant and size-independent predictor of morphologically and clinically defined high-risk aneurysms. © 2015 by American Journal of Neuroradiology.

  18. Madagascar’s Mangroves: Quantifying Nation-Wide and Ecosystem Specific Dynamics, and Detailed Contemporary Mapping of Distinct Ecosystems

    Directory of Open Access Journals (Sweden)

    Trevor G. Jones

    2016-01-01

    Full Text Available Mangrove ecosystems help mitigate climate change, are highly biodiverse, and provide critical goods and services to coastal communities. Despite their importance, anthropogenic activities are rapidly degrading and deforesting mangroves world-wide. Madagascar contains 2% of the world's mangroves, many of which have undergone or are starting to exhibit signs of widespread degradation and deforestation. Remotely sensed data can be used to quantify mangrove loss and characterize remaining distributions, providing detailed, accurate, timely and updateable information. We use USGS maps produced from Landsat data to calculate nation-wide dynamics for Madagascar's mangroves from 1990 to 2010, and examine change more closely by partitioning the national distribution into primary (i.e., >1000 ha) ecosystems, with focus on four Areas of Interest (AOIs): Ambaro-Ambanja Bays (AAB), Mahajamba Bay (MHJ), Tsiribihina Manombolo Delta (TMD) and Bay des Assassins (BdA). Results indicate a nation-wide net loss of 21% (i.e., 57,359 ha) from 1990 to 2010, with dynamics varying considerably among primary mangrove ecosystems. Given the limitations of national-level maps for certain localized applications (e.g., carbon stock inventories), and building on two previous studies for AAB and MHJ, we employ Landsat data to produce detailed, contemporary mangrove maps for TMD and BdA. These contemporary, AOI-specific maps provide improved detail and accuracy over the USGS national-level maps, and are being applied to conservation and restoration initiatives through the Blue Ventures' Blue Forests programme and WWF Madagascar West Indian Ocean Programme Office's work in the region.

  19. Chemical dynamics between wells across a time-dependent barrier: Self-similarity in the Lagrangian descriptor and reactive basins.

    Science.gov (United States)

    Junginger, Andrej; Duvenbeck, Lennart; Feldmaier, Matthias; Main, Jörg; Wunner, Günter; Hernandez, Rigoberto

    2017-08-14

    In chemical or physical reaction dynamics, it is essential to distinguish precisely between reactants and products for all times. This task is especially demanding in time-dependent or driven systems because therein the dividing surface (DS) between these states often exhibits a nontrivial time-dependence. The so-called transition state (TS) trajectory has been seen to define a DS which is free of recrossings in a large number of one-dimensional reactions across time-dependent barriers and thus, allows one to determine exact reaction rates. A fundamental challenge to applying this method is the construction of the TS trajectory itself. The minimization of Lagrangian descriptors (LDs) provides a general and powerful scheme to obtain that trajectory even when perturbation theory fails. Both approaches encounter possible breakdowns when the overall potential is bounded, admitting the possibility of returns to the barrier long after the trajectories have reached the product or reactant wells. Such global dynamics cannot be captured by perturbation theory. Meanwhile, in the LD-DS approach, it leads to the emergence of additional local minima which make it difficult to extract the optimal branch associated with the desired TS trajectory. In this work, we illustrate this behavior for a time-dependent double-well potential revealing a self-similar structure of the LD, and we demonstrate how the reflections and side-minima can be addressed by an appropriate modification of the LD associated with the direct rate across the barrier.
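A minimal sketch of the LD-DS idea: compute an arc-length-type Lagrangian descriptor (the integral of |v| along forward and backward trajectories through each initial condition) and scan for its minimum, which tracks the transition state trajectory. The potential, parameters, and integrator below are illustrative choices, not those of the paper:

```python
import numpy as np

def lagrangian_descriptor(x0, v0, force, tau=5.0, dt=0.01):
    """Arc-length-type Lagrangian descriptor: integral of |v| along the
    forward and backward trajectories through (x0, v0), integrated with
    symplectic Euler."""
    ld = 0.0
    for sign in (+1.0, -1.0):
        x, v, t = x0, v0, 0.0
        for _ in range(int(tau / dt)):
            v += sign * dt * force(x, t)
            x += sign * dt * v
            t += sign * dt
            ld += abs(v) * dt
    return ld

def force(x, t):
    """F = -dV/dx for an illustrative double well with an oscillating
    Gaussian barrier: V = x^4/4 - x^2/2 + 0.3*exp(-(x - 0.1*sin(2t))^2)."""
    xb = x - 0.1 * np.sin(2.0 * t)
    return -(x**3 - x) + 0.6 * xb * np.exp(-xb * xb)

# Scan initial positions at rest; the LD minimum marks the initial
# condition closest to the transition state trajectory.
xs = np.linspace(-0.5, 0.5, 41)
lds = [lagrangian_descriptor(x, 0.0, force) for x in xs]
x_ts = float(xs[int(np.argmin(lds))])
```

In a bounded potential such as this one, trajectories return to the barrier region, and the resulting reflections produce the additional local LD minima and self-similar structure that the paper analyzes.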

  20. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks

    DEFF Research Database (Denmark)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie

    2015-01-01

    to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands...

  1. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    Science.gov (United States)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that efficient design of the DCFLP reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity-score-based two-phase heuristic approach to solve the DCFLP, considering multiple products over multiple time periods to be manufactured in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
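A toy version of the first phase (forming machine cells from pairwise similarity scores) might look like the following. The Jaccard-style score on a machine-part incidence matrix and the greedy grouping are stand-ins for the paper's actual scoring and clustering rules:

```python
import numpy as np

def similarity_scores(incidence):
    """Jaccard-style similarity between machines from a machine x part
    incidence matrix (1 = the part visits the machine)."""
    m = incidence.shape[0]
    s = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            both = np.sum(incidence[i] & incidence[j])
            either = np.sum(incidence[i] | incidence[j])
            s[i, j] = both / either if either else 0.0
    return s

def greedy_cells(incidence, threshold=0.5):
    """Phase 1: greedily group machines whose similarity to a cell
    member exceeds the threshold (a simple stand-in heuristic)."""
    s = similarity_scores(incidence)
    unassigned = list(range(incidence.shape[0]))
    cells = []
    while unassigned:
        cell = [unassigned.pop(0)]
        for k in unassigned[:]:
            if max(s[k][c] for c in cell) >= threshold:
                cell.append(k)
                unassigned.remove(k)
        cells.append(cell)
    return cells

# 4 machines x 5 parts: machines 0,1 share parts, as do machines 2,3.
inc = np.array([[1, 1, 0, 0, 1],
                [1, 1, 0, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 1, 1, 1]], dtype=int)
cells = greedy_cells(inc)   # -> [[0, 1], [2, 3]]
```

The resulting cells would then feed the second phase, which assigns cells to locations period by period to minimize material handling and rearrangement costs.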

  2. Differences and similarity in the dynamic and acoustic properties of gas microbubbles in liquid mercury and water

    International Nuclear Information System (INIS)

    Ida, Masato; Haga, Katsuhiro; Kogawa, Hiroyuki; Naoe, Takashi; Futakawa, Masatoshi

    2010-01-01

    Differences and similarities in the dynamics of microbubbles in liquid mercury and water are clarified and summarized in order to evaluate the validity and usefulness of experiments with water as an alternative to experiments with mercury. Pressure-wave-induced cavitation in liquid mercury is of particular concern in the high-power pulsed neutron sources operating in Japan and the U.S. Toward suppressing the pressure waves and cavitation, injection of gas microbubbles into liquid mercury has been attempted. However, many difficulties arise in mercury experiments, mainly because liquid mercury is opaque. Hence we and our collaborators have performed water experiments as an alternative, in conjunction with mercury experiments. In this paper, we discuss how the results obtained with water should be used and how the water experiments can be made meaningful. The non-dimensional numbers of bubbly liquids and the bubbles' rise velocity, coalescence frequency, and response to heat input were investigated theoretically for both mercury and water. A suggestion is made for 'seeing through' the bubble distribution in flowing mercury from the results of the water study, and a notable similarity is found in the effect of bubbles in absorbing the thermal expansion of the liquids. (author)

  3. Dynamic contrast-enhanced 3-T magnetic resonance imaging: a method for quantifying disease activity in early polyarthritis

    Energy Technology Data Exchange (ETDEWEB)

    Navalho, Marcio [Faculdade de Medicina da Universidade de Lisboa, Rheumatology Research Unit, Instituto de Medicina Molecular, Lisbon (Portugal); Hospital da Luz, Radiology Department, Lisbon (Portugal); Hospital da Luz, Centro de Imagiologia, Lisbon (Portugal); Resende, Catarina [Hospital da Luz, Rheumatology Department, Lisbon (Portugal); Hospital de Santa Maria, Rheumatology Department, Centro Hospitalar de Lisboa Norte, EPE, Lisbon (Portugal); Rodrigues, Ana Maria; Fonseca, Joao Eurico; Canhao, Helena [Faculdade de Medicina da Universidade de Lisboa, Rheumatology Research Unit, Instituto de Medicina Molecular, Lisbon (Portugal); Hospital de Santa Maria, Rheumatology Department, Centro Hospitalar de Lisboa Norte, EPE, Lisbon (Portugal); Gaspar, Augusto [Hospital da Luz, Radiology Department, Lisbon (Portugal); Campos, Jorge [Hospital de Santa Maria, Radiology Department, Centro Hospitalar de Lisboa Norte, EPE, Lisbon (Portugal)

    2012-01-15

    To determine whether measurement of synovial enhancement and thickness quantification parameters with 3.0-Tesla magnetic resonance imaging (3-T MRI) can reliably quantify disease activity in patients with early polyarthritis. Eighteen patients (16 women, 2 men; mean age 46 years) with early polyarthritis and less than 12 months of symptoms were included. MRI examination using a 3-T device was performed by a new approach including both wrists and hands simultaneously in the examination field-of-view. MRI scoring of disease activity included quantification of synovial enhancement with simple measurements such as the rate of early enhancement (REE; REE57 = S57/S200, where S57 and S200 are the signal intensities 57 s and 200 s after gadolinium injection) and the rate of relative enhancement (RE; RE = S200 - S0). Both wrists and hands were scored according to the Rheumatoid Arthritis MRI Scoring System (RAMRIS) for synovitis. Disease activity was clinically assessed by the 28-joint Disease Activity Score (DAS28). DAS28 score was strongly correlated with RE (r = 0.8331, p < 0.0001), REE (r = 0.8112, p < 0.0001), and RAMRIS score for synovitis (r = 0.7659, p < 0.0002). An REE score above 0.778 accurately identified patients with clinically active disease (sensitivity 92%; specificity 67%; p < 0.05). A statistically significant difference was observed in the RE, REE, and RAMRIS scores for synovitis between patients with active and inactive disease (p < 0.05). Our findings support the use of 3-T dynamic contrast-enhanced MRI for precise quantification of disease activity and for discriminating active from inactive disease in early polyarthritis. (orig.)
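    The two enhancement indices are simple arithmetic on the dynamic signal intensities; a minimal sketch with made-up signal values (the numbers below are illustrative only, not patient data):

```python
# Enhancement indices as defined in the abstract: REE57 = S57 / S200 and
# RE = S200 - S0, where S_t is the signal intensity t seconds after
# gadolinium injection. The signal values below are made up.

def rate_of_early_enhancement(s57, s200):
    """REE57: ratio of the 57 s to the 200 s post-injection signal."""
    return s57 / s200

def relative_enhancement(s0, s200):
    """RE: absolute signal gain from baseline to 200 s post-injection."""
    return s200 - s0

s0, s57, s200 = 310.0, 520.0, 640.0   # arbitrary signal intensities
ree = rate_of_early_enhancement(s57, s200)
re = relative_enhancement(s0, s200)
print(f"REE57 = {ree:.4f}, RE = {re:.1f}")
# The study's cut-off: REE above 0.778 flagged clinically active disease.
```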

  4. Quantifying Hyporheic Exchanges in a Large Scale River Reach Using Coupled 3-D Surface and Subsurface Computational Fluid Dynamics Simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Bao, J; Huang, M; Hou, Z; Perkins, W; Harding, S; Titzler, S; Ren, H; Thorne, P; Suffield, S; Murray, C; Zachara, J

    2017-03-01

    Hyporheic exchange is a critical mechanism shaping hydrological and biogeochemical processes along a river corridor. Recent studies quantifying hyporheic exchange have mostly been limited to local scales due to field inaccessibility, computational demand, and the complexity of geomorphology and subsurface geology. Surface flow conditions and subsurface physical properties are well-known factors modulating hyporheic exchange, but quantitative understanding of their impacts on the strength and direction of hyporheic exchanges at reach scales is lacking. In this study, a high-resolution computational fluid dynamics (CFD) model that couples surface and subsurface flow and transport is employed to simulate hyporheic exchanges in a 7-km long reach along the main-stem of the Columbia River. Assuming that the hyporheic exchange does not affect surface water flow conditions, given its negligible magnitude compared to the volume and velocity of river water, we developed a one-way coupled surface and subsurface water flow model using the commercial CFD software STAR-CCM+. The model integrates a Reynolds-averaged Navier-Stokes (RANS) equation solver with a realizable κ-ε two-layer turbulence model, a two-layer all-y+ wall treatment, and the volume of fluid (VOF) method, and is used to simulate hyporheic exchanges by tracking the free water-air interface as well as flow in the river and the subsurface porous media. The model is validated against measurements from an acoustic Doppler current profiler (ADCP) in the stream water and hyporheic fluxes derived from a set of temperature profilers installed across the riverbed. The validated model is then employed to systematically investigate how hyporheic exchanges are influenced by surface water fluid dynamics strongly regulated by upstream dam operations, as well as by subsurface structures (e.g. thickness of riverbed and subsurface formation layers) and hydrogeological properties (e.g. permeability). The results

  5. Quantifying geomorphic controls on riparian forest dynamics using a linked physical-biological model: implications for river corridor conservation

    Science.gov (United States)

    Stella, J. C.; Harper, E. B.; Fremier, A. K.; Hayden, M. K.; Battles, J. J.

    2009-12-01

    In high-order alluvial river systems, physical factors of flooding and channel migration are particularly important drivers of riparian forest dynamics because they regulate habitat creation, resource fluxes of water, nutrients and light that are critical for growth, and mortality from fluvial disturbance. Predicting vegetation composition and dynamics at individual sites in this setting is challenging, both because of the stochastic nature of the flood regime and the spatial variability of flood events. Ecological models that correlate environmental factors with species’ occurrence and abundance (e.g., ’niche models’) often work well in infrequently-disturbed upland habitats, but are less useful in river corridors and other dynamic zones where environmental conditions fluctuate greatly and selection pressures on disturbance-adapted organisms are complex. In an effort to help conserve critical riparian forest habitat along the middle Sacramento River, CA, we are taking a mechanistic approach to quantify linkages between fluvial and biotic processes for Fremont cottonwood (Populus fremontii), a keystone pioneer tree in dryland river ecosystems of the U.S. Southwest. To predict the corridor-wide population effects of projected changes to the disturbance regime from flow regulation, climate change, and landscape modifications, we have coupled a physical model of channel meandering with a patch-based population model that incorporates the climatic, hydrologic, and topographic factors critical for tree recruitment and survival. We employed these linked simulations to study the relative influence of the two most critical habitat types--point bars and abandoned channels--in sustaining the corridor-wide cottonwood population over a 175-year period. The physical model uses discharge data and channel planform to predict the spatial distribution of new habitat patches; the population model runs on top of this physical template to track tree colonization and survival on

  6. Ocean Acidification Experiments in Large-Scale Mesocosms Reveal Similar Dynamics of Dissolved Organic Matter Production and Biotransformation

    Directory of Open Access Journals (Sweden)

    Maren Zark

    2017-09-01

    similar succession patterns for individual compound pools during a phytoplankton bloom and subsequent accumulation of these compounds were observed. The similar behavior of DOM production and biotransformation during and following a phytoplankton bloom irrespective of plankton community composition and CO2 treatment provides novel insights into general dynamics of the marine DOM pool.

  7. Investigating Forest Harvest Effects on DOC Concentration and Quality: An In Situ, High Resolution Approach to Quantifying DOC Export Dynamics

    Science.gov (United States)

    Jollymore, A. J.; Johnson, M. S.; Hawthorne, I.

    2013-12-01

    the months following harvest. A major advantage of this study is the use of in situ measurements, allowing for high temporal resolution of DOC dynamics occurring within specific hydrologic events. For example, concentration-discharge relationships for both the pre- and post-logging periods demonstrate similar clockwise hysteresis during individual storm events, while the magnitude of change dramatically increased during the post-logging period. However, in situ measurements of SUVA over this period suggest that DOC quality may be less affected by forest harvest than overall DOC concentration, where high frequency data also allows for the observation of SUVA and spectral slope responses to specific hydrologic events during the pre- and post- harvest period.

  8. Similarity measure and topology evolution of foreign exchange markets using dynamic time warping method: Evidence from minimal spanning tree

    Science.gov (United States)

    Wang, Gang-Jin; Xie, Chi; Han, Feng; Sun, Bo

    2012-08-01

    In this study, we employ a dynamic time warping method to study the topology of similarity networks among 35 major currencies in international foreign exchange (FX) markets, measured by the minimal spanning tree (MST) approach, which is expected to overcome the synchronous restriction of the Pearson correlation coefficient. In the empirical process, firstly, we subdivide the analysis period from June 2005 to May 2011 into three sub-periods: before, during, and after the US sub-prime crisis. Secondly, we choose NZD (New Zealand dollar) as the numeraire and then, analyze the topology evolution of FX markets in terms of the structure changes of MSTs during the above periods. We also present the hierarchical tree associated with the MST to study the currency clusters in each sub-period. Our results confirm that USD and EUR are the predominant world currencies. But USD gradually loses the most central position while EUR acts as a stable center in the MST passing through the crisis. Furthermore, an interesting finding is that, after the crisis, SGD (Singapore dollar) becomes a new center currency for the network.
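    A minimal sketch of the MST step, assuming a precomputed symmetric distance matrix (a toy 4-node example here, standing in for the 35-currency DTW distance matrix) and Prim's algorithm:

```python
# Build a minimal spanning tree (MST) from a pairwise distance matrix with
# Prim's algorithm, as done for the currency network. The 4x4 matrix below
# is a toy stand-in for DTW-based distances between 35 currency series.

def prim_mst(dist):
    """Return MST edges (i, j, weight) for a symmetric distance matrix."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # pick the cheapest edge crossing from the tree to an outside node
        i, j = min(((a, b) for a in in_tree for b in range(n)
                    if b not in in_tree), key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j, dist[i][j]))
        in_tree.add(j)
    return edges

D = [[0.0, 0.2, 0.9, 0.8],
     [0.2, 0.0, 0.7, 0.9],
     [0.9, 0.7, 0.0, 0.3],
     [0.8, 0.9, 0.3, 0.0]]

mst_edges = prim_mst(D)
print(mst_edges)  # three edges linking all four "currencies"
```

    The hierarchical tree mentioned in the abstract can then be read off from the order in which nodes join clusters as the distance threshold grows.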

  9. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity.

    Science.gov (United States)

    Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2018-04-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combine the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structure similarity index has the best performance in terms of noise reduction and resolution improvement.

  10. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    Science.gov (United States)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.

  11. Quantifying nutrient export and deposition with a dynamic landscape evolution model for the lake Bolsena watershed, Italy

    Science.gov (United States)

    Pelorosso, Raffaele; Temme, Arnoud; Gobattoni, Federica; Leone, Antonio

    2010-05-01

    other hand, recent research has been improving landscape evolution simulation models. One such model, LAPSUS (LandscApe ProcesS modelling at mUlti-dimensions and Scales; Schoorl et al., 2002; Temme et al., 2009), has been applied to the Lake Bolsena watershed in Lazio, Italy. LAPSUS treats erosion as a naturally occurring process in landscape evolution and shapes landscapes by both erosion and deposition, allowing interactions at different spatial and temporal resolutions and extents. An integrated approach to quantify nutrient export and deposition at catchment scale is presented and discussed here, coupling this dynamic landscape evolution model (LAPSUS) with the characteristic transport equations for nutrients.

  12. Quantifying Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Quantifying Matter explains how scientists learned to measure matter and quantify some of its most fascinating and useful properties. It presents many of the most important intellectual achievements and technical developments that led to the scientific interpretation of substance. Complete with full-color photographs, this exciting new volume describes the basic characteristics and properties of matter. Chapters include: Exploring the Nature of Matter; The Origin of Matter; The Search for Substance; Quantifying Matter During the Scientific Revolution; and Understanding Matter's Electromagnet

  13. Quantifying the Dynamics of Field Cancerization in Tobacco-Related Head and Neck Cancer: A Multiscale Modeling Approach.

    Science.gov (United States)

    Ryser, Marc D; Lee, Walter T; Ready, Neal E; Leder, Kevin Z; Foo, Jasmine

    2016-12-15

    High rates of local recurrence in tobacco-related head and neck squamous cell carcinoma (HNSCC) are commonly attributed to unresected fields of precancerous tissue. Because they are not easily detectable at the time of surgery without additional biopsies, there is a need for noninvasive methods to predict the extent and dynamics of these fields. Here, we developed a spatial stochastic model of tobacco-related HNSCC at the tissue level and calibrated the model using a Bayesian framework and population-level incidence data from the Surveillance, Epidemiology, and End Results (SEER) registry. Probabilistic model analyses were performed to predict the field geometry at time of diagnosis, and model predictions of age-specific recurrence risks were tested against outcome data from SEER. The calibrated models predicted a strong dependence of the local field size on age at diagnosis, with a doubling of the expected field diameter between ages at diagnosis of 50 and 90 years, respectively. Similarly, the probability of harboring multiple, clonally unrelated fields at the time of diagnosis was found to increase substantially with patient age. On the basis of these findings, we hypothesized a higher recurrence risk in older than in younger patients when treated by surgery alone; we successfully tested this hypothesis using age-stratified outcome data. Further clinical studies are needed to validate the model predictions in a patient-specific setting. This work highlights the importance of spatial structure in models of epithelial carcinogenesis and suggests that patient age at diagnosis may be a critical predictor of the size and multiplicity of precancerous lesions. Cancer Res; 76(24); 7078-88. ©2016 AACR. ©2016 American Association for Cancer Research.

  14. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging

    Directory of Open Access Journals (Sweden)

    Vivian M. Hsu, MD

    2014-09-01

    Conclusions: This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  15. Similarity recognition of online data curves based on dynamic spatial time warping for the estimation of lithium-ion battery capacity

    Science.gov (United States)

    Tao, Laifa; Lu, Chen; Noktehdan, Azadeh

    2015-10-01

    Battery capacity estimation is a significant recent challenge given the complex physical and chemical processes that occur within batteries and the restrictions on the accessibility of capacity degradation data. In this study, we describe an approach called dynamic spatial time warping, which is used to determine the similarity of two arbitrary curves. Unlike classical dynamic time warping methods, this approach maintains the invariance of curve similarity to rotations and translations of the curves, which is vital in curve similarity search. Moreover, it utilizes online charging or discharging data that are easily collected and do not require special assumptions. The accuracy of this approach is verified using NASA battery datasets. Results suggest that the proposed approach provides a highly accurate means of estimating battery capacity, at a lower time cost than traditional dynamic time warping methods, for different individuals and under various operating conditions.
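    For reference, classical dynamic time warping, which the proposed dynamic spatial time warping extends with rotation and translation invariance, can be sketched as follows (a textbook implementation, not the authors' code):

```python
# Classical dynamic time warping (DTW) distance between two curves. The
# paper's "dynamic spatial time warping" additionally makes the measure
# invariant to rotations and translations; this shows only the standard
# DTW core it builds on.

def dtw_distance(x, y):
    """DTW distance between two 1-D sequences (absolute-difference cost)."""
    n, m = len(x), len(y)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 1.0, 2.0, 3.0, 4.0]   # same shape, stretched in time
print(dtw_distance(a, b))        # 0.0: DTW absorbs the time stretch
```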

  16. Correlation between the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules and tumor size

    Directory of Open Access Journals (Sweden)

    Chenshi ZHANG

    2008-02-01

    Full Text Available Background and Objective: The solitary pulmonary nodule (SPN) is one of the most common findings on chest radiographs. Multi-slice spiral computed tomography (MSCT) makes it possible to provide more accurate quantitative information about the blood flow patterns of solitary pulmonary nodules. The aim of this study is to evaluate the correlation between the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules and tumor size. Methods: 68 patients with malignant solitary pulmonary nodules (diameter <= 4 cm) underwent multi-location dynamic contrast-enhanced serial CT (nonionic contrast material was administered via the antecubital vein at a rate of 4 mL/s by an autoinjector; 4*5 mm or 4*2.5 mm scanning modes with a stable table were performed). Precontrast and postcontrast attenuation on every scan was recorded. Perfusion (PSPN), peak height (PHSPN), ratio of peak height of the SPN to that of the aorta (SPN-to-A ratio), and mean transit time (MTT) were calculated. The correlations between these quantifiable parameters of blood flow pattern and tumor size were assessed by means of linear regression analysis. Results: No significant correlations were found between tumor size and each of peak height (PHSPN), SPN-to-A ratio, perfusion (PSPN), and mean transit time (r=0.18, P=0.14; r=0.20, P=0.09; r=0.01, P=0.95; r=0.01, P=0.93). Conclusion: No significant correlation is found between tumor size and the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules.

  17. Turbulence, dynamic similarity and scale effects in high-velocity free-surface flows above a stepped chute

    Science.gov (United States)

    Felder, Stefan; Chanson, Hubert

    2009-07-01

    In high-velocity free-surface flows, air entrainment is common through the interface, and intense interactions take place between turbulent structures and entrained bubbles. Two-phase flow properties were measured herein in high-velocity open channel flows above a stepped chute. Detailed turbulence measurements were conducted in a large-size facility, and a comparative analysis was applied to test the validity of the Froude and Reynolds similarities. The results showed consistently that the Froude similitude was not satisfied using a 2:1 geometric scaling ratio. Fewer entrained bubbles and comparatively larger bubble sizes were observed at the smaller Reynolds numbers, as well as lower turbulence levels and larger turbulent length and time scales. The results implied that small-size models underestimate the rate of energy dissipation and the aeration efficiency of prototype stepped spillways for similar flow conditions. A Reynolds similitude was likewise tested, and the results also showed some significant scale effects. However, a number of self-similar relationships remained invariant under changes of scale, confirming the analysis of Chanson and Carosi (Exp Fluids 42:385-401, 2007). The finding is significant because self-similarity may provide a picture general enough to be used to characterise the air-water flow field in large prototype channels.
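    The tension between Froude and Reynolds similitude can be made concrete with a quick calculation: matching the Froude number on a 2:1 model forces the Reynolds number down by a factor of (1/2)^1.5. The prototype velocity and depth below are illustrative assumptions, not values from the study.

```python
# Froude similitude for a 2:1 scale model: velocities scale as sqrt(Lr)
# and Reynolds numbers as Lr**1.5, so a geometrically scaled model run at
# the same Froude number has a much lower Reynolds number (the origin of
# the scale effects reported). Prototype values below are illustrative.
import math

Lr = 0.5                 # model/prototype length ratio (2:1 model)
V_proto = 6.0            # prototype flow velocity, m/s (assumed)
h_proto = 0.30           # prototype flow depth, m (assumed)
nu = 1.0e-6              # kinematic viscosity of water, m^2/s

V_model = V_proto * math.sqrt(Lr)   # Froude-scaled model velocity
h_model = h_proto * Lr              # geometrically scaled depth

Fr_proto = V_proto / math.sqrt(9.81 * h_proto)
Fr_model = V_model / math.sqrt(9.81 * h_model)
Re_proto = V_proto * h_proto / nu
Re_model = V_model * h_model / nu

print(f"Froude:   {Fr_proto:.3f} vs {Fr_model:.3f} (matched by design)")
print(f"Reynolds: {Re_proto:.0f} vs {Re_model:.0f} "
      f"(ratio {Re_model / Re_proto:.4f} = Lr**1.5)")
```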

  18. Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.

    Science.gov (United States)

    MacMartin, Douglas G; Tziperman, Eli

    2014-09-08

    Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
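    A minimal sketch of the transfer-function estimate H(f) = Pxy(f)/Pxx(f) that underlies this kind of analysis, using a synthetic first-order system whose true frequency response is known. The system, its parameters, and the single-segment estimator are illustrative; real time series would need Welch-style segment averaging to suppress noise.

```python
# Estimate a transfer function H(f) = Pxy(f) / Pxx(f) from an input/output
# pair of time series. The output is synthesized in the frequency domain
# (circular filtering) so the single-segment estimate recovers H exactly;
# with real data, segment-averaged spectra would be used instead.
import numpy as np

N = 4096
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                          # "input" time series

f = np.fft.rfftfreq(N)                              # cycles per sample
a, b = 0.9, 1.0                                     # illustrative AR(1) system
H_true = b / (1.0 - a * np.exp(-2j * np.pi * f))    # known response

X = np.fft.rfft(x)
y = np.fft.irfft(H_true * X, n=N)                   # synthetic "output"

Y = np.fft.rfft(y)
H_est = (Y * np.conj(X)) / (X * np.conj(X))         # Pxy / Pxx estimate

print(np.max(np.abs(H_est - H_true)))               # ~0, numerical noise only
```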

  19. Quantifying the contribution of chromatin dynamics to stochastic gene expression reveals long, locus-dependent periods between transcriptional bursts.

    Science.gov (United States)

    Viñuelas, José; Kaneko, Gaël; Coulon, Antoine; Vallin, Elodie; Morin, Valérie; Mejia-Pous, Camila; Kupiec, Jean-Jacques; Beslon, Guillaume; Gandrillon, Olivier

    2013-02-25

    A number of studies have established that stochasticity in gene expression may play an important role in many biological phenomena. This therefore calls for further investigations to identify the molecular mechanisms at stake, in order to understand and manipulate cell-to-cell variability. In this work, we explored the role played by chromatin dynamics in the regulation of stochastic gene expression in higher eukaryotic cells. For this purpose, we generated isogenic chicken-cell populations expressing a fluorescent reporter integrated in one copy per clone. Although the clones differed only in the genetic locus at which the reporter was inserted, they showed markedly different fluorescence distributions, revealing different levels of stochastic gene expression. Use of chromatin-modifying agents showed that direct manipulation of chromatin dynamics had a marked effect on the extent of stochastic gene expression. To better understand the molecular mechanism involved in these phenomena, we fitted these data to a two-state model describing the opening/closing process of the chromatin. We found that the differences between clones seemed to be due mainly to the duration of the closed state, and that the agents we used mainly seem to act on the opening probability. In this study, we report biological experiments combined with computational modeling, highlighting the importance of chromatin dynamics in stochastic gene expression. This work sheds new light on the mechanisms of gene expression in higher eukaryotic cells, and argues in favor of relatively slow dynamics with long (hours to days) periods of quiet state.
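    The two-state (open/closed) chromatin model the authors fit can be sketched with a simple Gillespie-style simulation: the locus toggles between states and transcribes only while open. The rate constants below are illustrative choices that make the closed state dominate, mimicking the long quiet periods reported; they are not the fitted values.

```python
# Gillespie-style simulation of a two-state (open/closed) chromatin model:
# the promoter toggles open <-> closed and initiates transcription only
# while open. Rates are illustrative (units assumed 1/hour), chosen so
# the closed state dominates and expression comes in bursts.
import random

def simulate(k_open=0.05, k_close=0.5, k_tx=10.0, t_end=1000.0, seed=1):
    """Return the times of transcription-initiation events up to t_end."""
    rng = random.Random(seed)
    t, state, events = 0.0, "closed", []
    while t < t_end:
        if state == "closed":
            t += rng.expovariate(k_open)      # wait for chromatin to open
            state = "open"
        else:
            dt_open = rng.expovariate(k_close)  # duration of the open window
            t_tx = t + rng.expovariate(k_tx)
            while t_tx < t + dt_open:           # initiations while open
                events.append(t_tx)
                t_tx += rng.expovariate(k_tx)
            t += dt_open
            state = "closed"
    return [e for e in events if e <= t_end]

events = simulate()
print(len(events), "transcripts in bursts separated by long closed periods")
```

    With these rates the locus is open only about 9% of the time (k_open / (k_open + k_close)), so transcripts arrive in bursts, the qualitative behavior the two-state fit captures.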

  20. Quantifying the dynamics of flow within a permeable bed using time-resolved endoscopic particle imaging velocimetry (EPIV)

    Energy Technology Data Exchange (ETDEWEB)

    Blois, G. [University of Birmingham, School of Geography, Earth and Environmental Sciences, Birmingham (United Kingdom); University of Illinois, Department of Mechanical Science and Engineering, Urbana, IL (United States); Sambrook Smith, G.H.; Lead, J.R. [University of Birmingham, School of Geography, Earth and Environmental Sciences, Birmingham (United Kingdom); Best, J.L. [University of Illinois, Departments of Geology, Geography, Mechanical Science and Engineering, and Ven Te Chow Hydrosystems Laboratory, Urbana, IL (United States); Hardy, R.J. [Durham University, Department of Geography, Science Laboratories, Durham (United Kingdom)

    2012-07-15

    This paper presents results of an experimental study investigating the mean and temporal evolution of flow within the pore space of a packed bed overlain by a free-surface flow. Data were collected by an endoscopic PIV (EPIV) technique. EPIV allows the instantaneous velocity field within the pore space to be quantified at a high spatio-temporal resolution, thus permitting investigation of the structure of turbulent subsurface flow produced by a high Reynolds number freestream flow (Re_s in the range 9.8 x 10^3 - 9.7 x 10^4). Evolution of coherent flow structures within the pore space is shown to be driven by jet flow, with the interaction of this jet with the pore flow generating distinct coherent flow structures. The effects of freestream water depth, Reynolds and Froude numbers are investigated. (orig.)

  1. Similar below-ground carbon cycling dynamics but contrasting modes of nitrogen cycling between arbuscular mycorrhizal and ectomycorrhizal forests.

    Science.gov (United States)

    Lin, Guigang; McCormack, M Luke; Ma, Chengen; Guo, Dali

    2017-02-01

    Compared with ectomycorrhizal (ECM) forests, arbuscular mycorrhizal (AM) forests are hypothesized to have higher carbon (C) cycling rates and a more open nitrogen (N) cycle. To test this hypothesis, we synthesized 645 observations, including 22 variables related to below-ground C and N dynamics from 100 sites, where AM and ECM forests co-occurred at the same site. Leaf litter quality was lower in ECM than in AM trees, leading to greater forest floor C stocks in ECM forests. By contrast, AM forests had significantly higher mineral soil C concentrations, and this result was strongly mediated by plant traits and climate. No significant differences were found between AM and ECM forests in C fluxes and labile C concentrations. Furthermore, inorganic N concentrations, net N mineralization and nitrification rates were all higher in AM than in ECM forests, indicating 'mineral' N economy in AM but 'organic' N economy in ECM trees. AM and ECM forests show systematic differences in mineral vs organic N cycling, and thus mycorrhizal type may be useful in predicting how different tree species respond to multiple environmental change factors. By contrast, mycorrhizal type alone cannot reliably predict below-ground C dynamics without considering plant traits and climate. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  2. Quantifying dynamic mechanical properties of human placenta tissue using optimization techniques with specimen-specific finite-element models.

    Science.gov (United States)

    Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W; Rupp, Jonathan D

    2009-11-13

    Motor-vehicle crashes are the leading cause of fetal deaths resulting from maternal trauma in the United States, and placental abruption is the most common cause of these deaths. To minimize this injury, new assessment tools, such as crash-test dummies and computational models of pregnant women, are needed to evaluate vehicle restraint systems with respect to reducing the risk of placental abruption. Developing these models requires accurate material properties for tissues in the pregnant abdomen under dynamic loading conditions that can occur in crashes. A method has been developed for determining dynamic material properties of human soft tissues that combines results from uniaxial tensile tests, specimen-specific finite-element models based on laser scans that accurately capture non-uniform tissue-specimen geometry, and optimization techniques. The current study applies this method to characterizing material properties of placental tissue. For 21 placenta specimens tested at a strain rate of 12/s, the mean failure strain is 0.472 ± 0.097 and the mean failure stress is 34.80 ± 12.62 kPa. A first-order Ogden material model with ground-state shear modulus (μ) of 23.97 ± 5.52 kPa and exponent (α1) of 3.66 ± 1.90 best fits the test results. The new method provides a nearly 40% error reduction (p < 0.001) compared to traditional curve-fitting methods by considering detailed specimen geometry, loading conditions, and dynamic effects from high-speed loading. The proposed method can be applied to determine mechanical properties of other soft biological tissues.
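    Under one common Ogden convention (strain energy W = (2μ/α²)(λ₁^α + λ₂^α + λ₃^α − 3), incompressible uniaxial tension), the first-order nominal stress is P(λ) = (2μ/α)(λ^(α−1) − λ^(−α/2−1)). Evaluating this at the reported mean parameters is a quick consistency sketch; the authors' exact convention may differ.

```python
# Uniaxial nominal (first Piola-Kirchhoff) stress for an incompressible
# first-order Ogden solid, P = (2*mu/alpha) * (lam**(alpha - 1)
# - lam**(-alpha/2 - 1)). The Ogden convention assumed here
# (W = (2*mu/alpha**2) * (lam1**alpha + lam2**alpha + lam3**alpha - 3))
# may differ from the authors' exact formulation.

def ogden_uniaxial_stress(lam, mu, alpha):
    """Nominal stress [kPa] at stretch lam for a first-order Ogden solid."""
    return (2.0 * mu / alpha) * (lam ** (alpha - 1.0)
                                 - lam ** (-alpha / 2.0 - 1.0))

mu, alpha = 23.97, 3.66       # mean fitted parameters from the abstract
lam_fail = 1.0 + 0.472        # stretch at the mean failure strain

print(f"P({lam_fail:.3f}) = "
      f"{ogden_uniaxial_stress(lam_fail, mu, alpha):.1f} kPa")
```

    With these means the formula gives a stress in the low 30s of kPa, in the neighborhood of the reported mean failure stress of 34.80 ± 12.62 kPa.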

  3. Spatio-temporal image correlation spectroscopy and super-resolution microscopy to quantify molecular dynamics in T cells.

    Science.gov (United States)

    Ashdown, George W; Owen, Dylan M

    2018-02-02

    Many cellular processes are regulated by the spatio-temporal organisation of signalling complexes, cytoskeletal components and membranes. One such example is at the T cell immunological synapse where the retrograde flow of cortical filamentous (F)-actin from the synapse periphery drives signalling protein microclusters towards the synapse centre. The density of this mesh however, makes visualisation and analysis of individual actin fibres difficult due to the resolution limit of conventional microscopy. Recently, super-resolution methods such as structured illumination microscopy (SIM) have surpassed this resolution limit. Here, we apply SIM to better visualise the dense cortical actin meshwork in T cell synapses formed against activating, antibody-coated surfaces and image under total-internal reflection fluorescence (TIRF) illumination. To analyse the observed molecular flows, and the relationship between them, we apply spatio-temporal image correlation spectroscopy (STICS) and its cross-correlation variant (STICCS). We show that the dynamic cortical actin mesh can be visualised with unprecedented detail and that STICS/STICCS can output accurate, quantitative maps of molecular flow velocity and directionality from such data. We find that the actin flow can be disrupted using small molecule inhibitors of actin polymerisation. This combination of imaging and quantitative analysis may provide an important new tool for researchers to investigate the molecular dynamics at cellular length scales. Here we demonstrate the retrograde flow of F-actin which may be important for the clustering and dynamics of key signalling proteins within the plasma membrane, a phenomenon which is vital to correct T cell activation and therefore the mounting of an effective immune response. Copyright © 2018. Published by Elsevier Inc.
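    The velocity maps that STICS produces rest on locating the peak of a space-time cross-correlation between frames; a toy sketch with a synthetic, periodically shifted image pair (the imposed shift stands in for flow between frames; this is not the authors' pipeline):

```python
# Core of a correlation-based flow estimate (the idea behind STICS-type
# velocity maps): the peak of the spatial cross-correlation between two
# frames separated in time gives the mean displacement per frame interval,
# hence a flow vector. A synthetic periodic shift stands in for flow.
import numpy as np

rng = np.random.default_rng(3)
frame0 = rng.random((64, 64))                   # synthetic fluorescence frame
shift = (3, 5)                                  # "flow": 3 px down, 5 px right
frame1 = np.roll(frame0, shift, axis=(0, 1))    # next frame, shifted content

# Cross-correlation via FFT; its argmax is the displacement between frames
corr = np.fft.ifft2(np.fft.fft2(frame1) * np.conj(np.fft.fft2(frame0))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print("recovered displacement:", (int(dy), int(dx)))  # matches imposed shift
```

    Dividing the recovered displacement by the frame interval gives the local flow velocity; STICS does this per subregion and per time lag, and STICCS cross-correlates two channels to relate the flows of different molecular species.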

  4. Quantifying Km-scale Hydrological Exchange Flows under Dynamic Flows and Their Influences on River Corridor Biogeochemistry

    Science.gov (United States)

    Chen, X.; Song, X.; Shuai, P.; Hammond, G. E.; Ren, H.; Zachara, J. M.

    2017-12-01

    Hydrologic exchange flows (HEFs) in rivers play vital roles in watershed ecological and biogeochemical functions due to their strong capacity to attenuate contaminants and process significant quantities of carbon and nutrients. While most existing HEF studies focus on headwater systems under the assumption of steady-state flow, there is a lack of understanding of large-scale HEFs in high-order regulated rivers that experience high-frequency stage fluctuations. The large variability of HEFs results from interactions between spatial heterogeneity in hydrogeologic properties and temporal variation in river discharge induced by natural or anthropogenic perturbations. Our 9-year spatially distributed dataset (water elevation, specific conductance, and temperature), combined with mechanistic hydrobiogeochemical simulations, has revealed complex spatial and temporal dynamics in km-scale HEFs and their significant impacts on contaminant plume mobility and hyporheic biogeochemical processes along the Hanford Reach. Extended multidirectional flow behaviors of unconfined river corridor groundwater were observed hundreds of meters inland from the river shore, resulting from discharge-dependent HEFs. An appropriately sized modeling domain to capture the impact of regional groundwater flow, as well as knowledge of subsurface structures controlling intra-aquifer hydrologic connectivity, were essential to realistically model transient storage in this large-scale river corridor. This work showed that both river water and mobile groundwater contaminants could serve as effective tracers of HEFs, thus providing valuable information for evaluating and validating HEF models. Multimodal residence time distributions with long tails resulted from the mixture of long and short exchange pathways, which consequently impacts the carbon and nutrient cycling within the river corridor. Improved understanding of HEFs using integrated observational and modeling approaches sheds light on

  5. Annual dynamics of daylight variability and contrast: a simulation-based approach to quantifying visual effects in architecture

    CERN Document Server

    Rockcastle, Siobhan

    2013-01-01

    Daylight is a dynamic source of illumination in architectural space, creating diverse and ephemeral configurations of light and shadow within the built environment. Perceptual qualities of daylight, such as contrast and temporal variability, are essential to our understanding of both material and visual effects in architecture. Although spatial contrast and light variability are fundamental to the visual experience of architecture, architects still rely primarily on intuition to evaluate their designs because there are few metrics that address these factors. Through an analysis of contemporary

  6. Quantifying the potential of automated dynamic solar shading in office buildings through integrated simulations of energy and daylight

    DEFF Research Database (Denmark)

    Nielsen, Martin Vraa; Svendsen, Svend; Bjerregaard Jensen, Lotte

    2011-01-01

    The façade design is and should be considered a central issue in the design of energy-efficient buildings. That is why dynamic façade components are increasingly used to adapt to both internal and external impacts, and to cope with a reduction in energy consumption and an increase in occupant...... them with various window heights and orientations. Their performance was evaluated on the basis of the building’s total energy demand, its energy demand for heating, cooling and lighting, and also its daylight factors. Simulation results comparing the three façade alternatives show potential...

  7. Self-Similar Nonlinear Dynamical Solutions for One-Component Nonneutral Plasma in a Time-Dependent Linear Focusing Field

    International Nuclear Information System (INIS)

    Qin, Hong; Davidson, Ronald C.

    2011-01-01

    In a linear trap confining a one-component nonneutral plasma, the external focusing force is a linear function of the configuration coordinates and/or the velocity coordinates. Linear traps include the classical Paul trap and the Penning trap, as well as the newly proposed rotating-radio-frequency traps and the Mobius accelerator. This paper describes a class of self-similar nonlinear solutions of nonneutral plasma in general time-dependent linear focusing devices, with a self-consistent electrostatic field. This class of nonlinear solutions includes many known solutions as special cases.

  8. Feedforward compensation for novel dynamics depends on force field orientation but is similar for the left and right arms.

    Science.gov (United States)

    Reuter, Eva-Maria; Cunnington, Ross; Mattingley, Jason B; Riek, Stephan; Carroll, Timothy J

    2016-11-01

    There are well-documented differences in the way that people typically perform identical motor tasks with their dominant and the nondominant arms. According to Yadav and Sainburg's (Neuroscience 196: 153-167, 2011) hybrid-control model, this is because the two arms rely to different degrees on impedance control versus predictive control processes. Here, we assessed whether differences in limb control mechanisms influence the rate of feedforward compensation to a novel dynamic environment. Seventy-five healthy, right-handed participants, divided into four subsamples depending on the arm (left, right) and direction of the force field (ipsilateral, contralateral), reached to central targets in velocity-dependent curl force fields. We assessed the rate at which participants developed predictive compensation for the force field using intermittent error-clamp trials and assessed both kinematic errors and initial aiming angles in the field trials. Participants who were exposed to fields that pushed the limb toward ipsilateral space reduced kinematic errors more slowly, built up less predictive field compensation, and relied more on strategic reaiming than those exposed to contralateral fields. However, there were no significant differences in predictive field compensation or kinematic errors between limbs, suggesting that participants using either the left or the right arm could adapt equally well to novel dynamics. It therefore appears that the distinct preferences in control mechanisms typically observed for the dominant and nondominant arms reflect a default mode that is based on habitual functional requirements rather than an absolute limit in capacity to access the controller specialized for the opposite limb. Copyright © 2016 the American Physiological Society.

  9. Quantifying Transmission.

    Science.gov (United States)

    Woolhouse, Mark

    2017-07-01

    Transmissibility is the defining characteristic of infectious diseases. Quantifying transmission matters for understanding infectious disease epidemiology and designing evidence-based disease control programs. Tracing individual transmission events can be achieved by epidemiological investigation coupled with pathogen typing or genome sequencing. Individual infectiousness can be estimated by measuring pathogen loads, but few studies have directly estimated the ability of infected hosts to transmit to uninfected hosts. Individuals' opportunities to transmit infection are dependent on behavioral and other risk factors relevant given the transmission route of the pathogen concerned. Transmission at the population level can be quantified through knowledge of risk factors in the population or phylogeographic analysis of pathogen sequence data. Mathematical model-based approaches require estimation of the per capita transmission rate and basic reproduction number, obtained by fitting models to case data and/or analysis of pathogen sequence data. Heterogeneities in infectiousness, contact behavior, and susceptibility can have substantial effects on the epidemiology of an infectious disease, so estimates of only mean values may be insufficient. For some pathogens, super-shedders (infected individuals who are highly infectious) and super-spreaders (individuals with more opportunities to transmit infection) may be important. Future work on quantifying transmission should involve integrated analyses of multiple data sources.
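
The model-based estimation mentioned above — fitting models to case data to obtain the per capita transmission rate and basic reproduction number — can be illustrated with one common simplification: fit an exponential growth rate to early case counts and convert it via an SIR-type approximation. A hedged sketch; the growth rate, generation time and case numbers below are invented:

```python
import numpy as np

def r0_from_early_growth(cases, dt, generation_time):
    """Crude basic-reproduction-number estimate from early epidemic case
    counts: fit the exponential growth rate r by linear regression on
    log(cases), then apply the SIR-type approximation R0 ≈ 1 + r * Tg.
    One of several model-based approaches, shown here in its simplest form."""
    t = np.arange(len(cases)) * dt
    r = np.polyfit(t, np.log(cases), 1)[0]  # per-unit-time growth rate
    return 1.0 + r * generation_time

# Synthetic early-phase incidence growing at r = 0.2/day, generation time 5 days.
cases = 5.0 * np.exp(0.2 * np.arange(20))
r0 = r0_from_early_growth(cases, dt=1.0, generation_time=5.0)
print(round(r0, 3))  # → 2.0
```

As the abstract cautions, such mean-value estimates can mislead when infectiousness and contact behavior are strongly heterogeneous (super-shedders, super-spreaders).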

  10. Permeability to macromolecular contrast media quantified by dynamic MRI correlates with tumor tissue assays of vascular endothelial growth factor (VEGF)

    International Nuclear Information System (INIS)

    Cyran, Clemens C.; Sennino, Barbara; Fu, Yanjun; Rogut, Victor; Shames, David M.; Chaopathomkul, Bundit; Wendland, Michael F.; McDonald, Donald M.; Brasch, Robert C.; Raatschen, Hans-Juergen

    2012-01-01

    Purpose: To correlate dynamic MRI assays of macromolecular endothelial permeability with microscopic area–density measurements of vascular endothelial growth factor (VEGF) in tumors. Methods and material: This study compared tumor xenografts from two different human cancer cell lines, MDA-MB-231 tumors (n = 5) and MDA-MB-435 (n = 8), reported to express respectively higher and lower levels of VEGF. Dynamic MRI was enhanced by a prototype macromolecular contrast medium (MMCM), albumin-(Gd-DTPA)35. Quantitative estimates of tumor microvascular permeability (KPS; μl/(min × 100 cm³)), obtained using a two-compartment kinetic model, were correlated with immunohistochemical measurements of VEGF in each tumor. Results: Mean KPS was 2.4 times greater in MDA-MB-231 tumors (KPS = 58 ± 30.9 μl/(min × 100 cm³)) than in MDA-MB-435 tumors (KPS = 24 ± 8.4 μl/(min × 100 cm³)) (p < 0.05). Correspondingly, the area–density of VEGF in MDA-MB-231 tumors was 2.6 times greater (27.3 ± 2.2%, p < 0.05) than in MDA-MB-435 cancers (10.5 ± 0.5%, p < 0.05). Considering all tumors without regard to cell type, a significant positive correlation (r = 0.67, p < 0.05) was observed between MRI-estimated endothelial permeability and VEGF immunoreactivity. Conclusion: Correlation of MRI assays of endothelial permeability to a MMCM and VEGF immunoreactivity of tumors supports the hypothesis that VEGF is a major contributor to increased macromolecular permeability in cancers. When applied clinically, the MMCM-enhanced MRI approach could help to optimize the appropriate application of VEGF-inhibiting therapy on an individual patient basis.
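
Two-compartment kinetic estimates of a permeability constant like KPS are often implemented as a linear (Patlak-type) fit of tissue concentration against the integrated plasma curve. The sketch below shows that generic form — not necessarily the authors' exact model — with invented concentration data:

```python
import numpy as np

def patlak_fit(t, c_plasma, c_tissue):
    """Generic linear two-compartment (Patlak-type) fit:
        C_t(t) = K * integral(C_p dt) + v_p * C_p(t)
    Returns (K, v_p). A sketch of the common linear form, not a specific
    published implementation."""
    # Trapezoidal running integral of the plasma concentration curve.
    integ = np.concatenate([[0.0],
                            np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))])
    A = np.column_stack([integ, c_plasma])
    (K, vp), *_ = np.linalg.lstsq(A, c_tissue, rcond=None)
    return K, vp

# Synthetic data with known K = 0.02 and v_p = 0.05 (arbitrary units) and a
# mono-exponentially clearing plasma curve.
t = np.linspace(0.0, 30.0, 61)
cp = np.exp(-0.05 * t)
integ = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = 0.02 * integ + 0.05 * cp
K, vp = patlak_fit(t, cp, ct)
print(round(K, 3), round(vp, 3))  # → 0.02 0.05
```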

  11. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulations study of relevant biological molecules as serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out at the fluid lamellar phase of POPC at constant pressure and temperature conditions. Two guest molecules of each type were initially placed at the water phase. We have analyzed, the main localization, preferential orientation and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interphase. We found that the interactions that stabilized the systems are essentially hydrogen bonds, salt bridges and cation-π. None of the guest molecules have access to the hydrophobic region of the bilayer. Besides, zwitterionic molecules have access to the water phase, while protonated serotonin is anchored in the interphase. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood brain barrier by diffusion. These results are in good agreement with works that show that serotonin and Trp do not cross the BBB by simple diffusion.

  12. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    Science.gov (United States)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-01

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s−1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s−1) and errors in the vertical velocity measurement exceed the actual vertical velocity
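
The DBS retrieval described above — projecting an assumed-uniform wind onto several beam directions and inverting — reduces to a small least-squares problem. A sketch under that homogeneity assumption; the beam geometry and wind values below are illustrative:

```python
import numpy as np

def beam_matrix(azimuths_deg, elevation_deg):
    """Unit vectors of each lidar beam (east, north, up components)."""
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    el = np.radians(elevation_deg)
    return np.column_stack([np.sin(az) * np.cos(el),
                            np.cos(az) * np.cos(el),
                            np.full(az.shape, np.sin(el))])

def dbs_wind(azimuths_deg, elevation_deg, v_los):
    """Retrieve (u, v, w) from Doppler-beam-swinging line-of-sight velocities,
    assuming horizontally homogeneous flow across the scanned volume — the
    very assumption that breaks down inside a turbine wake."""
    A = beam_matrix(azimuths_deg, elevation_deg)
    uvw, *_ = np.linalg.lstsq(A, np.asarray(v_los, dtype=float), rcond=None)
    return uvw

az = [0.0, 90.0, 180.0, 270.0]    # four beams at N/E/S/W azimuths
el = 62.0                         # an illustrative DBS elevation angle
true = np.array([3.0, 4.0, 0.5])  # uniform wind (u, v, w) in m/s
v_los = beam_matrix(az, el) @ true
print(dbs_wind(az, el, v_los))    # ≈ [3.  4.  0.5]
```

With a truly uniform wind the inversion is exact; the errors quantified in the study arise when the wind sampled by the spatially separated beams differs, so that no single (u, v, w) satisfies all LOS equations.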

  13. DYNAMICS INSIDE THE RADIO AND X-RAY CLUSTER CAVITIES OF CYGNUS A AND SIMILAR FRII SOURCES

    International Nuclear Information System (INIS)

    Mathews, William G.; Guo Fulai

    2012-01-01

    We describe approximate axisymmetric computations of the dynamical evolution of material inside radio lobes and X-ray cluster gas cavities in Fanaroff-Riley II (FRII) sources such as Cygnus A. All energy is delivered by a jet to the lobe/cavity via a moving hotspot where jet energy dissipates in a reverse shock. Our calculations describe the evolution of hot plasma, cosmic rays (CRs), and toroidal magnetic fields flowing from the hotspot into the cavity. Many important observational features are explained. Gas, CRs, and field flow back along the cavity surface in a 'boundary backflow' consistent with detailed FRII observations. Computed ages of backflowing CRs are consistent with observed radio-synchrotron age variations only if shear instabilities in the boundary backflow are damped and we assume this is done with viscosity of unknown origin. We compute a faint thermal jet along the symmetry axis and suggest that it is responsible for redirecting the Cygnus A nonthermal jet. Magnetic fields estimated from synchrotron self-Compton (SSC) X-radiation observed near the hotspot evolve into radio lobe fields. Computed profiles of radio-synchrotron lobe emission perpendicular to the jet reveal dramatically limb-brightened emission in excellent agreement with FRII observation, although computed lobe fields exceed those observed. Strong winds flowing from hotspots naturally create kiloparsec-sized spatial offsets between hotspot nonthermal X-ray inverse Compton (IC-CMB) emission and radio-synchrotron emission that peaks 1-2 kpc ahead where the field increases due to wind compression. In our computed version of Cygnus A, nonthermal X-ray emission increases from the hotspot (some IC-CMB, mostly SSC) toward the offset radio-synchrotron peak (mostly SSC).

  14. Quantifying Spatiotemporal Dynamics of Solar Radiation over the Northeast China Based on ACO-BPNN Model and Intensity Analysis

    Directory of Open Access Journals (Sweden)

    Xiangqian Li

    2017-01-01

    Full Text Available Reliable information on the spatiotemporal dynamics of solar radiation plays a crucial role in studies relating to global climate change. In this study, a new backpropagation neural network (BPNN) model optimized with an Ant Colony Optimization (ACO) algorithm was developed to generate the ACO-BPNN model, which demonstrated superior performance for simulating solar radiation compared to traditional BPNN modelling, for Northeast China. On this basis, we applied an intensity analysis to investigate the spatiotemporal variation of solar radiation from 1982 to 2010 over the study region at three levels: interval, category, and conversion. Research findings revealed that (1) the solar radiation resource in the study region increased from the 1980s to the 2000s, and the average annual rate of variation from the 1980s to the 1990s was lower than that from the 1990s to the 2000s; and (2) the gains and losses of solar radiation at each level were in different conditions. The poor, normal, and comparatively abundant levels were transferred to higher levels, whereas the abundant level was transferred to lower levels. We believe our findings contribute to implementing ad hoc energy management strategies to optimize the use of solar radiation resources and provide scientific suggestions for policy planning.

  15. Application of data science tools to quantify and distinguish between structures and models in molecular dynamics datasets.

    Science.gov (United States)

    Kalidindi, Surya R; Gomberg, Joshua A; Trautt, Zachary T; Becker, Chandler A

    2015-08-28

    Structure quantification is key to successful mining and extraction of core materials knowledge from both multiscale simulations as well as multiscale experiments. The main challenge stems from the need to transform the inherently high dimensional representations demanded by the rich hierarchical material structure into useful, high value, low dimensional representations. In this paper, we develop and demonstrate the merits of a data-driven approach for addressing this challenge at the atomic scale. The approach presented here is built on prior successes demonstrated for mesoscale representations of material internal structure, and involves three main steps: (i) digital representation of the material structure, (ii) extraction of a comprehensive set of structure measures using the framework of n-point spatial correlations, and (iii) identification of data-driven low dimensional measures using principal component analyses. These novel protocols, applied on an ensemble of structure datasets output from molecular dynamics (MD) simulations, have successfully classified the datasets based on several model input parameters such as the interatomic potential and the temperature used in the MD simulations.
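
Steps (i)–(iii) can be sketched as FFT-based 2-point spatial correlations followed by PCA via SVD. A minimal illustration on synthetic binary structures, not the authors' full protocol; the function names and the two-class ensemble are invented:

```python
import numpy as np

def two_point_autocorrelation(m):
    """2-point spatial auto-correlation of a (binary) microstructure,
    computed with periodic boundaries via FFT — step (ii)."""
    F = np.fft.fftn(m)
    return np.real(np.fft.ifftn(F * np.conj(F))) / m.size

def pca_scores(ensemble, n_components=2):
    """Steps (i)-(iii): digitized structures -> 2-point statistics ->
    low-dimensional scores via principal component analysis (SVD)."""
    X = np.array([two_point_autocorrelation(m).ravel() for m in ensemble])
    X -= X.mean(axis=0)                           # center the ensemble
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_components] * S[:n_components]  # PC scores per structure

rng = np.random.default_rng(0)
# Two 'classes' of synthetic 32x32 structures with different volume fractions,
# standing in for MD snapshots generated with different model parameters.
ensemble = [rng.random((32, 32)) < p for p in [0.2] * 5 + [0.5] * 5]
scores = pca_scores(ensemble)
print(scores.shape)  # → (10, 2)
```

In this toy case the first principal component cleanly separates the two classes, mirroring how the paper classifies MD datasets by interatomic potential and temperature.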

  16. Application of data science tools to quantify and distinguish between structures and models in molecular dynamics datasets

    International Nuclear Information System (INIS)

    Kalidindi, Surya R; Gomberg, Joshua A; Trautt, Zachary T; Becker, Chandler A

    2015-01-01

    Structure quantification is key to successful mining and extraction of core materials knowledge from both multiscale simulations as well as multiscale experiments. The main challenge stems from the need to transform the inherently high dimensional representations demanded by the rich hierarchical material structure into useful, high value, low dimensional representations. In this paper, we develop and demonstrate the merits of a data-driven approach for addressing this challenge at the atomic scale. The approach presented here is built on prior successes demonstrated for mesoscale representations of material internal structure, and involves three main steps: (i) digital representation of the material structure, (ii) extraction of a comprehensive set of structure measures using the framework of n-point spatial correlations, and (iii) identification of data-driven low dimensional measures using principal component analyses. These novel protocols, applied on an ensemble of structure datasets output from molecular dynamics (MD) simulations, have successfully classified the datasets based on several model input parameters such as the interatomic potential and the temperature used in the MD simulations. (paper)

  17. 87Sr/86Sr as a quantitative geochemical proxy for 14C reservoir age in dynamic, brackish waters: assessing applicability and quantifying uncertainties.

    Science.gov (United States)

    Lougheed, Bryan; van der Lubbe, Jeroen; Davies, Gareth

    2016-04-01

    Accurate geochronologies are crucial for reconstructing the sensitivity of brackish and estuarine environments to rapidly changing past external impacts. A common geochronological method used for such studies is radiocarbon (14C) dating, but its application in brackish environments is severely limited by an inability to quantify spatiotemporal variations in 14C reservoir age, or R(t), due to dynamic interplay between river runoff and marine water. Additionally, old carbon effects and species-specific behavioural processes also influence 14C ages. Using the world's largest brackish water body (the estuarine Baltic Sea) as a test-bed, combined with a comprehensive approach that objectively excludes both old carbon and species-specific effects, we demonstrate that it is possible to use 87Sr/86Sr ratios to quantify R(t) in ubiquitous mollusc shell material, leading to almost one order of magnitude increase in Baltic Sea 14C geochronological precision over the current state-of-the-art. We propose that this novel proxy method can be developed for other brackish water bodies worldwide, thereby improving geochronological control in these climate sensitive, near-coastal environments.
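
One building block of such a proxy is classical two-endmember Sr mixing: given river and marine 87Sr/86Sr ratios and Sr concentrations, a measured ratio yields the marine water fraction. A hedged sketch of that mixing algebra only; the endmember values below are illustrative, not the authors' Baltic calibration:

```python
def marine_fraction(r_mix, r_river, r_marine, sr_river, sr_marine):
    """Fraction of marine water in a two-endmember river/marine mixture,
    inferred from the 87Sr/86Sr ratio of the mixture and weighted by the Sr
    concentration of each endmember. Generic mixing sketch only."""
    num = sr_river * (r_mix - r_river)
    den = sr_marine * (r_marine - r_mix) + sr_river * (r_mix - r_river)
    return num / den

# Illustrative endmembers (made-up river values; seawater-like marine values):
r_riv, r_mar = 0.7150, 0.70918   # 87Sr/86Sr ratios
sr_riv, sr_mar = 0.12, 7.8       # Sr concentrations, mg/L

# Forward-mix at a known marine fraction f = 0.3, then invert it.
f = 0.3
r_mix = (f * sr_mar * r_mar + (1 - f) * sr_riv * r_riv) / (f * sr_mar + (1 - f) * sr_riv)
f_est = marine_fraction(r_mix, r_riv, r_mar, sr_riv, sr_mar)
print(f_est)  # → 0.3 (up to floating-point rounding)
```

Because marine water is far richer in Sr than river water, the mixture ratio is strongly weighted toward the marine endmember — the concentration weighting in the denominator is essential.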

  18. Channel Geometry and Flood Flows: Quantifying over-bank flow dynamics during high-flow events in North Carolina's floodplains

    Science.gov (United States)

    Lovette, J. P.; Duncan, J. M.; Vimal, S.; Band, L. E.

    2015-12-01

    Natural riparian areas play numerous roles in the maintenance and improvement of stream water quality. Both restoration of riparian areas and improvement of hydrologic connectivity to the stream are often key goals of river restoration projects. These management actions are designed to improve nutrient removal by slowing and treating overland flow delivered from uplands and by storing, treating, and slowly releasing streamwater from overbank inundation during flood events. A major question is how effective this storage of overbank flow is at treating streamwater based on the cumulative time stream discharge at a downstream location has spent in shallower, slower overbank flow. The North Carolina Floodplain Mapping Program maintains a detailed statewide Flood Risk Information System (FRIS) using HEC-RAS modeling, lidar, and detailed surveyed river cross-sections. FRIS provides extensive information regarding channel geometry on approximately 39,000 stream reaches (a slightly coarser spatial resolution than the NHD+v2 dataset) with tens of cross-sections for each reach. We use this FRIS data to calculate volume and discharge from floodplain riparian areas separately from in-channel flow during overbank events. Preliminary results suggest that a small percentage of total annual discharge interacts with the full floodplain extent along a stream reach due to the infrequency of overbank flow events. However, with the significantly different physical characteristics of the riparian area when compared to the channel itself, this overbank flow can provide unique services to water quality. Our project aims to use this information in conjunction with data from the USGS SPARROW program to target non-point source hotspots of Nitrogen and Phosphorus addition and removal. By better understanding the flow dynamics within riparian areas during high flow events, riparian restoration projects can be carried out with improved efficacy.

  19. Similar but Different: Dynamic Social Network Analysis Highlights Fundamental Differences between the Fission-Fusion Societies of Two Equid Species, the Onager and Grevy's Zebra.

    Directory of Open Access Journals (Sweden)

    Daniel I Rubenstein

    Full Text Available Understanding why animal societies take on the form that they do has benefited from insights gained by applying social network analysis to patterns of individual associations. Such analyses typically aggregate data over long time periods, even though most selective forces that shape sociality have strong temporal elements. By explicitly incorporating the temporal signal in social interaction data, we re-examine the network dynamics of the social systems of the evolutionarily closely related Grevy's zebras and wild asses, which show broadly similar social organizations. By identifying dynamic communities, previously hidden differences emerge: Grevy's zebras show more modularity than wild asses; in wild asses most communities consist of solitary individuals; and in Grevy's zebras, lactating females show a greater propensity to switch communities than non-lactating females and males. Both patterns were missed by static network analyses and, in general, adding a temporal dimension provides insights into differences associated with the size and persistence of communities as well as the frequency and synchrony of their formation. Dynamic network analysis provides insights into the functional significance of these social differences and highlights the way dynamic community analysis can be applied to other species.

  20. Self-similar dynamics of air film entrained by a solid disk in confined space: A simple prototype of topological transitions

    Science.gov (United States)

    Nakazato, Hana; Yamagishi, Yuki; Okumura, Ko

    2018-05-01

    In hydrodynamic topological transitions, one mass of fluid breaks into two or two merge into one. For example, in honey-drop formation when honey is dripping from a spoon, honey is extended to separate into two masses as the liquid neck bridging them thins down to the micron scale. At the moment when the topology changes due to the breakup, physical observables such as surface curvature locally diverge. Such singular dynamics has widely attracted physicists, revealing universality in self-similar dynamics, which shares much in common with critical phenomena in thermodynamics. Many experimental examples have been found, including an electric spout and vibration-induced jet eruption. However, only a few cases have been physically understood on the basis of equations that govern the singular dynamics and even in such a case the physical understanding is mathematically complicated, inevitably involving delicate numerical calculations. Here we study the breakup of air film entrained by a solid disk into viscous liquid in a confined space, which leads to formation, thinning, and breakup of the neck of air. As a result, we unexpectedly find that equations governing the neck dynamics can be solved analytically by virtue of two remarkable experimental features: Only a single length scale linearly dependent on time remains near the singularity and two universal scaling functions describing the singular neck shape and velocity field are both analytic. The present solvable case would be essential for a better understanding of the singular dynamics and will help reveal the physics of unresolved examples intimately related to daily-life phenomena and diverse practical applications.

  1. Actively heated high-resolution fiber-optic-distributed temperature sensing to quantify streambed flow dynamics in zones of strong groundwater upwelling

    Science.gov (United States)

    Briggs, Martin A.; Buckley, Sean F.; Bagtzoglou, Amvrossios C.; Werkema, Dale D.; Lane, John W.

    2016-01-01

    Zones of strong groundwater upwelling to streams enhance thermal stability and moderate thermal extremes, which is particularly important to aquatic ecosystems in a warming climate. Passive thermal tracer methods used to quantify vertical upwelling rates rely on downward conduction of surface temperature signals. However, moderate to high groundwater flux rates (>−1.5 m d−1) restrict downward propagation of diurnal temperature signals, and therefore the applicability of several passive thermal methods. Active streambed heating from within high-resolution fiber-optic temperature sensors (A-HRTS) has the potential to define multidimensional fluid-flux patterns below the extinction depth of surface thermal signals, allowing better quantification and separation of local and regional groundwater discharge. To demonstrate this concept, nine A-HRTS were emplaced vertically into the streambed in a grid with ∼0.40 m lateral spacing at a stream with strong upward vertical flux in Mashpee, Massachusetts, USA. Long-term (8–9 h) heating events were performed to confirm the dominance of vertical flow to the 0.6 m depth, well below the extinction of ambient diurnal signals. To quantify vertical flux, short-term heating events (28 min) were performed at each A-HRTS, and heat-pulse decay over vertical profiles was numerically modeled in two-dimensional (2-D) radial coordinates using SUTRA. Modeled flux values are similar to those obtained with seepage meters, Darcy methods, and analytical modeling of shallow diurnal signals. We also observed repeatable differential heating patterns along the length of vertically oriented sensors that may indicate sediment layering and hyporheic exchange superimposed on regional groundwater discharge.

  2. Artefact in Physiological Data Collected from Patients with Brain Injury: Quantifying the Problem and Providing a Solution Using a Factorial Switching Linear Dynamical Systems Approach.

    Science.gov (United States)

    Georgatzis, Konstantinos; Lal, Partha; Hawthorne, Christopher; Shaw, Martin; Piper, Ian; Tarbert, Claire; Donald, Rob; Williams, Christopher K I

    2016-01-01

    High-resolution, artefact-free and accurately annotated physiological data are desirable in patients with brain injury both to inform clinical decision-making and for intelligent analysis of the data in applications such as predictive modelling. We have quantified the quality of annotation surrounding artefactual events and propose a factorial switching linear dynamical systems (FSLDS) approach to automatically detect artefact in physiological data collected in the neurological intensive care unit (NICU). Retrospective analysis of the BrainIT data set to discover potential hypotensive events corrupted by artefact and identify the annotation of associated clinical interventions. Training of an FSLDS model on clinician-annotated artefactual events in five patients with severe traumatic brain injury. In a subset of 187 patients in the BrainIT database, 26.5 % of potential hypotensive events were abandoned because of artefactual data. Only 30 % of these episodes could be attributed to an annotated clinical intervention. As assessed by the area under the receiver operating characteristic curve metric, FSLDS model performance in automatically identifying the events of blood sampling, arterial line damping and patient handling was 0.978, 0.987 and 0.765, respectively. The influence of artefact on physiological data collected in the NICU is a significant problem. This pilot study using an FSLDS approach shows real promise and is under further development.

  3. Computational hydrodynamic comparison of a mini vessel and a USP 2 dissolution testing system to predict the dynamic operating conditions for similarity of dissolution performance.

    Science.gov (United States)

    Wang, Bing; Bredael, Gerard; Armenante, Piero M

    2018-03-25

    The hydrodynamic characteristics of a mini vessel and a USP 2 dissolution testing system were obtained and compared to predict the tablet-liquid mass transfer coefficient from velocity distributions near the tablet and establish the dynamic operating conditions under which dissolution in mini vessels could be conducted to generate concentration profiles similar to those in the USP 2. Velocity profiles were obtained experimentally using Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) was used to predict the velocity distribution and strain rate around a model tablet. A CFD-based mass transfer model was also developed. When plotted against strain rate, the predicted tablet-liquid mass transfer coefficient was found to be independent of the system where it was obtained, implying that a tablet would dissolve at the same rate in both systems provided that the concentration gradient between the tablet surface and the bulk is the same, the tablet surface area per unit liquid volume is identical, and the two systems are operated at the appropriate agitation speeds specified in this work. The results of this work will help dissolution scientists operate mini vessels so as to predict the dissolution profiles in the USP 2, especially during the early stages of drug development.

  4. Quantifying linguistic coordination

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Tylén, Kristian

    …task (Bahrami et al 2010, Fusaroli et al. 2012) we extend to linguistic coordination dynamical measures of recurrence employed in the analysis of sensorimotor coordination, such as heart-rate (Konvalinka et al 2011), postural sway (Shockley 2005) and eye-movements (Dale, Richardson and Kirkham 2012). We employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities…
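    The nominal recurrence measures referenced above operate on categorical sequences rather than continuous signals. A minimal sketch of one such index, the cross-recurrence rate between two speakers' token streams, is given below; the sequences and the exact index are illustrative assumptions, not the authors' implementation.

```python
def nominal_recurrence_rate(seq_a, seq_b):
    """Cross-recurrence rate for categorical data: the fraction of all (i, j)
    pairs at which speaker A's token equals speaker B's token."""
    matches = sum(1 for a in seq_a for b in seq_b if a == b)
    return matches / (len(seq_a) * len(seq_b))

# Toy token streams for two interlocutors in a joint decision task
a = ["uh", "left", "two", "left", "ok"]
b = ["ok", "left", "three", "left", "left"]
print(nominal_recurrence_rate(a, b))   # 7 matching pairs out of 25 -> 0.28
```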

  5. Thermo-fluid-dynamics of natural convection around a heated vertical plate with a critical assessment of the standard similarity theory

    Science.gov (United States)

    Guha, Abhijit; Nayek, Subhajit

    2017-10-01

    A compulsory element of all textbooks on natural convection has been a detailed similarity analysis for laminar natural convection on a heated semi-infinite vertical plate, and a routinely used boundary condition for such analysis is u = 0 at x = 0. The same boundary condition continues to be assumed in related theoretical analyses, even in recent publications. The present work examines the consequence of this long-held assumption, which appears never to have been questioned in the literature, on the fluid dynamics and heat transfer characteristics. The assessment has been made here by solving the Navier-Stokes equations numerically with two boundary conditions: one with constrained velocity at x = 0 to mimic the similarity analysis, and the other with no such constraints, simulating the case of a heated vertical plate in an infinite expanse of the quiescent fluid medium. It is found that the fluid flow field given by the similarity theory is drastically different from that given by the computational fluid dynamics (CFD) simulations with unconstrained velocity. This also reflects on the Nusselt number, the prediction of the CFD simulations with unconstrained velocity being quite close to the experimentally measured values at all Grashof and Prandtl numbers (this is the first time theoretically computed values of the average Nusselt number Nu̅ are found to be so close to the experimental values). The difference (ΔNu̅) between the Nusselt number predicted by the similarity theory and that given by the CFD simulations (as well as the measured values), both computed with a high degree of precision, can be very significant, particularly at low Grashof numbers and at Prandtl numbers far removed from unity. Computations show that within the range of investigation (10⁴ ≤ GrL ≤ 10⁸, 0.01 ≤ Pr ≤ 100), the maximum value of ΔNu̅ may be of the order of 50%. Thus, for quantitative predictions, the available theory (i.e., similarity analysis) can be rather inadequate. With…

  6. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  7. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  8. Quantifying the Adaptive Cycle.

    Directory of Open Access Journals (Sweden)

    David G Angeler

    Full Text Available The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  9. DMPD: Are the IKKs and IKK-related kinases TBK1 and IKK-epsilon similarly activated? [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 18353649 Are the IKKs and IKK-related kinases TBK1 and IKK-epsilon similarly activated? PubmedID 18353649 Title Are the IKKs and IKK-related kinases TBK1 and IKK-epsilon similarly activated? Authors Chau…

  10. Self-similar cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Chao, W Z [Cambridge Univ. (UK). Dept. of Applied Mathematics and Theoretical Physics

    1981-07-01

    The kinematics and dynamics of self-similar cosmological models are discussed. The degrees of freedom of the solutions of Einstein's equations for different types of models are listed. The relation between kinematic quantities and the classifications of the self-similarity group is examined. All dust local rotational symmetry models have been found.

  11. From Recombination Dynamics to Device Performance: Quantifying the Efficiency of Exciton Dissociation, Charge Separation, and Extraction in Bulk Heterojunction Solar Cells with Fluorine-Substituted Polymer Donors

    KAUST Repository

    Gorenflot, Julien

    2017-09-28

    An original set of experimental and modeling tools is used to quantify the yield of each of the physical processes leading to photocurrent generation in organic bulk heterojunction solar cells, enabling evaluation of materials and processing conditions beyond the trivial comparison of device performances. Transient absorption spectroscopy, “the” technique to monitor all intermediate states over the entire relevant timescale, is combined with time-delayed collection field experiments, transfer matrix simulations, spectral deconvolution, and parametrization of the charge carrier recombination by a two-pool model, allowing quantification of densities of excitons and charges and extrapolation of their kinetics to device-relevant conditions. Photon absorption, charge transfer, charge separation, and charge extraction are all quantified for two recently developed wide-bandgap donor polymers: poly(4,8-bis((2-ethylhexyl)oxy)benzo[1,2-b:4,5-b′]dithiophene-3,4-difluorothiophene) (PBDT[2F]T) and its nonfluorinated counterpart poly(4,8-bis((2-ethylhexyl)oxy)benzo[1,2-b:4,5-b′]dithiophene-3,4-thiophene) (PBDT[2H]T) combined with PC71BM in bulk heterojunctions. The product of these yields is shown to agree well with the devices' external quantum efficiency. This methodology elucidates in the specific case studied here the origin of improved photocurrents obtained when using PBDT[2F]T instead of PBDT[2H]T as well as upon using solvent additives. Furthermore, a higher charge transfer (CT)-state energy is shown to lead to significantly lower energy losses (resulting in higher VOC) during charge generation compared to P3HT:PCBM.

  12. From Recombination Dynamics to Device Performance: Quantifying the Efficiency of Exciton Dissociation, Charge Separation, and Extraction in Bulk Heterojunction Solar Cells with Fluorine-Substituted Polymer Donors

    KAUST Repository

    Gorenflot, Julien; Paulke, Andreas; Piersimoni, Fortunato; Wolf, Jannic Sebastian; Kan, Zhipeng; Cruciani, Federico; El Labban, Abdulrahman; Neher, Dieter; Beaujuge, Pierre; Laquai, Frédéric

    2017-01-01

    An original set of experimental and modeling tools is used to quantify the yield of each of the physical processes leading to photocurrent generation in organic bulk heterojunction solar cells, enabling evaluation of materials and processing conditions beyond the trivial comparison of device performances. Transient absorption spectroscopy, “the” technique to monitor all intermediate states over the entire relevant timescale, is combined with time-delayed collection field experiments, transfer matrix simulations, spectral deconvolution, and parametrization of the charge carrier recombination by a two-pool model, allowing quantification of densities of excitons and charges and extrapolation of their kinetics to device-relevant conditions. Photon absorption, charge transfer, charge separation, and charge extraction are all quantified for two recently developed wide-bandgap donor polymers: poly(4,8-bis((2-ethylhexyl)oxy)benzo[1,2-b:4,5-b′]dithiophene-3,4-difluorothiophene) (PBDT[2F]T) and its nonfluorinated counterpart poly(4,8-bis((2-ethylhexyl)oxy)benzo[1,2-b:4,5-b′]dithiophene-3,4-thiophene) (PBDT[2H]T) combined with PC71BM in bulk heterojunctions. The product of these yields is shown to agree well with the devices' external quantum efficiency. This methodology elucidates in the specific case studied here the origin of improved photocurrents obtained when using PBDT[2F]T instead of PBDT[2H]T as well as upon using solvent additives. Furthermore, a higher charge transfer (CT)-state energy is shown to lead to significantly lower energy losses (resulting in higher VOC) during charge generation compared to P3HT:PCBM.

  13. Quantifying the uncertainty in heritability.

    Science.gov (United States)

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  14. Quantifying the Spatio-Temporal Dynamics of Rural Settlements and the Associated Impacts on Land Use in an Undeveloped Area of China

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2018-05-01

    Full Text Available Rapid urbanization and economic growth in China have accelerated changes in rural settlements and associated land-use types that are expected to alter ecological services and the environment. Relevant studies of the dynamics of rural settlements and corresponding rural land-use changes are in short supply, however, especially in undeveloped areas in China. This study, therefore, investigated the spatio-temporal dynamics of rural settlements and their impacts on other land-use types by using 30 m rural settlement status and dynamic maps from the end of the 1980s to 2010. These maps were generated by visual interpretation with strict product quality control and accuracy. Henan province was selected as a case study of undeveloped regions in China. We examined in particular how the expansion of rural settlements affected cultivated lands and the processes of rural settlement urbanization. This study looked at three periods: the end of the 1980s–2000, 2000–2010, and the end of the 1980s–2010, with two spatial scales of province and prefecture city. Major findings about the rural settlements in Henan from the end of the 1980s to 2010 include: (1) the area of rural settlements grew continuously, although the increasing trend slowed; (2) the expansion of rural settlements showed a negative trend contrary to the trend of the urbanization of rural settlements; (3) rural settlement expansion occupied considerable expanse of cultivated lands, which accounted for up to 96% of the total expansion lands; (4) urbanization of rural settlements was the main mode by which rural residential lands vanished, accounting for more than 98% of the lost lands. This study can provide suggestions for the conservation and sustainability of the rural environment and inform reasonable policies on rural development.

  15. A Probabilistic Analysis to Quantify the Effect of March 11, 2004, Attacks in Madrid on the March 14 Elections in Spain: A Dynamic Modelling Approach

    Directory of Open Access Journals (Sweden)

    Juan-Carlos Cortés

    2015-01-01

    Full Text Available The bomb attacks in Madrid three days before the general elections of March 14, 2004, and their possible influence on the victory of PSOE (Spanish Workers Socialist Party) over PP (Popular Party), have been a matter of study from several points of view (i.e., sociological, political, or statistical). In this paper, we present a dynamic model based on a system of differential equations that, using data from the Spanish CIS (National Center of Sociological Research), describes the evolution of the voting intention of the Spanish people over time. Using this model, we conclude that the probability that the PSOE would have won had the attack not happened is very low. Moreover, after the attack, the PSOE gained an average of 5.6% of the vote on March 14, and an average of 11.2% of the Spanish people changed their vote between March 11 and March 14. These figures are in accordance with other studies.
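    The record describes a system of differential equations for voting intention. A heavily simplified sketch of that modelling idea is shown below, with made-up transfer rates and initial shares; the actual model is fitted to CIS survey data and is not reproduced here.

```python
def simulate(days, p0, rates, dt=0.01):
    """Integrate a toy 3-compartment voting-intention model with forward Euler."""
    psoe, pp, und = p0
    k_up, k_upp, k_pu = rates  # undecided->PSOE, undecided->PP, PSOE->undecided (per day)
    for _ in range(int(days / dt)):
        d_psoe = k_up * und - k_pu * psoe
        d_pp = k_upp * und
        d_und = -(k_up + k_upp) * und + k_pu * psoe
        psoe, pp, und = psoe + d_psoe * dt, pp + d_pp * dt, und + d_und * dt
    return psoe, pp, und

# Made-up initial shares (PSOE, PP, undecided) and transfer rates
shares = simulate(days=30, p0=(0.35, 0.40, 0.25), rates=(0.02, 0.01, 0.005))
print(shares)   # the three shares still sum to 1, up to rounding
```

Because the three derivatives sum to zero, total voting intention is conserved by construction, which is a basic sanity check for any compartment model of this kind.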

  16. Quantifying Parameter and Structural Uncertainty of Dynamic Disease Transmission Models Using MCMC: An Application to Rotavirus Vaccination in England and Wales.

    Science.gov (United States)

    Bilcke, Joke; Chapman, Ruth; Atchison, Christina; Cromer, Deborah; Johnson, Helen; Willem, Lander; Cox, Martin; Edmunds, William John; Jit, Mark

    2015-07-01

    Two vaccines (Rotarix and RotaTeq) are highly effective at preventing severe rotavirus disease. Rotavirus vaccination has been introduced in the United Kingdom and other countries partly based on modeling and cost-effectiveness results. However, most of these models fail to account for the uncertainty about several vaccine characteristics and the mechanism of vaccine action. A deterministic dynamic transmission model of rotavirus vaccination in the United Kingdom was developed. This improves on previous models by 1) allowing for 2 different mechanisms of action for Rotarix and RotaTeq, 2) using clinical trial data to understand these mechanisms, and 3) accounting for uncertainty by using Markov Chain Monte Carlo. In the long run, Rotarix and RotaTeq are predicted to reduce the overall rotavirus incidence by 50% (39%-63%) and 44% (30%-62%), respectively, but with an increase in incidence in primary school children and adults up to 25 y of age. The vaccines are estimated to give more protection than 1 or 2 natural infections. The duration of protection is highly uncertain, but it affects the predicted reduction in rotavirus burden only for values lower than 10 y. The 2 vaccine mechanism structures fit the clinical trial data equally well. Long-term postvaccination dynamics cannot be predicted reliably with the data available. Accounting for the joint uncertainty of several vaccine characteristics resulted in more insight into which of these are crucial for determining the impact of rotavirus vaccination. Data for up to at least 10 y postvaccination and covering older children and adults are crucial to address remaining questions on the impact of widespread rotavirus vaccination.
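    The parameter uncertainty in this study is quantified with Markov Chain Monte Carlo. As a generic illustration of the sampling idea only (a toy one-parameter posterior, nothing like the transmission model above), a minimal Metropolis sampler looks like this:

```python
import math
import random

random.seed(1)

def log_post(p):
    """Log-posterior of a detection probability p: binomial likelihood
    (7 successes in 10 trials) with a flat prior on (0, 1)."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return 7.0 * math.log(p) + 3.0 * math.log(1.0 - p)

p, samples = 0.5, []
for _ in range(20000):
    prop = p + random.gauss(0.0, 0.1)  # random-walk proposal
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(p))):
        p = prop                       # Metropolis accept
    samples.append(p)

kept = samples[2000:]                  # discard warm-up draws
print(sum(kept) / len(kept))           # close to the exact Beta(8, 4) posterior mean, 2/3
```

The same accept/reject logic scales up to the joint parameter-and-structure uncertainty the paper explores; only the posterior being evaluated changes.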

  17. Quantifiers for quantum logic

    OpenAIRE

    Heunen, Chris

    2008-01-01

    We consider categorical logic on the category of Hilbert spaces. More generally, in fact, any pre-Hilbert category suffices. We characterise closed subobjects, and prove that they form orthomodular lattices. This shows that quantum logic is just an incarnation of categorical logic, enabling us to establish an existential quantifier for quantum logic, and conclude that there cannot be a universal quantifier.

  18. Submersible UV-Vis spectroscopy for quantifying streamwater organic carbon dynamics: implementation and challenges before and after forest harvest in a headwater stream.

    Science.gov (United States)

    Jollymore, Ashlee; Johnson, Mark S; Hawthorne, Iain

    2012-01-01

    Organic material, including total and dissolved organic carbon (DOC), is ubiquitous within aquatic ecosystems, playing a variety of important and diverse biogeochemical and ecological roles. Determining how land-use changes affect DOC concentrations and bioavailability within aquatic ecosystems is an important means of evaluating the effects on ecological productivity and biogeochemical cycling. This paper presents a methodology case study looking at the deployment of a submersible UV-Vis absorbance spectrophotometer (UV-Vis spectro::lyzer model, s::can, Vienna, Austria) to determine stream organic carbon dynamics within a headwater catchment located near Campbell River (British Columbia, Canada). Field-based absorbance measurements of DOC were made before and after forest harvest, highlighting the advantages of high temporal resolution compared to traditional grab sampling and laboratory measurements. Details of remote deployment are described. High-frequency DOC data is explored by resampling the 30 min time series with a range of resampling time intervals (from daily to weekly time steps). DOC export was calculated for three months from the post-harvest data and resampled time series, showing that sampling frequency has a profound effect on total DOC export. DOC exports derived from weekly measurements were found to underestimate export by as much as 30% compared to DOC export calculated from high-frequency data. Additionally, the importance of the ability to remotely monitor the system through a recently deployed wireless connection is emphasized by examining causes of prior data losses, and how such losses may be prevented through the ability to react when environmental or power disturbances cause system interruption and data loss.
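    The resampling experiment described above is easy to reproduce in outline: compute an export total from a dense series, then from a sparse subsample of the same series, and compare. The synthetic 30-minute DOC series below, with short concentration pulses that weekly sampling mostly misses, is an assumption for illustration only, not the Campbell River data.

```python
dt_h = 0.5    # 30-minute sampling interval, in hours
n = 90 * 48   # three months of 30-min records
# Synthetic DOC concentration (mg/L): a 2 mg/L baseline plus day-long 20 mg/L
# pulses every ten days, timed so that weekly sampling misses almost all of them
doc = [2.0 + (20.0 if (i // 48) % 10 == 3 else 0.0) for i in range(n)]
flow = [5.0] * n  # constant discharge (L/s), for simplicity

def export(conc, q, step):
    """Total export (mg) treating every `step`-th sample as representative of its span."""
    total = 0.0
    for i in range(0, len(conc), step):
        span = min(step, len(conc) - i)  # records covered by this reading
        total += conc[i] * q[i] * span * dt_h * 3600.0
    return total

full = export(doc, flow, 1)
weekly = export(doc, flow, 7 * 48)  # one reading per week
print(weekly / full)                # weekly sampling underestimates export (~11% here)
```

The size and even the sign of the error depend on how the sparse samples happen to align with concentration events, which is exactly why the study's high-frequency record matters for export estimates.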

  19. Submersible UV-Vis Spectroscopy for Quantifying Streamwater Organic Carbon Dynamics: Implementation and Challenges before and after Forest Harvest in a Headwater Stream

    Directory of Open Access Journals (Sweden)

    Iain Hawthorne

    2012-03-01

    Full Text Available Organic material, including total and dissolved organic carbon (DOC), is ubiquitous within aquatic ecosystems, playing a variety of important and diverse biogeochemical and ecological roles. Determining how land-use changes affect DOC concentrations and bioavailability within aquatic ecosystems is an important means of evaluating the effects on ecological productivity and biogeochemical cycling. This paper presents a methodology case study looking at the deployment of a submersible UV-Vis absorbance spectrophotometer (UV-Vis spectro::lyzer model, s::can, Vienna, Austria) to determine stream organic carbon dynamics within a headwater catchment located near Campbell River (British Columbia, Canada). Field-based absorbance measurements of DOC were made before and after forest harvest, highlighting the advantages of high temporal resolution compared to traditional grab sampling and laboratory measurements. Details of remote deployment are described. High-frequency DOC data is explored by resampling the 30 min time series with a range of resampling time intervals (from daily to weekly time steps). DOC export was calculated for three months from the post-harvest data and resampled time series, showing that sampling frequency has a profound effect on total DOC export. DOC exports derived from weekly measurements were found to underestimate export by as much as 30% compared to DOC export calculated from high-frequency data. Additionally, the importance of the ability to remotely monitor the system through a recently deployed wireless connection is emphasized by examining causes of prior data losses, and how such losses may be prevented through the ability to react when environmental or power disturbances cause system interruption and data loss.

  20. Quantifying Potential Groundwater Recharge In South Texas

    Science.gov (United States)

    Basant, S.; Zhou, Y.; Leite, P. A.; Wilcox, B. P.

    2015-12-01

    Groundwater in South Texas is heavily relied on for human consumption and irrigation of food crops. As in most of the southwestern US, woody encroachment has altered the grassland ecosystems here too. While brush removal has been widely implemented in Texas with the objective of increasing groundwater recharge, the linkage between vegetation and groundwater recharge in South Texas is still unclear. Studies have been conducted to understand plant-root-water dynamics at the scale of individual plants. However, little work has been done to quantify the changes in soil water and deep percolation at the landscape scale. Modeling water flow through soil profiles can provide an estimate of the total water flowing into deep percolation. Such models are especially powerful when parameterized and calibrated with long-term soil water data. In this study we parameterize the HYDRUS soil water model using long-term soil water data collected in Jim Wells County in South Texas. Soil water was measured at 20 cm intervals up to a depth of 200 cm. The parameterized model will be used to simulate soil water dynamics under a variety of precipitation regimes ranging from well above normal to severe drought conditions. The results from the model will be compared with the changes in the soil moisture profile observed in response to vegetation cover and treatments from a study in a similar setting. Comparative studies like this can be used to build new and strengthen existing hypotheses regarding deep percolation and the role of soil texture and vegetation in groundwater recharge.

  1. Connected Car: Quantified Self becomes Quantified Car

    Directory of Open Access Journals (Sweden)

    Melanie Swan

    2015-02-01

    Full Text Available The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals, like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics, like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.

  2. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the earlier obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.

  3. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound on the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  4. Quantifying and modeling soil structure dynamics

    Science.gov (United States)

    Characterization of soil structure has been a topic of scientific discussions ever since soil structure has been recognized as an important factor affecting soil physical, mechanical, chemical, and biological processes. Beyond semi-quantitative soil morphology classes, it is a challenge to describe ...

  5. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with that of other similarity functions on several data sets. Our results show that the proposed functions work better than other methods proposed in the literature.
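
The excerpt does not give the exact WFD/PWFD formulas, so the following is only a hypothetical sketch of the prioritization idea: a per-feature weighted distance in which larger weights mark higher-priority features. The function name and weighting scheme below are illustrative assumptions, not the paper's definitions.

```python
import math

def pwfd(x, y, weights):
    """Hypothetical sketch of a prioritized weighted feature distance:
    a weighted Euclidean distance where larger weights mark
    higher-priority features (NOT the paper's actual PWFD)."""
    assert len(x) == len(y) == len(weights)
    total = sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y))
    return math.sqrt(total / sum(weights))

# Doubling the weight of feature 0 makes disagreement there count more:
print(pwfd([1.0, 0.0], [0.0, 0.0], [2.0, 1.0]))  # ~0.816
print(pwfd([1.0, 0.0], [0.0, 0.0], [1.0, 1.0]))  # ~0.707
```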

  6. Phoneme Similarity and Confusability

    Science.gov (United States)

    Bailey, T.M.; Hahn, U.

    2005-01-01

    Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…

  7. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by


  9. Quantifying requirements volatility effects

    NARCIS (Netherlands)

    Kulk, G.P.; Verhoef, C.

    2008-01-01

    In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects comprising 16,500 function points in total, each project varying in size and duration, for which we were able to quantify its requirements volatility. This representative portfolio

  10. The quantified relationship

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.

    2018-01-01

    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this

  11. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  12. A new similarity index for nonlinear signal analysis based on local extrema patterns

    Science.gov (United States)

    Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher

    2018-02-01

    Common similarity measures of time domain signals such as cross-correlation and Symbolic Aggregate approximation (SAX) are not appropriate for nonlinear signal analysis. This is because of the high sensitivity of nonlinear systems to initial points. Therefore, a similarity measure for nonlinear signal analysis must be invariant to initial points and quantify the similarity by considering the main dynamics of signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding time information of local extrema and fuzzifying the quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis that extends the SBLE method. These new features provide more information about the signals, and the fuzzification reduces noise sensitivity. A number of practical tests on synthetic data demonstrate the ability of the method in nonlinear signal clustering and classification. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index to demonstrate the method's potential as a real-world application tool.
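
A minimal sketch of the pipeline the abstract describes: extract local extrema, quantize their amplitudes, tabulate patterns of consecutive extrema, and compare the resulting pattern distributions. The published SBLE/fuzzified index uses its own quantization and statistics; the helper functions and parameters below are illustrative assumptions.

```python
import math
from collections import Counter

def local_extrema(sig):
    """Indices of strict local maxima and minima of a 1-D signal."""
    return [i for i in range(1, len(sig) - 1)
            if (sig[i] - sig[i - 1]) * (sig[i] - sig[i + 1]) > 0]

def extrema_patterns(sig, n_levels=4, order=2):
    """Quantize extremum amplitudes into n_levels bins and tabulate the
    relative frequency of each pattern of `order` consecutive extrema."""
    ext = [sig[i] for i in local_extrema(sig)]
    lo, hi = min(ext), max(ext)
    q = [min(int((v - lo) / (hi - lo + 1e-12) * n_levels), n_levels - 1)
         for v in ext]
    pats = Counter(tuple(q[i:i + order]) for i in range(len(q) - order + 1))
    total = sum(pats.values())
    return {k: v / total for k, v in pats.items()}

def similarity(p1, p2):
    """Histogram intersection of two pattern distributions (1 = identical)."""
    return sum(min(p1.get(k, 0.0), p2.get(k, 0.0)) for k in set(p1) | set(p2))

# Two realizations of the same dynamics, started at different phases;
# a pattern-based index should rate them as similar even though a direct
# sample-by-sample comparison would not.
a = [math.sin(0.3 * t) + 0.1 * math.sin(2.1 * t) for t in range(400)]
b = [math.sin(0.3 * t + 1.0) + 0.1 * math.sin(2.1 * t) for t in range(400)]
print(similarity(extrema_patterns(a), extrema_patterns(b)))
```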

  13. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
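
For the set-based (fingerprint) representations discussed above, the most widely used coefficient mapping a pair of representations onto the unit interval is the Tanimoto (Jaccard) index; a minimal sketch with toy fingerprints:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) coefficient on set representations:
    |A & B| / |A | B|, lying on the unit interval."""
    if not a and not b:
        return 1.0  # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)

# Toy "fingerprints": indices of the bits set for two molecules.
mol1 = {1, 4, 7, 9, 12}
mol2 = {1, 4, 8, 9, 13}
print(tanimoto(mol1, mol2))  # 3 common / 7 total = 3/7
```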

  14. Quantifying light pollution

    International Nuclear Information System (INIS)

    Cinzano, P.; Falchi, F.

    2014-01-01

    In this paper we review new available indicators useful to quantify and monitor light pollution, defined as the alteration of the natural quantity of light in the night environment due to the introduction of manmade light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of the night sky brightness, which is an observational effect integrated along the line of sight and thus lacking the three-dimensional information. - Highlights: • We review new available indicators useful to quantify and monitor light pollution. • These indicators are a primary step in light pollution quantification. • These indicators allow light pollution mapping to be improved from a 2D to a 3D grid. • These indicators allow a tomography of light pollution to be carried out. • We show an application of this technique to an Italian region.

  15. Quantifying Quantum-Mechanical Processes.

    Science.gov (United States)

    Hsieh, Jen-Hsiang; Chen, Shih-Hsuan; Li, Che-Ming

    2017-10-19

    The act of describing how a physical process changes a system is the basis for understanding observed phenomena. For quantum-mechanical processes in particular, the effect of processes on quantum states profoundly advances our knowledge of the natural world, from understanding counter-intuitive concepts to the development of wholly quantum-mechanical technology. Here, we show that quantum-mechanical processes can be quantified using a generic classical-process model through which any classical strategies of mimicry can be ruled out. We demonstrate the success of this formalism using fundamental processes postulated in quantum mechanics, the dynamics of open quantum systems, quantum-information processing, the fusion of entangled photon pairs, and the energy transfer in a photosynthetic pigment-protein complex. Since our framework does not depend on any specifics of the states being processed, it reveals a new class of correlations in the hierarchy between entanglement and Einstein-Podolsky-Rosen steering and paves the way for the elaboration of a generic method for quantifying physical processes.

  16. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available The topic of identifying the similarity of graphs is considered a highly recommended research field in the semantic Web, artificial intelligence, shape recognition and information retrieval. One of the fundamental problems of graph databases is finding graphs similar to a query graph. Existing approaches dealing with this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not identified as being part of the similarity of two graphs in cases like two graphs without common concepts, the measure of similarity based on the union of two graphs, or the one based on the notion of maximum common sub-graph (SCM), or the graph edit distance. This leads to an inadequate situation in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We have shown that this new measure satisfies the properties of a similarity measure, and we have applied it to examples. The results show that our measure achieves a gain in run time compared to existing approaches. In addition, we compared the relevance of the similarity values obtained; it appears that this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.
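
The underlying Wu and Palmer measure scores two taxonomy nodes by the depth of their deepest common ancestor. A minimal sketch on a toy taxonomy (the taxonomy and helper names are illustrative, not the paper's graph construction):

```python
def depth(node, parent):
    """Depth of a node counted from the taxonomy root (root has depth 1)."""
    d = 1
    while node in parent:
        node = parent[node]
        d += 1
    return d

def ancestors(node, parent):
    """Node itself plus its chain of ancestors, ordered deepest first."""
    out = [node]
    while node in parent:
        node = parent[node]
        out.append(node)
    return out

def wu_palmer(a, b, parent):
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(a) + depth(b))."""
    anc_b = set(ancestors(b, parent))
    lcs = next(n for n in ancestors(a, parent) if n in anc_b)
    return 2 * depth(lcs, parent) / (depth(a, parent) + depth(b, parent))

# Tiny taxonomy given as a child -> parent map; the root is "entity".
parent = {"animal": "entity", "plant": "entity",
          "dog": "animal", "cat": "animal", "oak": "plant"}
print(wu_palmer("dog", "cat", parent))  # LCS is "animal": 2*2/(3+3) = 2/3
print(wu_palmer("dog", "oak", parent))  # LCS is "entity": 2*1/(3+3) = 1/3
```

Concepts that share a deep common ancestor ("dog"/"cat") score higher than concepts whose only common ancestor is the root ("dog"/"oak").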

  17. Processes of Similarity Judgment

    Science.gov (United States)

    Larkey, Levi B.; Markman, Arthur B.

    2005-01-01

    Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving, and reasoning. Although recent approaches to similarity appreciate the structure of mental representations, they differ in the processes posited to operate over these representations. We present an experiment that…

  18. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the

  19. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.

  20. Gender similarities and differences.

    Science.gov (United States)

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.

  1. Quantify the complexity of turbulence

    Science.gov (United States)

    Tao, Xingtian; Wu, Huixuan

    2017-11-01

    Many researchers have used Reynolds stress, power spectrum and Shannon entropy to characterize a turbulent flow, but few of them have measured the complexity of turbulence. Yet as this study shows, conventional turbulence statistics and Shannon entropy have limits when quantifying the flow complexity. Thus, it is necessary to introduce new complexity measures, such as topology complexity and excess information, to describe turbulence. Our test flow is a classic turbulent cylinder wake at Reynolds number 8100. Along the stream-wise direction, the flow becomes more isotropic and the magnitudes of normal Reynolds stresses decrease monotonically. These seem to indicate that the flow dynamics becomes simpler downstream. However, the Shannon entropy keeps increasing along the flow direction and the dynamics seems to be more complex, because the large-scale vortices cascade to small eddies, and the flow is less correlated and more unpredictable. In fact, these two contradictory observations each describe only part of the complexity of a turbulent wake. Our measurements (up to 40 diameters downstream of the cylinder) show that the flow's degree of complexity actually increases at first and then becomes constant (or drops slightly) along the stream-wise direction. University of Kansas General Research Fund.
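
A minimal sketch of the Shannon-entropy estimate that the study contrasts with Reynolds stresses, here as a simple histogram estimator applied to synthetic samples (the flow data and binning choices in the paper are of course different):

```python
import math
import random

def shannon_entropy(samples, n_bins=32):
    """Histogram estimate of Shannon entropy H = -sum p * log2(p), in bits."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0      # guard against a constant signal
    counts = [0] * n_bins
    for v in samples:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random.seed(0)
narrow = [random.gauss(0.0, 0.1) for _ in range(10000)]    # concentrated
broad = [random.uniform(-1.0, 1.0) for _ in range(10000)]  # spread out
# Binned over its own range, the uniform signal is the least predictable,
# so its histogram entropy exceeds the Gaussian's.
print(shannon_entropy(narrow), shannon_entropy(broad))
```

Note the limitation the abstract points at: this measure sees only the amplitude distribution, not the spatial or topological organization of the flow.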

  2. Quantifying the transient carbon dynamics of ecosystem scale carbon cycle responses to piñon pine mortality using a large-scale experimental manipulation, remote sensing and model-data fusion

    Science.gov (United States)

    Litvak, M. E.; Hilton, T. W.; Krofcheck, D. J.; Fox, A. M.; Robinson, E.; McDowell, N. G.; Rahn, T.; Sinsabaugh, R.

    2012-12-01

    The southwestern United States experienced an extended drought from 1999-2002 which led to widespread coniferous tree mortality throughout New Mexico, Arizona, Utah and Colorado. Piñon-juniper (PJ) woodlands, which occupy 24 million ha throughout the Southwest, proved to be extremely vulnerable to this drought, experiencing 40 to 95% mortality of piñon pine (Pinus edulis) and 2-25% mortality of juniper (Juniperus monosperma) in less than 3 years (Breshears et al., 2005). Understanding the response trajectories of these woodlands is crucial given that climate projections for the region suggest that episodic droughts, such as the one associated with this recent conifer mortality, are likely to increase in frequency and severity and to expand northward. We are using a combination of eddy covariance, soil respiration, sap flow and biomass carbon pool measurements made at (i) an undisturbed PJ woodland (control) in central New Mexico and (ii) a manipulation site within 2 miles of the control, where all piñon trees greater than 7 cm in diameter at breast height within the 4 ha flux footprint were girdled (decreasing LAI by ~1/3), to quantify the response of ecosystem carbon and water dynamics in PJ woodlands to widespread piñon mortality. As expected, piñon mortality triggered an abrupt shift in carbon stocks from productive biomass to detritus, leading to a 25% decrease in gross primary production and a >50% decrease in net ecosystem production in the two years following mortality. Because litter and coarse woody debris are slow to decompose in these semiarid environments, ecosystem respiration initially decreased following mortality, and only increased two years post mortality following a large monsoon precipitation event. In the three years following mortality, reduced competition for water in these water-limited ecosystems and increased light availability have triggered compensatory growth in understory vegetation observed in both remote sensing and ground

  3. Similarity or difference?

    DEFF Research Database (Denmark)

    Villadsen, Anders Ryom

    2013-01-01

    While the organizational structures and strategies of public organizations have attracted substantial research attention among public management scholars, little research has explored how these organizational core dimensions are interconnected and influenced by pressures for similarity. In this paper I address this topic by exploring the relation between expenditure strategy isomorphism and structure isomorphism in Danish municipalities. Different literatures suggest that organizations exist in concurrent pressures for being similar to and different from other organizations in their field... -shaped relation exists between expenditure strategy isomorphism and structure isomorphism in a longitudinal quantitative study of Danish municipalities.

  4. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria – economic indices characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with the account of economic environment resistance, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. With the theory of dimensions we show that the obtained indices are not accidental. We demonstrate the implementation of the obtained dimensionless criteria for the analysis of the behavior of certain countries' economies. Scientific novelty: dimensionless criteria are obtained – economic indices which allow comparing economies regardless of their sizes and analyzing the dynamic changes in the economies over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  5. Quantifying global exergy resources

    International Nuclear Information System (INIS)

    Hermann, Weston A.

    2006-01-01

    Exergy is used as a common currency to assess and compare the reservoirs of theoretically extractable work we call energy resources. Resources consist of matter or energy with properties different from the predominant conditions in the environment. These differences can be classified as physical, chemical, or nuclear exergy. This paper identifies the primary exergy reservoirs that supply exergy to the biosphere and quantifies the intensive and extensive exergy of their derivative secondary reservoirs, or resources. The interconnecting accumulations and flows among these reservoirs are illustrated to show the path of exergy through the terrestrial system from input to its eventual natural or anthropogenic destruction. The results are intended to assist in evaluation of current resource utilization, help guide fundamental research to enable promising new energy technologies, and provide a basis for comparing the resource potential of future energy options that is independent of technology and cost

  6. Comparing Harmonic Similarity Measures

    NARCIS (Netherlands)

    de Haas, W.B.; Robine, M.; Hanna, P.; Veltkamp, R.C.; Wiering, F.

    2010-01-01

    We present an overview of the most recent developments in polyphonic music retrieval and an experiment in which we compare two harmonic similarity measures. In contrast to earlier work, in this paper we specifically focus on the symbolic chord description as the primary musical representation and

  7. Quantifying hidden individual heterogeneity

    DEFF Research Database (Denmark)

    Steiner, Ulrich; Lenart, Adam; Vaupel, James W.

    Aging is assumed to be driven by the accumulation of damage or some other aging factor which shapes demographic patterns, including the classical late-age mortality plateaus. However, to date, heterogeneity in these damage stages has not been observed. Here, we estimate underlying stage distributions and stage dynamics based on observed survival patterns of isoclonal bacteria. Our results reveal demographic dynamics dominated by low damage stages, and transmission of damage from mother to daughters is low. Still, our models are too simplistic and deterministic. Explaining the observed data requires more stochastic processes than our current models include. We are only at the beginning of understanding the diverse mechanisms behind aging and the shaping of senescence.

  8. Quantifying Anthropogenic Dust Emissions

    Science.gov (United States)

    Webb, Nicholas P.; Pierre, Caroline

    2018-02-01

    Anthropogenic land use and land cover change, including local environmental disturbances, moderate rates of wind-driven soil erosion and dust emission. These human-dust cycle interactions impact ecosystems and agricultural production, air quality, human health, biogeochemical cycles, and climate. While the impacts of land use activities and land management on aeolian processes can be profound, the interactions are often complex and assessments of anthropogenic dust loads at all scales remain highly uncertain. Here, we critically review the drivers of anthropogenic dust emission and current evaluation approaches. We then identify and describe opportunities to: (1) develop new conceptual frameworks and interdisciplinary approaches that draw on ecological state-and-transition models to improve the accuracy and relevance of assessments of anthropogenic dust emissions; (2) improve model fidelity and capacity for change detection to quantify anthropogenic impacts on aeolian processes; and (3) enhance field research and monitoring networks to support dust model applications to evaluate the impacts of disturbance processes on local to global-scale wind erosion and dust emissions.

  9. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

    Full Text Available Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight), and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
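
One of the tree metrics mentioned, the Strahler order underlying the bifurcation ratios, is easy to sketch on the binary trees produced by the decomposition. The nested-tuple tree representation below is ours, for illustration only:

```python
def strahler(t):
    """Strahler order of a binary tree given as nested 2-tuples, with None
    for a leaf: leaves get order 1; an internal node gets the larger child
    order, plus one when the two child orders tie."""
    if t is None:
        return 1
    a, b = strahler(t[0]), strahler(t[1])
    return a + 1 if a == b else max(a, b)

# Two trees with four leaves each: a balanced hierarchy vs. a comb.
balanced = ((None, None), (None, None))
comb = (((None, None), None), None)
print(strahler(balanced), strahler(comb))  # -> 3 2
```

The balanced tree reaches a higher order with the same number of leaves, which is exactly the kind of hierarchical nesting the decomposition is meant to expose.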

  10. Nuclear markers reveal that inter-lake cichlids' similar morphologies do not reflect similar genealogy.

    Science.gov (United States)

    Kassam, Daud; Seki, Shingo; Horic, Michio; Yamaoka, Kosaku

    2006-08-01

    The apparent inter-lake morphological similarity among East African Great Lakes' cichlid species/genera has left evolutionary biologists asking whether such similarity is due to the sharing of a common ancestor or mere convergent evolution. In order to answer such a question, we first used Geometric Morphometrics, GM, to quantify morphological similarity and then used Amplified Fragment Length Polymorphism, AFLP, to determine whether similar morphologies imply shared ancestry or convergent evolution. GM revealed that not all presumed morphologically similar pairs were indeed similar, and the dendrogram generated from AFLP data indicated distinct clusters corresponding to each lake and not to inter-lake morphologically similar pairs. Such results imply that the morphological similarity is due to convergent evolution and not shared ancestry. The congruency of the GM- and AFLP-generated dendrograms implies that GM is capable of picking up phylogenetic signal, and thus GM can be a potential tool in phylogenetic systematics.

  11. Self-similarity in the inertial region of wall turbulence.

    Science.gov (United States)

    Klewicki, J; Philip, J; Marusic, I; Chauhan, K; Morrill-Winter, C

    2014-12-01

    The inverse of the von Kármán constant κ is the leading coefficient in the equation describing the logarithmic mean velocity profile in wall bounded turbulent flows. Klewicki [J. Fluid Mech. 718, 596 (2013)] connects the asymptotic value of κ with an emerging condition of dynamic self-similarity on an interior inertial domain that contains a geometrically self-similar hierarchy of scaling layers. A number of properties associated with the asymptotic value of κ are revealed. This is accomplished using a framework that retains connection to invariance properties admitted by the mean statement of dynamics. The development leads toward, but terminates short of, analytically determining a value for κ. It is shown that if adjacent layers on the hierarchy (or their adjacent positions) adhere to the same self-similarity that is analytically shown to exist between any given layer and its position, then κ ≡ Φ^(-2) = 0.381966..., where Φ = (1+√5)/2 is the golden ratio. A number of measures, derived specifically from an analysis of the mean momentum equation, are subsequently used to empirically explore the veracity and implications of κ = Φ^(-2). Consistent with the differential transformations underlying an invariant form admitted by the governing mean equation, it is demonstrated that the value of κ arises from two geometric features associated with the inertial turbulent motions responsible for momentum transport. One nominally pertains to the shape of the relevant motions as quantified by their area coverage in any given wall-parallel plane, and the other pertains to the changing size of these motions in the wall-normal direction. In accord with self-similar mean dynamics, these two features remain invariant across the inertial domain. Data from direct numerical simulations and higher Reynolds number experiments are presented and discussed relative to the self-similar geometric structure indicated by the analysis, and in particular the special form of self-similarity
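
The quoted value is easy to check numerically (a quick verification, not part of the paper's analysis):

```python
import math

phi = (1 + math.sqrt(5)) / 2       # golden ratio, about 1.618034
kappa = phi ** -2                  # the proposed asymptotic von Karman value
print(round(kappa, 6))             # -> 0.381966
# Consistency check: phi**2 = phi + 1 implies 1/phi**2 = 2 - phi.
print(abs(kappa - (2 - phi)) < 1e-12)  # -> True
```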

  12. Quantifying Cancer Risk from Radiation.

    Science.gov (United States)

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  13. Similar or different?

    DEFF Research Database (Denmark)

    Cornér, Solveig; Pyhältö, Kirsi; Peltonen, Jouni

    2018-01-01

    Previous research has identified researcher community and supervisory support as key determinants of the doctoral journey contributing to students' persistence and robustness. However, we still know little about cross-cultural variation in the researcher community and supervisory support experienced by PhD students within the same discipline. This study explores the support experiences of 381 PhD students within the humanities and social sciences from three research-intensive universities in Denmark (n=145) and Finland (n=236). The mixed methods design was utilized. The data were collected... The results indicated that the only form of support in which the students expressed more matched support than mismatched support was informational support. Further investigation showed that the Danish students reported a higher level of mismatch in emotional support than their Finnish counterparts, whereas the Finnish students perceived lower levels of instrumental support than the Danish students. The findings imply that seemingly similar contexts hold valid differences in experienced social support and educational strategies at the PhD level.

  14. Analyzing complex networks evolution through Information Theory quantifiers

    International Nuclear Information System (INIS)

    Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martin Gomez

    2011-01-01

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed: the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease, and a climate network for the Tropical Pacific region to study the El Niño/Southern Oscillation (ENSO) dynamics. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
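
A minimal sketch of the first quantifier, the square root of the Jensen-Shannon divergence between two discrete distributions (the network-state distributions used in the paper are of course domain-specific; the toy distributions below are illustrative):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence (base 2) between discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_distance(p, q):
    """Square root of the Jensen-Shannon divergence: symmetric, a true
    metric, and bounded by [0, 1] in base 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# E.g. a degree distribution before and after a rewiring step:
p = [0.1, 0.4, 0.4, 0.1]
q = [0.25, 0.25, 0.25, 0.25]
print(js_distance(p, p))  # -> 0.0
print(js_distance(p, q))  # strictly between 0 and 1
```

Unlike the raw KL divergence, the mixture `m` keeps every log argument finite, which is what makes the quantity well defined for arbitrary pairs of states.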

  15. Analyzing complex networks evolution through Information Theory quantifiers

    Energy Technology Data Exchange (ETDEWEB)

    Carpi, Laura C., E-mail: Laura.Carpi@studentmail.newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Rosso, Osvaldo A., E-mail: rosso@fisica.ufmg.b [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina); Saco, Patricia M., E-mail: Patricia.Saco@newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Hidraulica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avenida Pellegrini 250, Rosario (Argentina); Ravetti, Martin Gomez, E-mail: martin.ravetti@dep.ufmg.b [Departamento de Engenharia de Producao, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Belo Horizonte (31270-901), MG (Brazil)

    2011-01-24

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed: the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease, and a climate network for the Tropical Pacific region used to study the El Niño/Southern Oscillation (ENSO) dynamics. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.

  16. The Fallacy of Quantifying Risk

    Science.gov (United States)

    2012-09-01

    Defense AT&L, September–October 2012. David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of... a key to risk analysis was “choosing the right technique” of quantifying risk. The weakness in this argument stems not from the assertion that one... of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence

  17. Notions of similarity for systems biology models.

    Science.gov (United States)

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2018-01-01

    Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases. © The Author 2016. Published by Oxford University Press.
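
    The six-aspect framing above lends itself to a simple combination rule. The sketch below is our illustration of such a combined measure, not a formula from the paper: per-aspect similarity scores, each assumed to be normalized to [0, 1], are merged with problem-specific weights.

```python
def combined_similarity(aspect_scores, weights):
    """Weighted average of per-aspect model similarities.
    Both arguments are dicts keyed by aspect name; every score is
    assumed to already be normalized to [0, 1]."""
    total = sum(weights.values())
    return sum(w * aspect_scores[a] for a, w in weights.items()) / total

# A structure-focused comparison might weight network structure highest
# (aspect names and numbers below are illustrative only):
score = combined_similarity(
    {"encoding": 0.9, "network structure": 0.4, "parameters": 0.7},
    {"encoding": 1.0, "network structure": 3.0, "parameters": 1.0},
)
```

    Re-weighting per application is exactly the flexibility the survey argues future similarity measures will need.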

  18. Self-similar gravitational clustering

    International Nuclear Information System (INIS)

    Efstathiou, G.; Fall, S.M.; Hogan, C.

    1979-01-01

    The evolution of gravitational clustering is considered and several new scaling relations are derived for the multiplicity function. These include generalizations of the Press-Schechter theory to different densities and cosmological parameters. The theory is then tested against multiplicity function and correlation function estimates for a series of 1000-body experiments. The results are consistent with the theory and show some dependence on initial conditions and cosmological density parameter. The statistical significance of the results, however, is fairly low because of several small number effects in the experiments. There is no evidence for a non-linear bootstrap effect or a dependence of the multiplicity function on the internal dynamics of condensed groups. Empirical estimates of the multiplicity function by Gott and Turner have a feature near the characteristic luminosity predicted by the theory. The scaling relations allow the inference from estimates of the galaxy luminosity function that galaxies must have suffered considerable dissipation if they originally formed from a self-similar hierarchy. A method is also developed for relating the multiplicity function to similar measures of clustering, such as those of Bhavsar, for the distribution of galaxies on the sky. These are shown to depend on the luminosity function in a complicated way. (author)

  19. Dynamics

    CERN Document Server

    Goodman, Lawrence E

    2001-01-01

    Beginning text presents complete theoretical treatment of mechanical model systems and deals with technological applications. Topics include introduction to calculus of vectors, particle motion, dynamics of particle systems and plane rigid bodies, technical applications in plane motions, theory of mechanical vibrations, and more. Exercises and answers appear in each chapter.

  20. Multidominance, ellipsis, and quantifier scope

    NARCIS (Netherlands)

    Temmerman, Tanja Maria Hugo

    2012-01-01

    This dissertation provides a novel perspective on the interaction between quantifier scope and ellipsis. It presents a detailed investigation of the scopal interaction between English negative indefinites, modals, and quantified phrases in ellipsis. One of the crucial observations is that a negative

  1. Quantifiers in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.; Paperno, D.; Keenan, E.L.

    2017-01-01

    After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,

  2. Quantified Self in de huisartsenpraktijk

    NARCIS (Netherlands)

    de Groot, Martijn; Timmers, Bart; Kooiman, Thea; van Ittersum, Miriam

    2015-01-01

    Quantified Self stands for the self-measuring human being. The number of people entering the care process with self-generated health data will grow in the coming years. Various kinds of activity trackers and health applications for the smartphone make it relatively easy to ... personal

  3. Similarity and self-similarity in high energy density physics: application to laboratory astrophysics

    International Nuclear Information System (INIS)

    Falize, E.

    2008-10-01

    The spectacular recent development of powerful facilities allows the astrophysical community to explore, in the laboratory, astrophysical phenomena where radiation and matter are strongly coupled. The titles of the nine chapters of the thesis are: from high energy density physics to laboratory astrophysics; Lie groups, invariance and self-similarity; scaling laws and similarity properties in High-Energy-Density physics; the Burgan-Feix-Munier transformation; dynamics of polytropic gases; stationary radiating shocks and the POLAR project; structure, dynamics and stability of optically thin fluids; from young star jets to laboratory jets; modelling and experiments for laboratory jets

  4. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, matches signal patterns in a different way and thereby reveals distinct complexity behaviors. Simulations are conducted over synthetic data and traffic signals to provide a comparative study demonstrating the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can quantitatively distinguish time series from different complex systems by the similarity measure.
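
    For reference, the classic (unmodified) sample entropy that the paper generalizes can be sketched in a few lines. The authors' variant replaces the Chebyshev template match used here with a similarity-distance criterion; this is only the standard SampEn(m, r) baseline.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Classic sample entropy SampEn(m, r): negative log of the conditional
    probability that two subsequences agreeing for m points (Chebyshev
    distance <= r, with r in absolute units here) also agree for m+1."""
    n = len(x)

    def matches(length):
        # count template pairs (i < j) whose windows agree within tolerance r
        count = 0
        for i in range(n - length + 1):
            for j in range(i + 1, n - length + 1):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

    A perfectly periodic signal scores near zero; irregular signals score higher. Note the O(n^2) pair loop, which is exactly the length-dependence the modified method aims to relax.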

  5. The baryonic self similarity of dark matter

    International Nuclear Information System (INIS)

    Alard, C.

    2014-01-01

    Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, the halo similarity is affected by the baryonic feedback. By using momentum-driven winds as a model to represent the baryon feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark-matter- and baryon-dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^(1/4). These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks down at the scale of clusters of galaxies. Numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.

  6. Quasi-Similarity Model of Synthetic Jets

    Czech Academy of Sciences Publication Activity Database

    Tesař, Václav; Kordík, Jozef

    2009-01-01

    Roč. 149, č. 2 (2009), s. 255-265 ISSN 0924-4247 R&D Projects: GA AV ČR IAA200760705; GA ČR GA101/07/1499 Institutional research plan: CEZ:AV0Z20760514 Keywords : jets * synthetic jets * similarity solution Subject RIV: BK - Fluid Dynamics Impact factor: 1.674, year: 2009 http://www.sciencedirect.com

  7. Multidimensional Scaling Visualization using Parametric Similarity Indices

    OpenAIRE

    Machado, J. A. Tenreiro; Lopes, António M.; Galhano, A.M.

    2015-01-01

    In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, an...

  8. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    The semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, likely because all these models are designed to simulate the similarity judgement of the human mind.
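
    Of the four model types compared, the feature model is the easiest to make concrete. Below is Tversky's ratio ("feature contrast") formulation over feature sets, a standard instance of the feature model; the paper's exact formulation may differ, and the feature sets used in the example are hypothetical. With alpha = beta = 0.5 the measure reduces to the Dice coefficient.

```python
def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Tversky's ratio model: similarity grows with shared features and
    shrinks with each object's distinctive features, weighted by
    alpha (features only in a) and beta (features only in b)."""
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))
```

    Asymmetric weights (alpha != beta) capture directional judgements such as "a variant is similar to the prototype" being stronger than the reverse.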

  9. Quantifying carbon stores and decomposition in dead wood: A review

    Science.gov (United States)

    Matthew B. Russell; Shawn Fraver; Tuomas Aakala; Jeffrey H. Gove; Christopher W. Woodall; Anthony W. D’Amato; Mark J. Ducey

    2015-01-01

    The amount and dynamics of forest dead wood (both standing and downed) has been quantified by a variety of approaches throughout the forest science and ecology literature. Differences in the sampling and quantification of dead wood can lead to differences in our understanding of forests and their role in the sequestration and emissions of CO2, as...

  10. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction, critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
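
    One member of the family of locality-sensitive hashing algorithms mentioned above is random-hyperplane hashing (SimHash): vectors separated by a small angle receive bit strings with small Hamming distance. A self-contained sketch, with made-up example vectors:

```python
import random

def simhash(vector, hyperplanes):
    """Random-hyperplane LSH: one bit per hyperplane, recording which
    side of the hyperplane the vector lies on. Vectors with a small
    angle between them agree on most bits."""
    return [1 if sum(v * h for v, h in zip(vector, plane)) >= 0 else 0
            for plane in hyperplanes]

def hamming(bits_a, bits_b):
    return sum(a != b for a, b in zip(bits_a, bits_b))

random.seed(0)  # fixed seed so the hash family is reproducible
planes = [[random.gauss(0, 1) for _ in range(3)] for _ in range(16)]

close = hamming(simhash([1.0, 2.0, 3.0], planes), simhash([1.1, 2.1, 3.1], planes))
far = hamming(simhash([1.0, 2.0, 3.0], planes), simhash([-3.0, 1.0, -2.0], planes))
```

    Because expected Hamming distance is proportional to the angle between the vectors, the short bit strings can stand in for the originals in an associative lookup.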

  11. Quantifying and simulating human sensation

    DEFF Research Database (Denmark)

    Quantifying and simulating human sensation: relating science and technology of indoor climate research. In his doctoral thesis from 1970, civil engineer Povl Ole Fanger proposed that the understanding of indoor climate should focus on the comfort of the individual rather than averaged...... this understanding of human sensation was adjusted to technology. I will look into the construction of the equipment, what it measures and the relationship between theory, equipment and tradition.

  12. Quantifying emissions from spontaneous combustion

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-01

    Spontaneous combustion can be a significant problem in the coal industry, not only due to the obvious safety hazard and the potential loss of valuable assets, but also with respect to the release of gaseous pollutants, especially CO2, from uncontrolled coal fires. This report reviews methodologies for measuring emissions from spontaneous combustion and discusses methods for quantifying, estimating and accounting for the purpose of preparing emission inventories.

  13. Towards Quantifying a Wider Reality: Shannon Exonerata

    Directory of Open Access Journals (Sweden)

    Robert E. Ulanowicz

    2011-10-01

    In 1872 Ludwig von Boltzmann derived a statistical formula to represent the entropy (an apophasis) of a highly simplistic system. In 1948 Claude Shannon independently formulated the same expression to capture the positivist essence of information. Such contradictory thrusts engendered decades of ambiguity concerning exactly what is conveyed by the expression. Resolution of widespread confusion is possible by invoking the third law of thermodynamics, which requires that entropy be treated in a relativistic fashion. Doing so parses the Boltzmann expression into separate terms that segregate apophatic entropy from positivist information. Possibly more importantly, the decomposition itself portrays a dialectic-like agonism between constraint and disorder that may provide a more appropriate description of the behavior of living systems than is possible using conventional dynamics. By quantifying the apophatic side of evolution, the Shannon approach to information achieves what no other treatment of the subject affords: It opens the window on a more encompassing perception of reality.

  14. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    This paper addresses the problems of similarity calculation in traditional nearest-neighbor collaborative filtering recommendation algorithms, especially their failure in describing dynamic user preference. Proceeding from the perspective of solving the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. This model consists of two parts: on the one hand the model uses function fitting to describe users’ rating behaviors and their rating preferences, and on the other hand it employs the Random Forest algorithm to take user attribute features into account. Furthermore, the paper combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model’s prediction precision is higher than that of traditional recommendation algorithms.

  15. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

    Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods in applications of real recommendation systems have achieved great success. However, the computed similarities are overestimated or underestimated, largely because of the defective strategy of unidirectional similarity estimation. In this paper, we solve this drawback by leveraging mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Through extensive experiments on four benchmark datasets, the results show that CSI clearly improves on mainstream baselines. A detailed analysis is presented to unveil and understand the origin of such differences between CSI and mainstream indices. (paper)

  16. Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics

    Science.gov (United States)

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

    The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413
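
    The label-propagation step can be illustrated on an ordinary (homogeneous) similarity graph; the paper's version runs over a heterogeneous patient-drug graph, which this sketch does not attempt. All weights and labels below are invented for illustration: nodes with observed outcomes stay clamped, and the rest relax toward the similarity-weighted average of their neighbours.

```python
def propagate_labels(weights, labels, clamped, alpha=0.8, iters=50):
    """Iterative label propagation over a weighted similarity graph.
    weights: symmetric n x n matrix; labels: initial scores per node;
    clamped: indices whose labels are known and held fixed;
    alpha: how strongly unclamped nodes follow their neighbours."""
    n = len(weights)
    scores = labels[:]
    for _ in range(iters):
        new = []
        for i in range(n):
            if i in clamped:
                new.append(labels[i])  # observed outcome: keep fixed
                continue
            wsum = sum(weights[i][j] for j in range(n) if j != i)
            avg = (sum(weights[i][j] * scores[j] for j in range(n) if j != i) / wsum
                   if wsum else 0.0)
            new.append(alpha * avg + (1 - alpha) * scores[i])
        scores = new
    return scores

# Toy graph: node 0 responded to the drug (label 1), node 2 did not (label 0);
# node 1 is the patient whose response we want to predict.
weights = [[0.0, 0.9, 0.0],
           [0.9, 0.0, 0.1],
           [0.0, 0.1, 0.0]]
result = propagate_labels(weights, [1.0, 0.0, 0.0], clamped={0, 2})
```

    Node 1 is pulled almost entirely toward node 0, its strongly similar neighbour, which is the intuition behind predicting drug response from similar patients.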

  17. A Signal Processing Method to Explore Similarity in Protein Flexibility

    Directory of Open Access Journals (Sweden)

    Simina Vasilache

    2010-01-01

    Understanding mechanisms of protein flexibility is of great importance to structural biology. The ability to detect similarities between proteins and their patterns is vital in discovering new information about unknown protein functions. A Distance Constraint Model (DCM) provides a means to generate a variety of flexibility measures based on a given protein structure. Although information about mechanical properties of flexibility is critical for understanding the function of a given protein, the question of whether certain characteristics are shared across homologous proteins is difficult to assess. For a proper assessment, a quantified measure of similarity is necessary. This paper begins to explore image processing techniques to quantify similarities in signals and images that characterize protein flexibility. The dataset considered here consists of three different families of proteins, with three proteins in each family. The similarities and differences found within flexibility measures across homologous proteins do not align with sequence-based evolutionary methods.

  18. Quantifying dynamic contrast-enhanced MRI of the knee in children with juvenile rheumatoid arthritis using an arterial input function (AIF) extracted from popliteal artery enhancement, and the effect of the choice of the AIF on the kinetic parameters.

    Science.gov (United States)

    Workie, Dagnachew W; Dardzinski, Bernard J

    2005-09-01

    Quantification of dynamic contrast-enhanced (DCE) MRI based on pharmacokinetic modeling requires specification of the arterial input function (AIF). A full representation of the plasma concentration data, including the initial rise and decay parts, considering the delay and dispersion of the bolus contrast, is important. This work deals with modeling of DCE-MRI data from the knees of children with a history of juvenile rheumatoid arthritis (JRA) by using an AIF extracted from the signal enhancement data from the nearby popliteal artery. Three models for the AIFs were considered: a triexponential (AIF1), a gamma-variate plus a biexponential (AIF2), and a biexponential (AIF3). The pharmacokinetic parameters obtained from the model were Ktrans', kep, and V'p. The results from AIF1 and AIF2 showed no statistically significant difference. However, some statistically significant differences were seen with AIF3, particularly for parameters Ktrans' and V'p in the synovium (SNVM). These results suggest the importance of obtaining an appropriate AIF representation in pharmacokinetic modeling of JRA. Specifically, the initial rising part of the AIF should be incorporated for optimal pharmacokinetic modeling results. The pharmacokinetic parameters (mean ± SD) derived from AIF1, using the average plasma concentration data, were as follows: SNVM Ktrans' (min^-1) = 0.52 ± 0.34, kep (min^-1) = 0.71 ± 0.39, and V'p = 0.33 ± 0.16; and for the distal femoral physis (DFP) Ktrans' (min^-1) = 1.83 ± 1.78, kep (min^-1) = 2.65 ± 1.80, and V'p = 0.46 ± 0.31. The pharmacokinetic parameters in the SNVM may be useful for investigating activity and therapeutic efficacy in studies of JRA. Longitudinal studies are necessary to find or demonstrate the parameter that is more sensitive to disease activity. Copyright (c) 2005 Wiley-Liss, Inc.
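
    The simplest of the three input functions, the biexponential AIF3, has the closed form Cp(t) = a1·exp(−m1·t) + a2·exp(−m2·t). The sketch below uses the classic Weinmann population coefficients as placeholder values; the study instead fitted its AIFs to popliteal-artery enhancement, and found that a decay-only form like this one (no initial rise) shifts Ktrans' and V'p significantly.

```python
import math

def biexponential_aif(t, a1=3.99, m1=0.144, a2=4.78, m2=0.0111):
    """Biexponential plasma concentration model (AIF3-style shape):
    Cp(t) = a1*exp(-m1*t) + a2*exp(-m2*t), t in minutes.
    Default coefficients are the standard Weinmann population values,
    used here purely for illustration, not the study's fitted values."""
    return a1 * math.exp(-m1 * t) + a2 * math.exp(-m2 * t)
```

    The curve starts at a1 + a2 at t = 0 and decays monotonically, which is exactly why it cannot represent the bolus's initial rise.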

  19. Integrating user profile in medical CBIR systems to answer perceptual similarity queries

    Science.gov (United States)

    Bugatti, Pedro H.; Kaster, Daniel S.; Ponciano-Silva, Marcelo; Traina, Agma J. M.; Traina, Caetano, Jr.

    2011-03-01

    Techniques for Content-Based Image Retrieval (CBIR) have been intensively explored due to the increase in the amount of captured images and the need for fast retrieval of them. The medical field is a specific example that generates a large flow of information, especially digital images employed for diagnosing. One issue that remains unsolved is how to achieve perceptual similarity: for an effective retrieval, one must characterize and quantify similarity as perceived by the specialist in the field. The present paper was therefore conceived to fill this gap, creating consistent support to perform similarity queries over medical images while maintaining the query semantics desired by the user. CBIR systems relying on relevance feedback techniques usually request the users to label relevant images. In this paper, we present a simple but highly effective strategy to survey user profiles, taking advantage of such labeling to implicitly gather the user's perceptual similarity. The user profiles maintain the settings desired for each user, allowing tuning of the similarity assessment, which encompasses dynamically changing the distance function employed through an interactive process. Experiments using computed tomography lung images show that the proposed approach is effective in capturing the users' perception.

  20. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  1. Quantifying chaos for ecological stoichiometry.

    Science.gov (United States)

    Duarte, Jorge; Januário, Cristina; Martins, Nuno; Sardanyés, Josep

    2010-09-01

    The theory of ecological stoichiometry considers ecological interactions among species with different chemical compositions. Both experimental and theoretical investigations have shown the importance of species composition in the outcome of the population dynamics. A recent study of a theoretical three-species food chain model considering stoichiometry [B. Deng and I. Loladze, Chaos 17, 033108 (2007)] shows that coexistence between two consumers predating on the same prey is possible via chaos. In this work we study the topological and dynamical measures of the chaotic attractors found in such a model under ecologically relevant parameters. By using the theory of symbolic dynamics, we first compute the topological entropy associated with unimodal Poincaré return maps obtained by Deng and Loladze from a dimension reduction. With this measure we numerically prove chaotic competitive coexistence, which is characterized by positive topological entropy and positive Lyapunov exponents, achieved when the first predator reduces its maximum growth rate, as happens with increasing δ1. However, for higher values of δ1 the dynamics become stable again due to an asymmetric bubble-like bifurcation scenario. We also show that a decrease in the efficiency of the predator sensitive to prey's quality (increasing parameter ζ) stabilizes the dynamics. Finally, we estimate the fractal dimension of the chaotic attractors for the stoichiometric ecological model.
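
    The paper computes topological entropy from the symbolic dynamics of unimodal return maps (kneading theory). A crude but self-contained stand-in, shown purely for illustration, is to count distinct length-n words in a symbolic itinerary: their growth rate (1/n)·log N(n) approximates the entropy, and positive values signal chaos.

```python
import math

def topological_entropy_rate(symbols, n):
    """Naive topological entropy estimate from a symbolic itinerary:
    (1/n) * log of the number of distinct words of length n observed.
    For a genuine estimate one would let n grow with the orbit length."""
    words = {tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)}
    return math.log(len(words)) / n
```

    A periodic itinerary admits only a handful of words, so its rate tends to zero as n grows, whereas a chaotic full shift on two symbols approaches log 2.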

  2. Identifying mechanistic similarities in drug responses

    KAUST Repository

    Zhao, C.

    2012-05-15

    Motivation: In early drug development, it would be beneficial to be able to identify those dynamic patterns of gene response that indicate that drugs targeting a particular gene will be likely or not to elicit the desired response. One approach would be to quantitate the degree of similarity between the responses that cells show when exposed to drugs, so that consistencies in the regulation of cellular response processes that produce success or failure can be more readily identified.Results: We track drug response using fluorescent proteins as transcription activity reporters. Our basic assumption is that drugs inducing very similar alteration in transcriptional regulation will produce similar temporal trajectories on many of the reporter proteins and hence be identified as having similarities in their mechanisms of action (MOA). The main body of this work is devoted to characterizing similarity in temporal trajectories/signals. To do so, we must first identify the key points that determine mechanistic similarity between two drug responses. Directly comparing points on the two signals is unrealistic, as it cannot handle delays and speed variations on the time axis. Hence, to capture the similarities between reporter responses, we develop an alignment algorithm that is robust to noise, time delays and is able to find all the contiguous parts of signals centered about a core alignment (reflecting a core mechanism in drug response). Applying the proposed algorithm to a range of real drug experiments shows that the result agrees well with the prior drug MOA knowledge. © The Author 2012. Published by Oxford University Press. All rights reserved.
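
    As a point of comparison for "robust to ... time delays and speed variations on the time axis", the textbook alignment method is dynamic time warping. The authors' algorithm differs (it finds contiguous signal parts centered about a core alignment), so the sketch below is only the standard baseline, not their method.

```python
def dtw_distance(a, b):
    """Dynamic time warping between two 1-D trajectories: finds the
    monotone alignment minimizing total pointwise cost, so delays and
    local speed changes on the time axis are absorbed by the warp."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: advance a, advance b, advance both
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

    Two reporter trajectories that trace the same shape at different speeds score zero, which ordinary pointwise comparison cannot achieve.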

  3. Domain similarity based orthology detection.

    Science.gov (United States)

    Bitard-Feildel, Tristan; Kemena, Carsten; Greenwood, Jenny M; Bornberg-Bauer, Erich

    2015-05-13

    Orthologous protein detection software mostly uses pairwise comparisons of amino-acid sequences to assert whether two proteins are orthologous or not. Accordingly, when the number of sequences for comparison increases, the number of comparisons to compute grows quadratically. A current challenge of bioinformatics research, especially given the increasing number of sequenced organisms available, is to make this ever-growing number of comparisons computationally feasible in a reasonable amount of time. We propose to speed up the detection of orthologous proteins by using strings of domains to characterize the proteins. We present two new protein similarity measures, a cosine and a maximal weight matching score based on domain content similarity, and new software, named porthoDom. The qualities of the cosine and the maximal weight matching similarity measures are compared against curated datasets. The measures show that domain content similarities are able to correctly group proteins into their families. Accordingly, the cosine similarity measure is used inside porthoDom, the wrapper developed for proteinortho. porthoDom makes use of domain content similarity measures to group proteins together before searching for orthologs. By using domains instead of amino acid sequences, the reduction of the search space decreases the computational complexity of an all-against-all sequence comparison. We demonstrate that representing and comparing proteins as strings of discrete domains, i.e. as a concatenation of their unique identifiers, allows a drastic simplification of the search space. porthoDom speeds up orthology detection while maintaining a degree of accuracy similar to proteinortho. porthoDom is implemented in Python and C++ and is available under the GNU GPL licence 3 at http://www.bornberglab.org/pages/porthoda.
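
    The cosine variant of domain-content similarity can be sketched directly: each protein becomes a bag of domain identifiers and similarity is the cosine between the count vectors. The Pfam-style identifiers below are hypothetical, and porthoDom's actual scoring may weight domains differently.

```python
from collections import Counter
import math

def domain_cosine(domains_a, domains_b):
    """Cosine similarity over domain-content count vectors: proteins are
    compared as bags of domain identifiers rather than raw sequences."""
    ca, cb = Counter(domains_a), Counter(domains_b)
    dot = sum(ca[d] * cb[d] for d in ca)  # Counter returns 0 for absent keys
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

    Because the domain alphabet is far smaller than the space of sequences, grouping by this score first prunes the all-against-all comparison dramatically.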

  4. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
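Two of the measures named above are easy to state directly. This sketch (not taken from the book) shows the Minkowski family and the symmetric Hausdorff distance between point sets of features:

```python
def minkowski(u, v, p):
    """Minkowski distance of order p between feature vectors u and v.
    p=1 gives the Manhattan distance, p=2 the Euclidean distance."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def directed_hausdorff(A, B):
    """Largest distance from a point of A to its nearest point of B."""
    return max(min(minkowski(a, b, 2) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A and B."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

u, v = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(minkowski(u, v, 1))  # 7.0 (Manhattan)
print(minkowski(u, v, 2))  # 5.0 (Euclidean)
```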

  5. Popularity versus similarity in growing networks

    Science.gov (United States)

    Krioukov, Dmitri; Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, Mariangeles; Boguna, Marian

    2012-02-01

    Preferential attachment is a powerful mechanism explaining the emergence of scaling in growing networks. If new connections are established preferentially to more popular nodes in a network, then the network is scale-free. Here we show that not only popularity but also similarity is a strong force shaping the network structure and dynamics. We develop a framework where new connections, instead of preferring popular nodes, optimize certain trade-offs between popularity and similarity. The framework admits a geometric interpretation, in which preferential attachment emerges from local optimization processes. As opposed to preferential attachment, the optimization framework accurately describes the large-scale evolution of technological (Internet), social (web of trust), and biological (E. coli metabolic) networks, predicting the probability of new links in them with remarkable precision. The developed framework can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
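The paper's model has a precise geometric form; as a loose caricature only, the trade-off can be illustrated by new nodes linking to the existing nodes that minimize (birth order) × (angular distance), where birth order proxies popularity and the angle on a circle proxies similarity. The cost function and parameters below are simplifications, not the authors' model:

```python
import math, random

def grow_network(n, m=2, seed=0):
    """Toy popularity-similarity growth: each new node sits at a random
    angle and links to the m existing nodes minimizing the product of
    birth order (popularity proxy) and angular distance (similarity)."""
    rng = random.Random(seed)
    angles, edges = [], []
    for t in range(n):
        theta = rng.uniform(0, 2 * math.pi)
        if t > 0:
            def cost(s):
                d = abs(theta - angles[s])
                d = min(d, 2 * math.pi - d)   # distance along the circle
                return (s + 1) * max(d, 1e-9)  # popularity x similarity
            targets = sorted(range(t), key=cost)[:m]
            edges.extend((t, s) for s in targets)
        angles.append(theta)
    return edges

edges = grow_network(50)
```

Under this toy rule, earlier-born (more "popular") nodes tend to accumulate more links, yet a latecomer at a very similar angle can still win a connection.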

  6. Predicting the performance of fingerprint similarity searching.

    Science.gov (United States)

    Vogt, Martin; Bajorath, Jürgen

    2011-01-01

    Fingerprints are bit string representations of molecular structure that typically encode structural fragments, topological features, or pharmacophore patterns. Various fingerprint designs are utilized in virtual screening and their search performance essentially depends on three parameters: the nature of the fingerprint, the active compounds serving as reference molecules, and the composition of the screening database. It is of considerable interest and practical relevance to predict the performance of fingerprint similarity searching. A quantitative assessment of the potential that a fingerprint search might successfully retrieve active compounds, if available in the screening database, would substantially help to select the type of fingerprint most suitable for a given search problem. The method presented herein utilizes concepts from information theory to relate the fingerprint feature distributions of reference compounds to screening libraries. If these feature distributions do not sufficiently differ, active database compounds that are similar to reference molecules cannot be retrieved because they disappear in the "background." By quantifying the difference in feature distribution using the Kullback-Leibler divergence and relating the divergence to compound recovery rates obtained for different benchmark classes, fingerprint search performance can be quantitatively predicted.
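The divergence ingredient is standard. A minimal sketch of the Kullback-Leibler divergence between per-bit "on" frequencies of two fingerprint collections follows; treating each bit as an independent Bernoulli feature is a simplifying assumption, and the frequencies are invented:

```python
from math import log2

def kl_bits(p, q, eps=1e-6):
    """Kullback-Leibler divergence (in bits) between per-bit set
    frequencies of two fingerprint collections, assuming independent
    Bernoulli bits; eps clamps frequencies away from 0 and 1."""
    total = 0.0
    for pi, qi in zip(p, q):
        for a, b in ((pi, qi), (1 - pi, 1 - qi)):
            a = min(max(a, eps), 1 - eps)
            b = min(max(b, eps), 1 - eps)
            total += a * log2(a / b)
    return total

# Reference actives vs. screening database: per-bit set frequencies.
ref = [0.9, 0.1, 0.5]
db = [0.5, 0.5, 0.5]
print(kl_bits(ref, db))  # > 0: feature distributions differ
print(kl_bits(db, db))   # 0.0: actives would vanish in the "background"
```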

  7. Revisiting Inter-Genre Similarity

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Gouyon, Fabien

    2013-01-01

    We revisit the idea of "inter-genre similarity" (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does not perform well, even for data that satisfies all its assumptions.

  8. Fast business process similarity search

    NARCIS (Netherlands)

    Yan, Z.; Dijkman, R.M.; Grefen, P.W.P.J.

    2012-01-01

    Nowadays, it is common for organizations to maintain collections of hundreds or even thousands of business processes. Techniques exist to search through such a collection, for business process models that are similar to a given query model. However, those techniques compare the query model to each model in the collection.

  9. Glove boxes and similar containments

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    According to the present invention a glove box or similar containment is provided with an exhaust system including a vortex amplifier venting into the system, the vortex amplifier also having its main inlet in fluid flow connection with the containment and a control inlet in fluid flow connection with the atmosphere outside the containment. (U.S.)

  10. Quantifying Evaporation in a Permeable Pavement System

    Science.gov (United States)

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  11. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall loudspeakers' perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  12. An Alfven eigenmode similarity experiment

    International Nuclear Information System (INIS)

    Heidbrink, W W; Fredrickson, E; Gorelenkov, N N; Hyatt, A W; Kramer, G; Luo, Y

    2003-01-01

    The major radius dependence of Alfven mode stability is studied by creating plasmas with similar minor radius, shape, magnetic field (0.5 T), density (n_e ≅ 3×10^19 m^-3), electron temperature (1.0 keV) and beam ion population (near-tangential 80 keV deuterium injection) on both NSTX and DIII-D. The major radius of NSTX is half the major radius of DIII-D. The super-Alfvenic beam ions that drive the modes have overlapping values of v_f/v_A in the two devices. Observed beam-driven instabilities include toroidicity-induced Alfven eigenmodes (TAE). The stability threshold for the TAE is similar in the two devices. As expected theoretically, the most unstable toroidal mode number n is larger in DIII-D.
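The quoted field and density fix the Alfven speed that the v_f/v_A matching refers to. A back-of-envelope check, assuming a pure deuterium plasma so that the ion density equals the electron density:

```python
from math import sqrt, pi

def alfven_speed(B, n_e, m_i=3.344e-27):
    """Alfven speed v_A = B / sqrt(mu0 * rho).  B in tesla, n_e in m^-3;
    m_i defaults to the deuteron mass (kg), singly charged ions assumed."""
    mu0 = 4e-7 * pi
    rho = n_e * m_i  # plasma mass density
    return B / sqrt(mu0 * rho)

# Parameters quoted in the abstract: B = 0.5 T, n_e ~ 3e19 m^-3.
v_A = alfven_speed(0.5, 3e19)
print(f"{v_A / 1e6:.2f} Mm/s")  # ~1.4e6 m/s
```

An 80 keV deuteron moves at roughly 2.8×10^6 m/s, so v_f/v_A ≈ 2: the beam ions are indeed super-Alfvenic, as the abstract states.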

  13. Compressional Alfven Eigenmode Similarity Study

    Science.gov (United States)

    Heidbrink, W. W.; Fredrickson, E. D.; Gorelenkov, N. N.; Rhodes, T. L.

    2004-11-01

    NSTX and DIII-D are nearly ideal for Alfven eigenmode (AE) similarity experiments, having similar neutral beams, fast-ion to Alfven speed ratio v_f/v_A, fast-ion pressure, and plasma shape, but with a factor of 2 difference in major radius. Toroidicity-induced AE with ~100 kHz frequencies were compared in an earlier study [1]; this paper focuses on higher frequency AE with f ~ 1 MHz. Compressional AE (CAE) on NSTX have a polarization, dependence on the fast-ion distribution function, frequency scaling, and low-frequency limit that are qualitatively consistent with CAE theory [2]. Global AE (GAE) are also observed. On DIII-D, coherent modes in this frequency range are observed during low-field (0.6 T) similarity experiments. Experiments will compare the CAE stability limits on DIII-D with the NSTX stability limits, with the aim of determining if CAE will be excited by alphas in a reactor. Predicted differences in the frequency splitting Δf between excited modes will also be used. [1] W.W. Heidbrink, et al., Plasma Phys. Control. Fusion 45, 983 (2003). [2] E.D. Fredrickson, et al., Princeton Plasma Physics Laboratory Report PPPL-3955 (2004).

  14. Dynamic Cross-Entropy.

    Science.gov (United States)

    Aur, Dorian; Vila-Rodriguez, Fidel

    2017-01-01

    Complexity measures for time series have been used in many applications to quantify the regularity of one-dimensional time series; however, many dynamical systems are spatially distributed multidimensional systems. We introduce Dynamic Cross-Entropy (DCE), a novel multidimensional complexity measure that quantifies the degree of regularity of EEG signals in selected frequency bands. Time series generated by discrete logistic equations with varying control parameter r are used to test DCE measures. Sliding window DCE analyses are able to reveal specific period doubling bifurcations that lead to chaos. A similar behavior can be observed in seizures triggered by electroconvulsive therapy (ECT). Sample entropy data show the level of signal complexity in different phases of the ictal ECT. The transition to irregular activity is preceded by the occurrence of cyclic regular behavior. A significant increase of DCE values in successive order from high frequencies in gamma to low frequencies in delta band reveals several phase transitions into less ordered states, possible chaos in the human brain. To our knowledge there are no reliable techniques able to reveal the transition to chaos in the case of multidimensional time series. In addition, DCE based on sample entropy appears to be robust to EEG artifacts compared to DCE based on Shannon entropy. The applied technique may offer new approaches to better understand nonlinear brain activity. Copyright © 2016 Elsevier B.V. All rights reserved.
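DCE builds on sample entropy. A self-contained sketch of sample entropy applied to the logistic-map test series the abstract mentions; the defaults m=2, r=0.2 and the simplified template counting are common textbook choices, not the authors' exact implementation:

```python
from math import log

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
    matching templates of length m and A of length m + 1, within a
    tolerance of r times the series' standard deviation."""
    n = len(x)
    mean = sum(x) / n
    tol = r * (sum((v - mean) ** 2 for v in x) / n) ** 0.5

    def matches(k):
        return sum(
            1
            for i in range(n - k)
            for j in range(i + 1, n - k)
            if all(abs(x[i + d] - x[j + d]) <= tol for d in range(k))
        )

    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -log(a / b)

def logistic(r, n=300, x0=0.4):
    """Logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(sample_entropy(logistic(3.2)))  # low: regular period-2 orbit
print(sample_entropy(logistic(3.9)))  # higher: chaotic regime
```

Sweeping the control parameter r across the period-doubling cascade makes the entropy rise, which is the kind of bifurcation signature the sliding-window DCE analysis exploits.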

  15. Quantifier Scope in Categorical Compositional Distributional Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2016-08-01

    In previous work with J. Hedges, we formalised a generalised quantifier theory of natural language in categorical compositional distributional semantics with the help of bialgebras. In this paper, we show how quantifier scope ambiguity can be represented in that setting and how this representation can be generalised to branching quantifiers.

  16. A compact clinical instrument for quantifying suppression.

    Science.gov (United States)

    Black, Joanne M; Thompson, Benjamin; Maehara, Goro; Hess, Robert F

    2011-02-01

    We describe a compact and convenient clinical apparatus for the measurement of suppression based on a previously reported laboratory-based approach. In addition, we report and validate a novel, rapid psychophysical method for measuring suppression using this apparatus, which makes the technique more applicable to clinical practice. By using a Z800 dual pro head-mounted display driven by a Mac laptop, we provide dichoptic stimulation. Global motion stimuli composed of arrays of moving dots are presented to each eye. One set of dots moves in a coherent direction (termed signal) whereas another set moves in random directions (termed noise). To quantify performance, we measure the signal/noise ratio corresponding to a direction-discrimination threshold. Suppression is quantified by assessing the extent to which it matters which eye sees the signal and which eye sees the noise. A space-saving, head-mounted display using current video technology offers an ideal solution for clinical practice. In addition, our optimized psychophysical method provided results that were in agreement with those produced using the original technique. We made measures of suppression on a group of nine adult amblyopic participants using this apparatus with both the original and new psychophysical paradigms. All participants had measurable suppression, ranging from mild to severe. The two different psychophysical methods gave a strong correlation for the strength of suppression (rho = -0.83, p = 0.006). Combining the new apparatus and new psychophysical method creates a convenient and rapid technique for parametric measurement of interocular suppression. In addition, this apparatus constitutes an ideal platform for helping suppressors combine information between their eyes in a similar way to binocularly normal people. This provides a convenient way for clinicians to implement the newly proposed binocular treatment of amblyopia that is based on antisuppression training.

  17. Quantifying climate risk - the starting point

    International Nuclear Information System (INIS)

    Fairweather, Helen; Luo, Qunying; Liu, De Li; Wiles, Perry

    2007-01-01

    Full text: All natural systems have evolved to their current state as a result, inter alia, of the climate in which they developed. Similarly, man-made systems (such as agricultural production) have developed to suit the climate experienced over the last 100 or so years. The capacity of different systems to adapt to changes in climate that are outside those that have been experienced previously is largely unknown. This results in considerable uncertainty when predicting climate change impacts. However, it is possible to quantify the relative probabilities of a range of potential impacts of climate change. Quantifying current climate risks is an effective starting point for analysing the probable impacts of future climate change and guiding the selection of appropriate adaptation strategies. For a farming system to be viable within the current climate, its profitability must be sustained and, therefore, possible adaptation strategies need to be tested for continued viability in a changed climate. The methodology outlined in this paper examines historical patterns of key climate variables (rainfall and temperature) across the season and their influence on the productivity of wheat growing in NSW. This analysis is used to identify the time of year at which the system is most vulnerable to climate variation, within the constraints of the current climate. Wheat yield is used as a measure of productivity, which is also assumed to be a surrogate for profitability. A time series of wheat yields is sorted into ascending order and categorised into five groupings bounded by the 20th, 40th, 60th and 80th percentiles for each shire across NSW (~100 years). Five time series of climate data (aggregated daily data from the years in each percentile grouping) are analysed to determine the period that presents the greatest climate risk to the production system. Once this period has been determined, this risk is quantified in terms of the degree of separation of the time series

  18. Quantifying the vitamin D economy.

    Science.gov (United States)

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy. © The Author(s) 2014. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Quantifying China's regional economic complexity

    Science.gov (United States)

    Gao, Jian; Zhou, Tao

    2018-02-01

    China has experienced an outstanding economic expansion during the past decades; however, literature on non-monetary metrics that reveal the status of China's regional economic development is still lacking. In this paper, we fill this gap by quantifying the economic complexity of China's provinces through analyzing 25 years' firm data. First, we estimate the regional economic complexity index (ECI), and show that the overall time evolution of provinces' ECI is relatively stable and slow. Then, after linking ECI to economic development and income inequality, we find that the explanatory power of ECI is positive for the former but negative for the latter. Next, we compare different measures of economic diversity and explore their relationships with monetary macroeconomic indicators. Results show that the ECI index and the non-linear iteration based Fitness index are comparable, and they both have stronger explanatory power than other benchmark measures. Further multivariate regressions suggest the robustness of our results after controlling for other socioeconomic factors. Our work moves a step towards better understanding China's regional economic development and non-monetary macroeconomic indicators.
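The "non-linear iteration based Fitness index" the abstract compares against follows the Fitness-Complexity scheme. A toy sketch on a hypothetical nested region-product matrix; normalizing to the mean at each step is one common convention, not necessarily the authors' exact choice:

```python
def fitness_complexity(M, iters=50):
    """Nonlinear Fitness-Complexity iteration on a binary
    region x product matrix M (rows: regions, cols: products)."""
    n_r, n_p = len(M), len(M[0])
    F = [1.0] * n_r  # region fitness
    Q = [1.0] * n_p  # product complexity
    for _ in range(iters):
        # Fitness: sum of complexities of the products a region makes.
        F_new = [sum(M[r][p] * Q[p] for p in range(n_p)) for r in range(n_r)]
        # Complexity: penalized by the least-fit regions making the product.
        Q_new = [1.0 / sum(M[r][p] / F[r] for r in range(n_r) if M[r][p])
                 for p in range(n_p)]
        # Normalize each step to the mean to keep the iteration bounded.
        mF, mQ = sum(F_new) / n_r, sum(Q_new) / n_p
        F = [f / mF for f in F_new]
        Q = [q / mQ for q in Q_new]
    return F, Q

# Toy nested matrix: region 0 makes everything; region 2 only product 0.
M = [[1, 1, 1],
     [1, 1, 0],
     [1, 0, 0]]
F, Q = fitness_complexity(M)
```

On this nested matrix the diversified region 0 ends up fittest, and product 2 (made only by the fittest region) ends up most complex, which is the qualitative behavior the index is designed to capture.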

  20. Quantifying and Reducing Light Pollution

    Science.gov (United States)

    Gokhale, Vayujeet; Caples, David; Goins, Jordan; Herdman, Ashley; Pankey, Steven; Wren, Emily

    2018-06-01

    We describe the current level of light pollution in and around Kirksville, Missouri and around Anderson Mesa near Flagstaff, Arizona. We quantify the amount of light that is projected up towards the sky, instead of the ground, using Unihedron sky quality meters installed at various locations. We also present results from DSLR photometry of several standard stars, and compare the photometric quality of the data collected at locations with varying levels of light pollution. Presently, light fixture shields and ‘warm-colored’ lights are being installed on Truman State University’s campus in order to reduce light pollution. We discuss the experimental procedure we use to test the effectiveness of the different light fixture shields in a controlled setting inside the Del and Norma Robison Planetarium. Apart from negatively affecting the quality of the night sky for astronomers, light pollution adversely affects migratory patterns of some animals and sleep patterns in humans, increases our carbon footprint, and wastes resources and money. This problem threatens to become particularly acute with the increasing use of outdoor LED lamps. We conclude with a call to action to all professional and amateur astronomers to act against the growing nuisance of light pollution.

  1. Quantifying meniscal kinematics in dogs.

    Science.gov (United States)

    Park, Brian H; Banks, Scott A; Pozzi, Antonio

    2017-11-06

    The dog has been used extensively as an experimental model to study meniscal treatments such as meniscectomy, meniscal repair, transplantation, and regeneration. However, there is very little information on meniscal kinematics in the dog. This study used MR imaging to quantify in vitro meniscal kinematics in loaded dog knees in four distinct poses: extension, flexion, internal, and external rotation. A new method was used to track the meniscal poses along the convex and posteriorly tilted tibial plateau. Meniscal displacements were large, displacing 13.5 and 13.7 mm posteriorly on average for the lateral and medial menisci during flexion (p = 0.90). The medial anterior horn and lateral posterior horns were the most mobile structures, showing average translations of 15.9 and 15.1 mm, respectively. Canine menisci are highly mobile and exhibit movements that correlate closely with the relative tibiofemoral positions. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

  2. Quantifying the invasiveness of species

    Directory of Open Access Journals (Sweden)

    Robert Colautti

    2014-04-01

    The success of invasive species has been explained by two contrasting but non-exclusive views: (i) intrinsic factors make some species inherently good invaders; (ii) species become invasive as a result of extrinsic ecological and genetic influences such as release from natural enemies, hybridization or other novel ecological and evolutionary interactions. These viewpoints are rarely distinguished but hinge on distinct mechanisms leading to different management scenarios. To improve tests of these hypotheses of invasion success, we introduce a simple mathematical framework to quantify the invasiveness of species along two axes: (i) interspecific differences in performance among native and introduced species within a region, and (ii) intraspecific differences between populations of a species in its native and introduced ranges. Applying these equations to a sample dataset of occurrences of 1,416 plant species across Europe, Argentina, and South Africa, we found that many species are common in their native range but become rare following introduction; only a few introduced species become more common. Biogeographical factors limiting spread (e.g. biotic resistance, time of invasion) therefore appear more common than those promoting invasion (e.g. enemy release). Invasiveness, as measured by occurrence data, is better explained by interspecific variation in invasion potential than by biogeographical changes in performance. We discuss how applying these comparisons to more detailed performance data would improve hypothesis testing in invasion biology and potentially lead to more efficient management strategies.

  3. Integrated cosmological probes: concordance quantified

    Energy Technology Data Exchange (ETDEWEB)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch [Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, CH-8093 Zürich (Switzerland)

    2017-10-01

    Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
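The Surprise statistic compares an observed relative entropy with its expectation; the relative entropy ingredient itself has a closed form for Gaussian constraints. A one-dimensional sketch, with illustrative parameter values that are not from the paper:

```python
from math import log

def gaussian_relative_entropy(mu1, sigma1, mu2, sigma2):
    """Relative entropy D(P2 || P1) in bits between two 1-D Gaussian
    parameter constraints, e.g. from two different cosmological probes."""
    nats = (log(sigma1 / sigma2)
            + (sigma2 ** 2 + (mu2 - mu1) ** 2) / (2 * sigma1 ** 2)
            - 0.5)
    return nats / log(2)

# Identical constraints carry no information gain.
print(gaussian_relative_entropy(0.3, 0.02, 0.3, 0.02))  # 0.0
# A shifted, tighter constraint carries positive information.
print(gaussian_relative_entropy(0.3, 0.02, 0.32, 0.01))
```

In the Surprise construction, an observed relative entropy much larger than its expected value flags tension between the two datasets; values near the expectation indicate concordance, as found here.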

  4. Quantifying climate changes of the Common Era for Finland

    Science.gov (United States)

    Luoto, Tomi P.; Nevalainen, Liisa

    2017-10-01

    In this study, we aim to quantify summer air temperatures from sediment records from Southern, Central and Northern Finland over the past 2000 years. We use lake sediment archives to estimate paleotemperatures applying fossil Chironomidae assemblages and the transfer function approach. The enhanced Chironomidae-based temperature calibration set used here was validated in a 70-year high-resolution sediment record against instrumentally measured temperatures. Since the inferred and observed temperatures showed close correlation, we deduced that the new calibration model is reliable for reconstructions beyond the monitoring records. The 700-year temperature reconstructions from three sites at multi-decadal temporal resolution showed similar trends, although they differed in the timing of the cold Little Ice Age (LIA) and the initiation of recent warming. The 2000-year multi-centennial reconstructions from three different sites resembled one another, with clear signals of the Medieval Climate Anomaly (MCA) and LIA, but with differences in their timing. The influence of external forcing on the climate of the southern and central sites appeared to be complex at the decadal scale, but the North Atlantic Oscillation (NAO) was closely linked to the temperature development of the northern site. Solar activity appears to be synchronous with the temperature fluctuations at the multi-centennial scale at all the sites. The present study provides new insights into centennial and decadal variability in air temperature dynamics in Northern Europe and the external forcing behind these trends. These results are particularly useful in comparing regional responses and lags of temperature trends between different parts of Scandinavia.

  5. Parkinson's Law Quantified: Three Investigations on Bureaucratic Inefficiency

    OpenAIRE

    Klimek, Peter; Hanel, Rudolf; Thurner, Stefan

    2008-01-01

    We formulate three famous descriptive essays of C.N. Parkinson on bureaucratic inefficiency in a quantifiable and dynamical socio-physical framework. In the first model we show how recent opinion formation models for small groups can be used to understand Parkinson's observation that decision-making bodies such as cabinets or boards become highly inefficient once their size exceeds a critical 'Coefficient of Inefficiency', typically around 20. A second observation of Parkinson - w...

  6. Quantifying seasonal velocity at Khumbu Glacier, Nepal

    Science.gov (United States)

    Miles, E.; Quincey, D. J.; Miles, K.; Hubbard, B. P.; Rowan, A. V.

    2017-12-01

    While the low-gradient debris-covered tongues of many Himalayan glaciers exhibit low surface velocities, quantifying ice flow and its variation through time remains a key challenge for studies aimed at determining the long-term evolution of these glaciers. Recent work has suggested that glaciers in the Everest region of Nepal may show seasonal variability in surface velocity, with ice flow peaking during the summer as monsoon precipitation provides hydrological inputs and thus drives changes in subglacial drainage efficiency. However, satellite and aerial observations of glacier velocity during the monsoon are greatly limited due to cloud cover. Those that do exist do not span the period over which the most dynamic changes occur, and consequently short-term (i.e. daily) changes in flow, as well as the evolution of ice dynamics through the monsoon period, remain poorly understood. In this study, we combine field and remote (satellite image) observations to create a multi-temporal, 3D synthesis of ice deformation rates at Khumbu Glacier, Nepal, focused on the 2017 monsoon period. We first determine net annual and seasonal surface displacements for the whole glacier based on Landsat-8 (OLI) panchromatic data (15m) processed with ImGRAFT. We integrate inclinometer observations from three boreholes drilled by the EverDrill project to determine cumulative deformation at depth, providing a 3D perspective and enabling us to assess the role of basal sliding at each site. We additionally analyze high-frequency on-glacier L1 GNSS data from three sites to characterize variability within surface deformation at sub-seasonal timescales. Finally, each dataset is validated against repeat-dGPS observations at gridded points in the vicinity of the boreholes and GNSS dataloggers. 
These datasets complement one another to infer thermal regime across the debris-covered ablation area of the glacier, and emphasize the seasonal and spatial variability of ice deformation for glaciers in High

  7. Similarity analysis between quantum images

    Science.gov (United States)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is essential in quantum image processing, providing a foundation for other fields such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when two images are the same, and the measurement result has a high probability of being 1 when two images are different.

  8. Similarity flows in relativistic hydrodynamics

    International Nuclear Information System (INIS)

    Blaizot, J.P.; Ollitrault, J.Y.

    1986-01-01

    In ultra-relativistic heavy ion collisions, one expects in particular to observe a deconfinement transition leading to the formation of a quark-gluon plasma. In the framework of the hydrodynamic model, experimental signatures of such a plasma may be looked for as observable consequences of a first order transition on the evolution of the system. In most of the possible scenarios, the phase transition is accompanied by discontinuities in the hydrodynamic flow, such as shock waves. The method presented in this paper has been developed to treat such discontinuous flows without excessive numerical effort. It relies heavily on the use of similarity solutions of the hydrodynamic equations.

  9. Multi-Scale Scattering Transform in Music Similarity Measuring

    Science.gov (United States)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based method, stable under time deformation, which can be used to evaluate music similarity. Compared with dynamic time warping, it performs better at detecting similar audio signals under local time-frequency deformation. Multi-scale scattering combines scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.
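Dynamic time warping, the baseline named above, can be stated in a few lines. This is the standard textbook formulation for one-dimensional sequences, not anything specific to the paper:

```python
def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping distance between two sequences:
    the minimum cumulative pointwise cost over all monotone alignments."""
    INF = float("inf")
    n, m = len(x), len(y)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(x[i - 1], y[j - 1]) + min(
                D[i - 1][j],      # stretch x
                D[i][j - 1],      # stretch y
                D[i - 1][j - 1],  # match
            )
    return D[n][m]

a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 2, 1, 0, 0]  # same shape, locally time-deformed
print(dtw(a, b))  # 0.0: warping absorbs the local deformation
print(dtw(a, [3, 3, 3, 3, 3, 3]))  # 14.0: genuinely different signals
```

The scattering transform achieves a similar tolerance to local time deformation without the alignment search, which is part of the paper's case for it as an alternative.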

  10. A New Trajectory Similarity Measure for GPS Data

    KAUST Repository

    Ismail, Anas; Vigneron, Antoine E.

    2016-01-01

    We present a new algorithm for measuring the similarity between trajectories, and in particular between GPS traces. We call this new similarity measure the Merge Distance (MD). Our approach is robust against subsampling and supersampling. We perform experiments to compare this new similarity measure with the two main approaches that have been used so far: Dynamic Time Warping (DTW) and the Euclidean distance. © 2015 ACM.

  12. Neural basis for generalized quantifier comprehension.

    Science.gov (United States)

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  13. Quantifying Information Flow During Emergencies

    Science.gov (United States)

    Gao, Liang; Song, Chaoming; Gao, Ziyou; Barabási, Albert-László; Bagrow, James P.; Wang, Dashun

    2014-02-01

    Recent advances in human dynamics have focused on the normal patterns of human activities, while a quantitative understanding of human behavior under extreme events remains a crucial missing chapter. Such an understanding has a wide array of potential applications, ranging from emergency response and detection to traffic control and management. Previous studies have shown that human communications are both temporally and spatially localized following the onset of emergencies, indicating that social propagation is a primary means of spreading situational awareness. We study real anomalous events using country-wide mobile phone data, finding that information flow during emergencies is dominated by repeated communications. We further demonstrate that the observed communication patterns cannot be explained by the inherent reciprocity in social networks, and are universal across different demographics.

  14. How complex a dynamical network can be?

    International Nuclear Information System (INIS)

    Baptista, M.S.; Kakmeni, F. Moukam; Del Magno, Gianluigi; Hussein, M.S.

    2011-01-01

    Positive Lyapunov exponents measure the asymptotic exponential divergence of nearby trajectories of a dynamical system. Not only do they quantify how chaotic a dynamical system is, but, since their sum is an upper bound for the rate of information production, they also provide a convenient way to quantify the complexity of a dynamical network. We conjecture, based on numerical evidence, that for a large class of dynamical networks composed of equal nodes, the sum of the positive Lyapunov exponents is bounded by the sum of all the positive Lyapunov exponents of both the synchronization manifold and its transversal directions, the latter quantity being in principle easier to compute than the former. As applications of our conjecture we: (i) show that a dynamical network composed of equal, fully linearly connected nodes produces more information than similar networks whose nodes are connected with any other possible topology; (ii) show how one can calculate upper bounds for the information production of realistic networks whose nodes have randomly chosen parameter mismatches; (iii) discuss how to predict the behavior of a large dynamical network from the information provided by a system composed of only two coupled nodes.
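
    As a hedged, minimal illustration of the quantity this record is built on: for a single one-dimensional node, the Lyapunov exponent is the orbit average of log|f'(x)|. The logistic map and parameters below are standard textbook choices, not taken from the paper.

```python
import math

# Lyapunov exponent of the logistic map x -> r*x*(1-x), estimated as the
# orbit average of log|f'(x)| with f'(x) = r*(1 - 2x). At r = 4 the exact
# value is ln 2; at r = 3.2 the orbit is periodic and the exponent is negative.
def lyapunov_logistic(r, x0=0.1, n_transient=1000, n_iter=100000):
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = r * x * (1.0 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))  # chaotic: positive, near ln 2 ≈ 0.693
print(lyapunov_logistic(3.2))  # periodic: negative
```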

  15. Seniority bosons from similarity transformations

    International Nuclear Information System (INIS)

    Geyer, H.B.

    1986-01-01

    The requirement of associating, in the boson space, seniority with twice the number of non-s bosons defines a similarity transformation which re-expresses the Dyson pair-boson images in terms of seniority bosons. In particular, the fermion S-pair creation operator is mapped onto an operator which, unlike the pair-boson image, does not change the number of non-s bosons. The original results of Otsuka, Arima and Iachello are recovered by this procedure, while at the same time they are generalized to include g-bosons, or even bosons with J>4, as well as any higher-order boson terms. Furthermore, the seniority boson images are valid for an arbitrary number of d- or g-bosons, a result which is not readily obtainable within the framework of the usual Marumori or OAI methods.

  16. Determination of subjective similarity for pairs of masses and pairs of clustered microcalcifications on mammograms: Comparison of similarity ranking scores and absolute similarity ratings

    International Nuclear Information System (INIS)

    Muramatsu, Chisako; Li Qiang; Schmidt, Robert A.; Shiraishi, Junji; Suzuki, Kenji; Newstead, Gillian M.; Doi, Kunio

    2007-01-01

    The presentation of images that are similar to an unknown lesion seen on a mammogram may help radiologists correctly diagnose that lesion. For similar images to be useful, they must be quite similar from the radiologists' point of view. We have been trying to quantify the radiologists' impression of similarity for pairs of lesions and to establish a "gold standard" for the development and evaluation of a computerized scheme for selecting such similar images. However, it is considered difficult to determine similarity ratings reliably and accurately, because they are subjective. In this study, we compared the subjective similarities obtained by two different methods, an absolute rating method and a 2-alternative forced-choice (2AFC) method, to demonstrate that reliable similarity ratings can be determined from the responses of a group of radiologists. The absolute similarity ratings were previously obtained for pairs of masses and pairs of microcalcifications from five and nine radiologists, respectively. In this study, similarity ranking scores for eight pairs of masses and eight pairs of microcalcifications were determined by use of the 2AFC method. In the first session, the eight pairs of masses and eight pairs of microcalcifications were grouped and compared separately for determining the similarity ranking scores. In the second session, another similarity ranking score was determined by use of mixed pairs, i.e., by comparison of the similarity of a mass pair with that of a calcification pair. Four pairs of masses and four pairs of microcalcifications were grouped together to create two sets of eight pairs. The average absolute similarity ratings and the average similarity ranking scores showed very good correlations in the first study (Pearson's correlation coefficients: 0.94 and 0.98 for masses and microcalcifications, respectively). Moreover, in the second study, the correlations between the absolute ratings and the ranking scores were also
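
    The agreement statistic reported in this record, Pearson's correlation between the two sets of scores, can be sketched as follows; the score lists are hypothetical placeholders, not the study's data.

```python
import math

# Pearson's r between two sets of similarity scores for the same lesion pairs.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

absolute_ratings = [0.2, 0.35, 0.5, 0.55, 0.7, 0.8]    # hypothetical
ranking_scores = [0.1, 0.30, 0.45, 0.60, 0.75, 0.9]    # hypothetical
print(pearson_r(absolute_ratings, ranking_scores))
```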

  17. Quantifying Life Style Impact on Lifespan

    Directory of Open Access Journals (Sweden)

    Antonello Lorenzini

    2012-12-01

    A healthy diet, physical activity and avoiding dangerous habits such as smoking are effective ways of increasing health and lifespan. Although a significant portion of the world's population still suffers from malnutrition, especially children, the most common cause of death in the world today is non-communicable disease. Overweight and obesity significantly increase the relative risk for the most relevant non-communicable diseases: cardiovascular disease, type II diabetes and some cancers. Childhood overweight also seems to increase the likelihood of disease in adulthood through epigenetic mechanisms. This worrisome trend, now termed "globesity", will deeply impact society unless preventive strategies are put into effect. Researchers of the basic biology of aging have clearly established that animals with short lifespans live longer when their diet is calorie-restricted. Although similar experiments carried out on rhesus monkeys, a longer-lived species more closely related to humans, yielded mixed results, overall the available scientific data suggest that keeping the body mass index in the "normal" range increases the chances of living a longer, healthier life. This can be achieved both by maintaining a healthy diet and by engaging in physical activity. In this review we try to quantify the relative impact of lifestyle choices on lifespan.

  18. Alaska, Gulf spills share similarities

    International Nuclear Information System (INIS)

    Usher, D.

    1991-01-01

    The accidental Exxon Valdez oil spill in Alaska and the deliberate dumping of crude oil into the Persian Gulf as a tactic of war show both glaring differences and surprising similarities. Public reaction and response were much greater to the Exxon Valdez spill in pristine Prince William Sound than to the war-related tragedy in the Persian Gulf. More than 12,000 workers helped in the Alaskan cleanup; only 350 have been involved in Kuwait. But in both instances, environmental damage appears to be less than anticipated. Nature's highly effective self-cleansing action is primarily responsible for minimizing the damage. One positive outcome of the two incidents is increased international cooperation and participation in oil-spill cleanup efforts. In 1990, in the aftermath of the Exxon Valdez spill, 94 nations signed an international accord on cooperation in future spills. The spills can become historic environmental landmarks leading to the creation of more sophisticated response systems worldwide.

  19. Quantifying coordination among the rearfoot, midfoot, and forefoot segments during running.

    Science.gov (United States)

    Takabayashi, Tomoya; Edama, Mutsuaki; Yokoyama, Erika; Kanaya, Chiaki; Kubo, Masayoshi

    2018-03-01

    Because previous studies have suggested a relationship between injury risk and inter-segment coordination, quantifying coordination between segments is essential. Even though the midfoot and forefoot segments play important roles in dynamic tasks, previous studies have mostly focused on coordination between the shank and rearfoot segments. This study aimed to quantify coordination among the rearfoot, midfoot, and forefoot segments during running. Eleven healthy young men ran on a treadmill. The coupling angle, representing inter-segment coordination, was calculated using a modified vector coding technique and categorised into four coordination patterns. During the absorption phase, rearfoot-midfoot coordination in the frontal plane was mostly in-phase (rearfoot and midfoot eversion with similar amplitudes). The present study found that the eversion of the midfoot with respect to the rearfoot was comparable in magnitude to the eversion of the rearfoot with respect to the shank. A previous study has suggested that disruption of the coordination between internal rotation of the shank and eversion of the rearfoot leads to running injuries such as anterior knee pain. Thus, these data might be used in the future for comparison with individuals who have foot deformities or running injuries.
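
    A hedged sketch of the vector coding step described above: the coupling angle is the orientation of the vector between consecutive points on the proximal-angle vs. distal-angle plot, binned into one of four coordination patterns. The 45-degree bin boundaries follow a common convention, and the angle series is illustrative, not study data.

```python
import math

def coupling_angles(proximal, distal):
    """Coupling angle (degrees, 0-360) for each step of the angle-angle plot."""
    gammas = []
    for i in range(1, len(proximal)):
        dx = proximal[i] - proximal[i - 1]
        dy = distal[i] - distal[i - 1]
        gammas.append(math.degrees(math.atan2(dy, dx)) % 360.0)
    return gammas

def coordination_pattern(gamma):
    """Classify a coupling angle into one of four coordination patterns."""
    if 22.5 <= gamma < 67.5 or 202.5 <= gamma < 247.5:
        return "in-phase"
    if 112.5 <= gamma < 157.5 or 292.5 <= gamma < 337.5:
        return "anti-phase"
    if 67.5 <= gamma < 112.5 or 247.5 <= gamma < 292.5:
        return "distal-dominant"
    return "proximal-dominant"

rearfoot = [0.0, 1.0, 2.1, 3.0]   # eversion angles, illustrative
midfoot = [0.0, 1.1, 2.0, 3.1]    # similar amplitude -> mostly in-phase
print([coordination_pattern(g) for g in coupling_angles(rearfoot, midfoot)])
```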

  20. Quantifying the strength of quorum sensing crosstalk within microbial communities.

    Directory of Open Access Journals (Sweden)

    Kalinga Pavan T Silva

    2017-10-01

    In multispecies microbial communities, the exchange of signals such as acyl-homoserine lactones (AHLs) enables communication within and between species of Gram-negative bacteria. This process, commonly known as quorum sensing, aids in the regulation of genes crucial for the survival of species within heterogeneous populations of microbes. Although signal exchange has been studied extensively in well-mixed environments, less is known about the consequences of crosstalk in spatially distributed mixtures of species. Here, signaling dynamics were measured in a spatially distributed system containing multiple strains utilizing homologous signaling systems, and crosstalk between strains containing the lux, las and rhl AHL-receptor circuits was quantified. In a distributed population of microbes, the impact of community composition on spatio-temporal dynamics was characterized and compared to simulation results using a modified reaction-diffusion model. After introducing a single term to account for crosstalk between each pair of signals, the model was able to reproduce the activation patterns observed in experiments. We quantified the robustness of signal propagation in the presence of interacting signals, finding that signaling dynamics are largely robust to interference. The ability of several wild isolates to participate in AHL-mediated signaling was investigated, revealing distinct signatures of crosstalk for each species. Our results present a route to characterizing crosstalk between species and predicting systems-level signaling dynamics in multispecies communities.

  1. Similarity of Symbol Frequency Distributions with Heavy Tails

    Directory of Open Access Journals (Sweden)

    Martin Gerlach

    2016-04-01

    Quantifying the similarity between symbolic sequences is a traditional problem in information theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA over music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf’s law). The large number of low-frequency symbols in these distributions poses major difficulties for the estimation of the similarity between sequences; e.g., it hinders an accurate finite-size estimation of entropies. Here, we show analytically how the systematic (bias) and statistical (fluctuation) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α=1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small α’s, including α=1, the errors decay more slowly than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α^{*}=1+1/γ≤2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α spectrum of measures. We find that frequent words change more slowly than less frequent words and that α=2 provides the most robust measure to quantify language change.
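
    For concreteness, the plug-in (maximum-likelihood) estimate of the Jensen-Shannon divergence between two symbol sequences can be sketched as below; for heavy-tailed frequency distributions this naive estimator carries exactly the finite-size bias the record discusses.

```python
import math
from collections import Counter

def entropy(p):
    """Shannon entropy (bits) of a dict of probabilities."""
    return -sum(q * math.log(q, 2) for q in p.values() if q > 0)

def distribution(seq):
    """Maximum-likelihood (plug-in) symbol frequencies of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return {s: c / n for s, c in counts.items()}

def jensen_shannon(seq1, seq2):
    p, q = distribution(seq1), distribution(seq2)
    m = {s: 0.5 * p.get(s, 0.0) + 0.5 * q.get(s, 0.0) for s in set(p) | set(q)}
    return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

print(jensen_shannon("abab", "abab"))   # identical -> 0.0
print(jensen_shannon("aaaa", "bbbb"))   # disjoint -> 1 bit
```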

  2. Gait Recognition Using Image Self-Similarity

    Directory of Open Access Journals (Sweden)

    Chiraz BenAbdelkader

    2004-04-01

    Gait is one of the few biometrics that can be measured at a distance, and is hence useful for passive surveillance as well as biometric applications. Gait recognition research is still in its infancy, however, and we have yet to solve the fundamental issue of finding gait features which at once have sufficient discrimination power and can be extracted robustly and accurately from low-resolution video. This paper describes a novel gait recognition technique based on the image self-similarity of a walking person. We contend that the similarity plot encodes a projection of gait dynamics. It is also correspondence-free, robust to segmentation noise, and works well with low-resolution video. The method is tested on multiple data sets of varying sizes and degrees of difficulty. Performance is best for fronto-parallel viewpoints, whereby a recognition rate of 98% is achieved for a data set of 6 people, and 70% for a data set of 54 people.
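
    The self-similarity plot at the heart of this record can be sketched generically: given a per-frame feature vector sequence, entry (i, j) is the distance between frames i and j, and a periodic signal such as gait produces zero-distance bands one period apart. The feature values below are toy numbers, not silhouette features.

```python
import math

def self_similarity(frames):
    """Pairwise-distance matrix of a sequence of feature vectors."""
    n = len(frames)
    return [[math.dist(frames[i], frames[j]) for j in range(n)] for i in range(n)]

# A "walk" whose features repeat with period 3.
frames = [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)] * 3
S = self_similarity(frames)
# Frames one full period apart are identical, so those entries are zero.
print(S[0][3], S[1][4])  # 0.0 0.0
```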

  3. Quantifying the impacts of global disasters

    Science.gov (United States)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, large but not the

  4. Quantifying convergence in the sciences

    Directory of Open Access Journals (Sweden)

    Sara Lumbreras

    2016-02-01

    Traditional epistemological models classify knowledge into separate disciplines with different objects of study and specific techniques, with some frameworks even proposing hierarchies (such as Comte’s). According to thinkers such as John Holland or Teilhard de Chardin, the advancement of science involves the convergence of disciplines. This proposed convergence can be studied in a number of ways, such as how works impact research outside a specific area (citation networks) or how authors collaborate with researchers in different fields (collaboration networks). While these studies are delivering significant new insights, they cannot easily show the convergence of different topics within a body of knowledge. This paper attempts to address this question in a quantitative manner, searching for evidence that supports the idea of convergence in the content of the sciences themselves (that is, whether the sciences are dealing with increasingly the same topics). We use Latent Dirichlet Allocation (LDA), a technique that is able to analyze texts and estimate the relative contributions of the topics that were used to generate them. We apply this tool to the corpus of the Santa Fe Institute (SFI) working papers, which spans research on Complexity Science from 1989 to 2015. We then analyze the relatedness of the different research areas, the rise and demise of these sub-disciplines over time and, more broadly, the convergence of the research body as a whole. Combining the topic structure obtained from the collected publication history of the SFI community with techniques to infer hierarchy and clustering, we reconstruct a picture of a dynamic community which experiences trends, periodically recurring topics, and shifts in the closeness of scholarship over time. We find that there is support for convergence, and that the application of quantitative methods such as LDA to the study of knowledge can provide valuable insights that can help

  5. Quantifying forecast quality of IT business value

    NARCIS (Netherlands)

    Eveleens, J.L.; van der Pas, M.; Verhoef, C.

    2012-01-01

    This article discusses how to quantify the forecasting quality of IT business value. We address a common economic indicator often used to determine the business value of project proposals, the Net Present Value (NPV). To quantify the forecasting quality of IT business value, we develop a generalized
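
    The economic indicator named in this record, the Net Present Value of a forecast cash-flow stream at a given discount rate, can be sketched in a few lines; the project figures below are illustrative only.

```python
# Net Present Value: discount each forecast cash flow back to today.
def npv(rate, cashflows):
    """cashflows[0] occurs now, cashflows[t] after t periods."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# A hypothetical IT project: invest 100 now, expect 40 per year for 3 years.
forecast = [-100.0, 40.0, 40.0, 40.0]
print(npv(0.10, forecast))  # slightly negative at a 10% discount rate
```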

  6. Quantifying human vitamin kinetics using AMS

    Energy Technology Data Exchange (ETDEWEB)

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases through proper consumption throughout youth and adulthood. Neither the target dose nor the target population is available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom range over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase are similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable- or radio-isotope quantitation consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the simplest models.

  7. Quantifying antimicrobial resistance at veal calf farms.

    Directory of Open Access Journals (Sweden)

    Angela B Bosman

    This study was performed to determine a sampling strategy for quantifying the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From each individual sample and from one pooled faecal sample per farm, 90 selected Escherichia coli isolates were tested for resistance against 25 mg/L amoxicillin, 25 mg/L tetracycline, 0.5 mg/L cefotaxime, 0.125 mg/L ciprofloxacin and 8/152 mg/L trimethoprim/sulfamethoxazole (tmp/s) by replica plating. From each faecal sample, another 10 selected E. coli isolates were tested for resistance by broth microdilution as a reference. Logistic regression analysis was performed to compare the odds of an isolate testing resistant between the two methods (replica plating vs. broth microdilution) and to evaluate the effect of pooling faecal samples. Bootstrap analysis was used to investigate the precision of the estimated prevalence of resistance to each antimicrobial obtained by several simulated sampling strategies. Replica plating showed odds of E. coli isolates testing resistant similar to broth microdilution, except for ciprofloxacin (OR 0.29, p ≤ 0.05). Pooled samples showed in general lower odds of an isolate being resistant compared to individual samples, although these differences were not significant. Bootstrap analysis showed that, within each antimicrobial, the various compositions of a pooled sample provided consistent estimates of the mean proportion of resistant isolates. Sampling strategies should be based on the variation in resistance among isolates within faecal samples and between faecal samples, which may vary by antimicrobial. In our study, the optimal sampling strategy from the perspective of precision of the estimated levels of resistance and practicality consists of a pooled faecal sample from 20 individual animals, of which
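
    A hedged sketch of the bootstrap step described above: resample isolates with replacement to gauge the precision of an estimated resistance proportion. The data are simulated (27 of 90 isolates resistant), not the study's isolates.

```python
import random

def bootstrap_proportion(outcomes, n_boot=2000, seed=42):
    """Point estimate and 95% percentile interval for a proportion of 0/1 outcomes."""
    rng = random.Random(seed)
    n = len(outcomes)
    stats = []
    for _ in range(n_boot):
        resample = [outcomes[rng.randrange(n)] for _ in range(n)]  # with replacement
        stats.append(sum(resample) / n)
    stats.sort()
    return sum(outcomes) / n, (stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)])

# 90 isolates, 1 = resistant to a given antimicrobial (simulated)
isolates = [1] * 27 + [0] * 63
est, (lo, hi) = bootstrap_proportion(isolates)
print(est, lo, hi)
```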

  8. Cross-linguistic patterns in the acquisition of quantifiers

    Science.gov (United States)

    Cummins, Chris; Gavarró, Anna; Kuvač Kraljević, Jelena; Hrzica, Gordana; Grohmann, Kleanthes K.; Skordi, Athina; Jensen de López, Kristine; Sundahl, Lone; van Hout, Angeliek; Hollebrandse, Bart; Overweg, Jessica; Faber, Myrthe; van Koert, Margreet; Smith, Nafsika; Vija, Maigi; Zupping, Sirli; Kunnari, Sari; Morisseau, Tiffany; Rusieshvili, Manana; Yatsushiro, Kazuko; Fengler, Anja; Varlokosta, Spyridoula; Konstantzou, Katerina; Farby, Shira; Guasti, Maria Teresa; Vernice, Mirta; Okabe, Reiko; Isobe, Miwa; Crosthwaite, Peter; Hong, Yoonjee; Balčiūnienė, Ingrida; Ahmad Nizar, Yanti Marina; Grech, Helen; Gatt, Daniela; Cheong, Win Nee; Asbjørnsen, Arve; Torkildsen, Janne von Koss; Haman, Ewa; Miękisz, Aneta; Gagarina, Natalia; Puzanova, Julia; Anđelković, Darinka; Savić, Maja; Jošić, Smiljana; Slančová, Daniela; Kapalková, Svetlana; Barberán, Tania; Özge, Duygu; Hassan, Saima; Chan, Cecilia Yuet Hung; Okubo, Tomoya; van der Lely, Heather; Sauerland, Uli; Noveck, Ira

    2016-01-01

    Learners of most languages are faced with the task of acquiring words to talk about number and quantity. Much is known about the order of acquisition of number words as well as the cognitive and perceptual systems and cultural practices that shape it. Substantially less is known about the acquisition of quantifiers. Here, we consider the extent to which systems and practices that support number word acquisition can be applied to quantifier acquisition and conclude that the two domains are largely distinct in this respect. Consequently, we hypothesize that the acquisition of quantifiers is constrained by a set of factors related to each quantifier’s specific meaning. We investigate competence with the expressions for “all,” “none,” “some,” “some…not,” and “most” in 31 languages, representing 11 language types, by testing 768 5-y-old children and 536 adults. We found a cross-linguistically similar order of acquisition of quantifiers, explicable in terms of four factors relating to their meaning and use. In addition, exploratory analyses reveal that language- and learner-specific factors, such as negative concord and gender, are significant predictors of variation. PMID:27482119

  9. Bare quantifier fronting as contrastive topicalization

    Directory of Open Access Journals (Sweden)

    Ion Giurgea

    2015-11-01

    I argue that indefinites (in particular bare quantifiers such as ‘something’, ‘somebody’, etc.), which are neither existentially presupposed nor in the restriction of a quantifier over situations, can undergo topicalization in a number of Romance languages (Catalan, Italian, Romanian, Spanish), but only if the sentence contains “verum” focus, i.e. focus on a high degree of certainty of the sentence. I analyze these indefinites as contrastive topics, using Büring’s (1999) theory (where the term ‘S-topic’ is used for what I call ‘contrastive topic’). I propose that the topic is evaluated in relation to a scalar set including generalized quantifiers such as {λP.∃x P(x), λP.MANY x P(x), λP.MOST x P(x), λP.∀x P(x)} or {λP.∃x P(x), λP.P(a), λP.P(b), …}, and that the contrastive topic is the weakest generalized quantifier in this set. The verum focus, which is part of the “comment” that co-occurs with the “topic”, introduces a set of alternatives including degrees of certainty of the assertion. The speaker asserts that his claim is certainly true or highly probable, contrasting it with stronger claims for which the degree of probability is unknown. This explains the observation that in downward-entailing contexts, the fronted quantified DPs are headed by ‘all’ or ‘many’, whereas ‘some’, small numbers or ‘at least n’ appear in upward-entailing contexts. Unlike other cases of non-specific topics, which are property topics, these are quantifier topics: the topic part is a generalized quantifier, the comment is a property of generalized quantifiers. This explains the narrow scope of the fronted quantified DP.

  10. Development of similarity theory for control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Evtushenko, V. F.; Ivushkin, K. A.; Makarov, G. V.

    2018-05-01

    The area of effective application of the traditional similarity theory and the necessity of its development for control systems are discussed. The main statements underlying the similarity theory of control systems are given. The conditions for the similarity of control systems and the need for similarity control are formulated. Methods and algorithms for estimating and controlling the similarity of control systems, and the results of research on control systems based on their similarity, are presented. Similarity control of systems includes the current evaluation of the degree of similarity of control systems, the development of actions controlling similarity, and the corresponding targeted change in the state of any element of the control systems.

  11. Quantifying DNA melting transitions using single-molecule force spectroscopy

    International Nuclear Information System (INIS)

    Calderon, Christopher P; Chen, W-H; Harris, Nolan C; Kiang, C-H; Lin, K-J

    2009-01-01

    We stretched a DNA molecule using an atomic force microscope (AFM) and quantified the mechanical properties associated with B and S forms of double-stranded DNA (dsDNA), molten DNA, and single-stranded DNA. We also fit overdamped diffusion models to the AFM time series and used these models to extract additional kinetic information about the system. Our analysis provides additional evidence supporting the view that S-DNA is a stable intermediate encountered during dsDNA melting by mechanical force. In addition, we demonstrated that the estimated diffusion models can detect dynamical signatures of conformational degrees of freedom not directly observed in experiments.

  12. Quantifying DNA melting transitions using single-molecule force spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, Christopher P [Department of Computational and Applied Mathematics, Rice University, Houston, TX (United States); Chen, W-H; Harris, Nolan C; Kiang, C-H [Department of Physics and Astronomy, Rice University, Houston, TX (United States); Lin, K-J [Department of Chemistry, National Chung Hsing University, Taichung, Taiwan (China)], E-mail: chkiang@rice.edu

    2009-01-21

    We stretched a DNA molecule using an atomic force microscope (AFM) and quantified the mechanical properties associated with B and S forms of double-stranded DNA (dsDNA), molten DNA, and single-stranded DNA. We also fit overdamped diffusion models to the AFM time series and used these models to extract additional kinetic information about the system. Our analysis provides additional evidence supporting the view that S-DNA is a stable intermediate encountered during dsDNA melting by mechanical force. In addition, we demonstrated that the estimated diffusion models can detect dynamical signatures of conformational degrees of freedom not directly observed in experiments.

  13. Multidimensional Scaling Visualization Using Parametric Similarity Indices

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado

    2015-03-01

Full Text Available In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, and we generate the corresponding MDS maps of ‘points’. Third, we use Procrustes analysis to linearly transform the MDS charts for maximum superposition and to build a global MDS map of “shapes”. This final plot captures the time evolution of the phenomena and is sensitive to the PSI adopted. The generalized correlation, the Minkowski distance and four entropy-based indices are tested. The proposed approach is applied to the Dow Jones Industrial Average stock market index and the Europe Brent Spot Price FOB time-series.
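The MDS-plus-Procrustes pipeline lends itself to a compact sketch. The following is an illustrative minimal version in NumPy (classical MDS and orthogonal Procrustes only); the paper's sliding windows and parametric similarity indices are omitted:

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) MDS: embed an n-by-n distance matrix in k dims."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J            # double-centered Gram matrix
    top = None
    w, v = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:k]             # k largest eigenvalues
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))

def procrustes_align(X, Y):
    """Orthogonally rotate/reflect centered Y for maximum superposition with X."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    return Yc @ (U @ Vt)                      # optimal orthogonal map of Y

rng = np.random.default_rng(1)
P = rng.normal(size=(10, 2))                  # "true" map of 10 windows
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)   # pairwise distances
X = classical_mds(D)                          # recovered up to rotation/shift
aligned = procrustes_align(P, X)              # superposed onto P
```

Since classical MDS recovers a Euclidean configuration only up to translation and an orthogonal transform, the Procrustes step is what makes charts from different windows comparable.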

  14. Quantify Risk to Manage Cost and Schedule

    National Research Council Canada - National Science Library

    Raymond, Fred

    1999-01-01

    Too many projects suffer from unachievable budget and schedule goals, caused by unrealistic estimates and the failure to quantify and communicate the uncertainty of these estimates to managers and sponsoring executives...

  15. Quantifying drug-protein binding in vivo

    International Nuclear Information System (INIS)

    Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

    2004-01-01

Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ('micro-') dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

  16. New frontiers of quantified self 3

    DEFF Research Database (Denmark)

    Rapp, Amon; Cena, Federica; Kay, Judy

    2017-01-01

    Quantified Self (QS) field needs to start thinking of how situated needs may affect the use of self-tracking technologies. In this workshop we will focus on the idiosyncrasies of specific categories of users....

  17. Systematic characterizations of text similarity in full text biomedical publications.

    Science.gov (United States)

    Sun, Zhaohui; Errami, Mounir; Long, Tara; Renard, Chris; Choradia, Nishant; Garner, Harold

    2010-09-15

    Computational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central. 72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively). While quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when
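eTBLAST's scoring is its own algorithm; as a hypothetical stand-in, a token-level Jaccard ratio illustrates the kind of [0, 1] similarity score being thresholded in such studies:

```python
def similarity_ratio(text_a, text_b):
    """Token-level Jaccard ratio in [0, 1]: shared vocabulary over
    combined vocabulary. (A crude stand-in, not eTBLAST's score.)"""
    a = set(text_a.lower().split())
    b = set(text_b.lower().split())
    return len(a & b) / len(a | b)

# identical passages score 1.0; passages with no shared tokens score 0.0
```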

  18. A space-efficient algorithm for local similarities.

    Science.gov (United States)

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
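The linear-space idea can be illustrated for the score-only case: a Smith-Waterman-style recurrence that keeps just the previous row. Recovering the alignments themselves in linear space, as the paper does, needs additional divide-and-conquer machinery; the scoring parameters below are illustrative:

```python
def local_similarity_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment (Smith-Waterman) score in O(len(b)) space:
    only one row of the DP matrix is retained at a time."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            v = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
            curr.append(v)
            best = max(best, v)
        prev = curr
    return best

# a shared ACGT core scores 4 matches * 2 regardless of flanking noise
```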

  19. Marriage Matters: Spousal Similarity in Life Satisfaction

    OpenAIRE

    Ulrich Schimmack; Richard Lucas

    2006-01-01

Examined the concurrent and cross-lagged spousal similarity in life satisfaction over a 21-year period. Analyses were based on married couples (N = 847) in the German Socio-Economic Panel (SOEP). Concurrent spousal similarity was considerably higher than one-year retest similarity, revealing spousal similarity in the variable component of life satisfaction. Spousal similarity systematically decreased with length of retest interval, revealing similarity in the changing component of life sati...

  20. Quantifying Variability in Growth and Thermal Inactivation Kinetics of Lactobacillus plantarum.

    Science.gov (United States)

    Aryani, D C; den Besten, H M W; Zwietering, M H

    2016-08-15

    The presence and growth of spoilage organisms in food might affect the shelf life. In this study, the effects of experimental, reproduction, and strain variabilities were quantified with respect to growth and thermal inactivation using 20 Lactobacillus plantarum strains. Also, the effect of growth history on thermal resistance was quantified. The strain variability in μmax was similar (P > 0.05) to reproduction variability as a function of pH, aw, and temperature, while being around half of the reproduction variability (P plantarum strains, and the pHmin was between 3.2 and 3.5, the aw,min was between 0.936 and 0.953, the [HLamax], at pH 4.5, was between 29 and 38 mM, and the Tmin was between 3.4 and 8.3°C. The average D values ranged from 0.80 min to 19 min at 55°C, 0.22 to 3.9 min at 58°C, 3.1 to 45 s at 60°C, and 1.8 to 19 s at 63°C. In contrast to growth, the strain variability in thermal resistance was on average six times higher than the reproduction variability and more than ten times higher than the experimental variability. The strain variability was also 1.8 times higher (P 10-log10 differences after thermal treatment. Accurate control and realistic prediction of shelf life is complicated by the natural diversity among microbial strains, and limited information on microbiological variability is available for spoilage microorganisms. Therefore, the objectives of the present study were to quantify strain variability, reproduction (biological) variability, and experimental variability with respect to the growth and thermal inactivation kinetics of Lactobacillus plantarum and to quantify the variability in thermal resistance attributed to growth history. The quantitative knowledge obtained on experimental, reproduction, and strain variabilities can be used to improve experimental designs and to adequately select strains for challenge growth and inactivation tests. Moreover, the integration of strain variability in prediction of microbial growth and
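The D values reported above come from log-linear survivor curves; the decimal reduction time is the negative reciprocal of the slope of log10 N versus time. A minimal sketch with hypothetical data (not the study's measurements):

```python
def d_value(times, log10_counts):
    """Decimal reduction time D from a log-linear survivor curve:
    least-squares slope of log10(N) versus time, D = -1/slope."""
    n = len(times)
    mt, mc = sum(times) / n, sum(log10_counts) / n
    slope = (sum((t - mt) * (c - mc) for t, c in zip(times, log10_counts))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope

# hypothetical survivor counts dropping one log10 every 2 min
times = [0, 2, 4, 6, 8]
logs = [7.0, 6.0, 5.0, 4.0, 3.0]
# d_value(times, logs) = 2.0 min
```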

  1. Personality Similarity between Teachers and Their Students Influences Teacher Judgement of Student Achievement

    Science.gov (United States)

    Rausch, Tobias; Karing, Constance; Dörfler, Tobias; Artelt, Cordula

    2016-01-01

    This study examined personality similarity between teachers and their students and its impact on teacher judgement of student achievement in the domains of reading comprehension and mathematics. Personality similarity was quantified through intraclass correlations between personality characteristics of 409 dyads of German teachers and their…

  2. Disordered crystals from first principles I: Quantifying the configuration space

    Science.gov (United States)

    Kühne, Thomas D.; Prodan, Emil

    2018-04-01

    This work represents the first chapter of a project on the foundations of first-principle calculations of the electron transport in crystals at finite temperatures. We are interested in the range of temperatures, where most electronic components operate, that is, room temperature and above. The aim is a predictive first-principle formalism that combines ab-initio molecular dynamics and a finite-temperature Kubo-formula for homogeneous thermodynamic phases. The input for this formula is the ergodic dynamical system (Ω , G , dP) defining the thermodynamic crystalline phase, where Ω is the configuration space for the atomic degrees of freedom, G is the space group acting on Ω and dP is the ergodic Gibbs measure relative to the G-action. The present work develops an algorithmic method for quantifying (Ω , G , dP) from first principles. Using the silicon crystal as a working example, we find the Gibbs measure to be extremely well characterized by a multivariate normal distribution, which can be quantified using a small number of parameters. The latter are computed at various temperatures and communicated in the form of a table. Using this table, one can generate large and accurate thermally-disordered atomic configurations to serve, for example, as input for subsequent simulations of the electronic degrees of freedom.
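Characterizing a configuration distribution by a multivariate normal and then sampling from it can be sketched with NumPy; the toy covariance below is hypothetical, standing in for the moments the paper estimates from ab-initio molecular dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical MD snapshots: displacements of 4 atomic coordinates
true_cov = np.array([[2.0, 0.5, 0.0, 0.0],
                     [0.5, 1.0, 0.2, 0.0],
                     [0.0, 0.2, 1.0, 0.3],
                     [0.0, 0.0, 0.3, 0.5]])
samples = rng.multivariate_normal(np.zeros(4), true_cov, size=20_000)

# quantify the Gibbs measure by its first two moments ...
mu = samples.mean(axis=0)
cov = np.cov(samples.T)

# ... then generate fresh thermally-disordered configurations from them
new_configs = rng.multivariate_normal(mu, cov, size=1000)
```

The small parameter table the paper communicates plays the role of `mu` and `cov` here: once estimated at a given temperature, arbitrarily many disordered configurations can be drawn for downstream electronic-structure simulations.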

  3. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

Full Text Available Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407), compared with the likely overestimated 319 years obtained with the commonly used exponential model.
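For the exponential model, the censoring-aware maximum-likelihood estimate of the fire cycle has a closed form: total time at risk divided by the number of dated fires. A minimal sketch with hypothetical intervals (not the study's data):

```python
def exponential_fire_cycle(times, events):
    """MLE of the fire cycle (mean fire return interval) under an
    exponential survival model with right censoring: total time at
    risk divided by the number of observed fires."""
    total_exposure = sum(times)
    n_fires = sum(events)   # events[i] = 1 if a fire was dated, 0 if censored
    return total_exposure / n_fires

# hypothetical intervals (years) with censoring flags
times = [120, 300, 85, 210, 400, 150]
events = [1, 0, 1, 1, 0, 1]
# fire cycle estimate = 1265 / 4 = 316.25 years
```

Censored intervals (stands that never burned within the record) lengthen the numerator without adding events, which is exactly the information that would be lost by discarding them.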

  4. On different forms of self similarity

    International Nuclear Information System (INIS)

    Aswathy, R.K.; Mathew, Sunil

    2016-01-01

Fractal geometry is mainly based on the idea of self-similar forms. To be self-similar, a shape must be able to be divided into parts that are smaller copies, which are more or less similar to the whole. There are different forms of self similarity in nature and mathematics. In this paper, some of the topological properties of super self similar sets are discussed. It is proved that in a complete metric space with two or more elements, the set of all non super self similar sets is dense in the set of all non-empty compact subsets. It is also proved that products of self similar sets are super self similar in product metric spaces and that super self similarity is preserved under isometry. A characterization of super self similar sets using contracting sub self similarity is also presented. Some relevant counterexamples are provided. The concepts of exact super and sub self similarity are introduced and a necessary and sufficient condition for a set to be exact super self similar in terms of condensation iterated function systems (Condensation IFS’s) is obtained. A method to generate exact sub self similar sets using condensation IFS’s and the denseness of exact super self similar sets are also discussed.

  5. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

Full Text Available Abstract Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
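A toy version of the distance-based construction can be sketched as follows; the element count, the eigen-selection, and the averaging of correlations are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np

def interaction_matrix(centroids):
    """Distance-based interaction matrix between secondary-structure
    element centroids (one row/column per element)."""
    c = np.asarray(centroids, float)
    return np.linalg.norm(c[:, None] - c[None, :], axis=-1)

def pcc_similarity(M1, M2, k=3):
    """Mean absolute correlation between the k leading principal
    components of two interaction matrices (equal element counts
    assumed; eigenvector signs are arbitrary, hence abs)."""
    def leading(M):
        w, v = np.linalg.eigh(M)
        return v[:, np.argsort(np.abs(w))[::-1][:k]]
    V1, V2 = leading(M1), leading(M2)
    return float(np.mean([abs(np.corrcoef(V1[:, i], V2[:, i])[0, 1])
                          for i in range(k)]))

rng = np.random.default_rng(0)
C = rng.normal(size=(8, 3))                        # 8 element centroids
th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
M1, M2 = interaction_matrix(C), interaction_matrix(C @ R.T)
# a rigid rotation leaves distances, and hence the similarity, unchanged
```

Because the interaction matrix is built from internal distances, it is invariant under superposition, which is what lets the correlation of its components compare folds without any RMSD-style alignment.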

  6. An Energy-Based Similarity Measure for Time Series

    Directory of Open Access Journals (Sweden)

    Pierre Brunagel

    2007-11-01

Full Text Available A new similarity measure, called SimilB, for time series analysis, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series and particularly those presenting discontinuities. Some new properties of ΨB are presented. Particularly, we show that ΨB as a similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.
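A rough discrete sketch of a ΨB-style, derivative-based similarity; the difference scheme and the normalization below are assumptions for illustration, not the paper's exact definitions. Note how the scale-robustness claimed in the abstract falls out of the normalization:

```python
import numpy as np

def cross_psi_b(x, y):
    """Symmetric cross-energy x'y' - (x*y'' + y*x'')/2, using discrete
    first/second differences (the discretization is an assumption)."""
    dx, dy = np.gradient(x), np.gradient(y)
    d2x, d2y = np.gradient(dx), np.gradient(dy)
    return dx * dy - 0.5 * (x * d2y + y * d2x)

def similb(x, y):
    """Cauchy-Schwarz-style normalized cross-energy (normalization
    assumed, by analogy with a correlation coefficient)."""
    num = np.abs(cross_psi_b(x, y)).sum()
    den = np.sqrt(np.abs(cross_psi_b(x, x)).sum()
                  * np.abs(cross_psi_b(y, y)).sum())
    return float(num / den)

t = np.linspace(0, 4 * np.pi, 400)
x = np.sin(t)
# similb(x, x) == 1, and the normalization makes the measure
# insensitive to amplitude scaling: similb(x, 3*x) == 1 as well
```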

  7. Density-based similarity measures for content based search

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory

    2009-01-01

We consider the query by multiple example problem where the goal is to identify database samples whose content is similar to a collection of query samples. To assess the similarity we use a relative content density which quantifies the relative concentration of the query distribution to the database distribution. If the database distribution is a mixture of the query distribution and a background distribution then it can be shown that database samples whose relative content density is greater than a particular threshold ρ are more likely to have been generated by the query distribution than the background distribution. We describe an algorithm for predicting samples with relative content density greater than ρ that is computationally efficient and possesses strong performance guarantees. We also show empirical results for applications in computer network monitoring and image segmentation.
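The relative content density idea can be sketched in one dimension with kernel density estimates; the Gaussian kernel, bandwidth, and synthetic data below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def gaussian_kde_1d(samples, h):
    """Plain Gaussian kernel density estimate with bandwidth h."""
    s = np.asarray(samples, float)
    def pdf(x):
        x = np.atleast_1d(np.asarray(x, float))
        z = (x[:, None] - s[None, :]) / h
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(s) * h * np.sqrt(2 * np.pi))
    return pdf

def relative_content_density(db, query, h=0.3):
    """Ratio of the query density to the database density at each
    database sample; large values flag query-like content."""
    f_q, f_db = gaussian_kde_1d(query, h), gaussian_kde_1d(db, h)
    return f_q(db) / f_db(db)

# database = mixture of query-like content (near 0) and background (near 5)
rng = np.random.default_rng(2)
query = rng.normal(0.0, 1.0, 500)
db = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
rho = relative_content_density(db, query)
# the first 300 (query-like) samples receive much larger ratios
```

Thresholding `rho` then separates query-like database samples from background, which is the mixture argument the abstract describes.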

  8. Branch length similarity entropy-based descriptors for shape representation

    Science.gov (United States)

    Kwon, Ohsung; Lee, Sang-Hee

    2017-11-01

In previous studies, we showed that the branch length similarity (BLS) entropy profile could be successfully used for shape recognition of, for example, battle tanks, facial expressions, and butterflies. In the present study, we proposed new descriptors, roundness, symmetry, and surface roughness, for the recognition, which are more accurate and faster to compute than the previous descriptors. The roundness represents how closely a shape resembles a circle, the symmetry characterizes how similar one shape is to its flipped counterpart, and the surface roughness quantifies the degree of vertical deviations of a shape boundary. To evaluate the performance of the descriptors, we used a database of leaf images from 12 species. Each species consisted of 10–20 leaf images, and the total number of images was 160. The evaluation showed that the new descriptors successfully discriminated the leaf species. We believe that the descriptors can be a useful tool in the field of pattern recognition.
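For comparison, a classical (non-BLS) roundness measure: the isoperimetric ratio 4πA/P², which is 1 for a circle and smaller for elongated or rough shapes. This is a conventional baseline only, not the paper's entropy-based definition:

```python
import math

def polygon_roundness(pts):
    """Isoperimetric roundness 4*pi*A / P**2: equals 1 for a circle and
    decreases for elongated or rough boundaries. (A conventional
    baseline, not the paper's BLS-entropy descriptor.)"""
    n = len(pts)
    area = 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                         - pts[(i + 1) % n][0] * pts[i][1] for i in range(n)))
    perim = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))
    return 4 * math.pi * area / perim ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# roundness of a square = pi/4, roughly 0.785
```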

  9. Large margin classification with indefinite similarities

    KAUST Repository

    Alabdulmohsin, Ibrahim; Cisse, Moustapha; Gao, Xin; Zhang, Xiangliang

    2016-01-01

    Classification with indefinite similarities has attracted attention in the machine learning community. This is partly due to the fact that many similarity functions that arise in practice are not symmetric positive semidefinite, i.e. the Mercer

  10. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji; Genton, Marc G.; Jun, Mikyoung

    2016-01-01

    extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi

  11. Personality similarity and life satisfaction in couples

    OpenAIRE

    Furler Katrin; Gomez Veronica; Grob Alexander

    2013-01-01

    The present study examined the association between personality similarity and life satisfaction in a large nationally representative sample of 1608 romantic couples. Similarity effects were computed for the Big Five personality traits as well as for personality profiles with global and differentiated indices of similarity. Results showed substantial actor and partner effects indicating that both partners' personality traits were related to both partners' life satisfaction. Personality similar...

  12. Quantifying graininess of glossy food products

    DEFF Research Database (Denmark)

    Møller, Flemming; Carstensen, Jens Michael

    The sensory quality of yoghurt can be altered when changing the milk composition or processing conditions. Part of the sensory quality may be assessed visually. It is described how a non-contact method for quantifying surface gloss and grains in yoghurt can be made. It was found that the standard...

  13. Quantifying antimicrobial resistance at veal calf farms

    NARCIS (Netherlands)

    Bosman, A.B.; Wagenaar, J.A.; Stegeman, A.; Vernooij, H.; Mevius, D.J.

    2012-01-01

    This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From

  14. QS Spiral: Visualizing Periodic Quantified Self Data

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Cuttone, Andrea; Jørgensen, Sune Lehmann

    2013-01-01

    In this paper we propose an interactive visualization technique QS Spiral that aims to capture the periodic properties of quantified self data and let the user explore those recurring patterns. The approach is based on time-series data visualized as a spiral structure. The interactivity includes ...

  15. Quantifying recontamination through factory environments - a review

    NARCIS (Netherlands)

    Asselt-den Aantrekker, van E.D.; Boom, R.M.; Zwietering, M.H.; Schothorst, van M.

    2003-01-01

    Recontamination of food products can be the origin of foodborne illnesses and should therefore be included in quantitative microbial risk assessment (MRA) studies. In order to do this, recontamination should be quantified using predictive models. This paper gives an overview of the relevant

  16. Quantifying quantum coherence with quantum Fisher information.

    Science.gov (United States)

    Feng, X N; Wei, L F

    2017-11-14

Quantum coherence is one of the oldest and most important concepts in quantum mechanics, and it is now regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has received attention only recently (see, e.g., Baumgratz et al., PRL 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under the typical incoherent operations and convexity under the mixing of quantum states. Differing from most of the purely axiomatic methods, quantifying quantum coherence by QFI is experimentally testable, as the bound of the QFI is practically measurable. The validity of our proposal is specifically demonstrated with the typical phase-damping and depolarizing evolution processes of a generic single-qubit state, and also by comparing it with the other quantifying methods proposed previously.

  17. Interbank exposures: quantifying the risk of contagion

    OpenAIRE

    C. H. Furfine

    1999-01-01

    This paper examines the likelihood that failure of one bank would cause the subsequent collapse of a large number of other banks. Using unique data on interbank payment flows, the magnitude of bilateral federal funds exposures is quantified. These exposures are used to simulate the impact of various failure scenarios, and the risk of contagion is found to be economically small.

  18. Quantifying Productivity Gains from Foreign Investment

    NARCIS (Netherlands)

    C. Fons-Rosen (Christian); S. Kalemli-Ozcan (Sebnem); B.E. Sorensen (Bent); C. Villegas-Sanchez (Carolina)

    2013-01-01

    textabstractWe quantify the causal effect of foreign investment on total factor productivity (TFP) using a new global firm-level database. Our identification strategy relies on exploiting the difference in the amount of foreign investment by financial and industrial investors and simultaneously

  19. Power Curve Measurements, quantify the production increase

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

The purpose of this report is to quantify the production increase on a given turbine with respect to another given turbine. The methodology used is the “side by side” comparison method, provided by the client. This method involves the use of two neighboring turbines and it is based...

  20. Quantifying capital goods for waste landfilling

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Stentsøe, Steen; Willumsen, Hans Christian

    2013-01-01

Materials and energy used for construction of a hill-type landfill of 4 million m3 were quantified in detail. The landfill is engineered with a liner and leachate collection system, as well as a gas collection and control system. Gravel and clay were the most common materials used, amounting...

  1. Quantifying interspecific coagulation efficiency of phytoplankton

    DEFF Research Database (Denmark)

    Hansen, J.L.S.; Kiørboe, Thomas

    1997-01-01

. nordenskjoeldii. Mutual coagulation between Skeletonema costatum and the non-sticky cells of Ditylum brightwellii also proceeded with half the efficiency of S. costatum alone. The latex beads were suitable to be used as 'standard particles' to quantify the ability of phytoplankton to prime aggregation...

  2. New frontiers of quantified self 2

    DEFF Research Database (Denmark)

    Rapp, Amon; Cena, Federica; Kay, Judy

    2016-01-01

    While the Quantified Self (QS) community is described in terms of "self-knowledge through numbers" people are increasingly demanding value and meaning. In this workshop we aim at refocusing the QS debate on the value of data for providing new services....

  3. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  4. Reliability-How to Quantify and Improve?

    Indian Academy of Sciences (India)

Reliability – How to Quantify and Improve? Improving the Health of Products. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 55–63.

  5. A simplified score to quantify comorbidity in COPD.

    Directory of Open Access Journals (Sweden)

    Nirupama Putcha

Full Text Available Comorbidities are common in COPD, but quantifying their burden is difficult. Currently there is a COPD-specific comorbidity index to predict mortality and another to predict general quality of life. We sought to develop and validate a COPD-specific comorbidity score that reflects comorbidity burden on patient-centered outcomes. Using the COPDGene study (GOLD II-IV COPD), we developed comorbidity scores to describe patient-centered outcomes employing three techniques: (1) simple count, (2) weighted score, and (3) weighted score based upon statistical selection procedure. We tested associations, area under the curve (AUC) and calibration statistics to validate scores internally with outcomes of respiratory disease-specific quality of life (St. George's Respiratory Questionnaire, SGRQ), six minute walk distance (6MWD), modified Medical Research Council (mMRC) dyspnea score and exacerbation risk, ultimately choosing one score for external validation in SPIROMICS. Associations between comorbidities and all outcomes were comparable across the three scores. All scores added predictive ability to models including age, gender, race, current smoking status, pack-years smoked and FEV1 (p<0.001 for all comparisons). Area under the curve (AUC) was similar between all three scores across outcomes: SGRQ (range 0.7624-0.7676), mMRC (0.7590-0.7644), 6MWD (0.7531-0.7560) and exacerbation risk (0.6831-0.6919). Because of similar performance, the comorbidity count was used for external validation. In the SPIROMICS cohort, the comorbidity count performed well to predict SGRQ (AUC 0.7891), mMRC (AUC 0.7611), 6MWD (AUC 0.7086), and exacerbation risk (AUC 0.7341). Quantifying comorbidity provides a more thorough understanding of the risk for patient-centered outcomes in COPD. A comorbidity count performs well to quantify comorbidity in a diverse population with COPD.

  6. Self-similar oscillations of the Extrap pinch

    International Nuclear Information System (INIS)

    Tendler, M.

    1987-11-01

The method of dynamic stabilization is invoked to explain the enhanced stability of a Z-pinch in the EXTRAP configuration. The oscillatory motion is assumed to be forced on EXTRAP due to self-similar oscillations of a Z-pinch. Using a scaling for the net energy loss with plasma density and temperature typical for divertor configurations, a new analytic, self-similar solution of the fluid equations is presented. Strongly anharmonic oscillations of the plasma parameters in the pinch arise. These results are used in a discussion on the stability of EXTRAP, considered as a system with a time dependent internal magnetic field. The effect of dynamic stabilization is assessed by means of estimates. (author)

  7. Similar star formation rate and metallicity variability time-scales drive the fundamental metallicity relation

    Science.gov (United States)

    Torrey, Paul; Vogelsberger, Mark; Hernquist, Lars; McKinnon, Ryan; Marinacci, Federico; Simcoe, Robert A.; Springel, Volker; Pillepich, Annalisa; Naiman, Jill; Pakmor, Rüdiger; Weinberger, Rainer; Nelson, Dylan; Genel, Shy

    2018-06-01

    The fundamental metallicity relation (FMR) is a postulated correlation between galaxy stellar mass, star formation rate (SFR), and gas-phase metallicity. At its core, this relation posits that offsets from the mass-metallicity relation (MZR) at a fixed stellar mass are correlated with galactic SFR. In this Letter, we use hydrodynamical simulations to quantify the time-scales over which populations of galaxies oscillate about the average SFR and metallicity values at fixed stellar mass. We find that Illustris and IllustrisTNG predict that galaxy offsets from the star formation main sequence and MZR oscillate over similar time-scales, are often anticorrelated in their evolution, evolve with the halo dynamical time, and produce a pronounced FMR. Our models indicate that galaxies oscillate about equilibrium SFR and metallicity values - set by the galaxy's stellar mass - and that SFR and metallicity offsets evolve in an anticorrelated fashion. This anticorrelated variability of the metallicity and SFR offsets drives the existence of the FMR in our models. In contrast to Illustris and IllustrisTNG, we speculate that the SFR and metallicity evolution tracks may become decoupled in galaxy formation models dominated by feedback-driven globally bursty SFR histories, which could weaken the FMR residual correlation strength. This opens the possibility of discriminating between bursty and non-bursty feedback models based on the strength and persistence of the FMR - especially at high redshift.

  8. The many faces of graph dynamics

    Science.gov (United States)

    Pignolet, Yvonne Anne; Roy, Matthieu; Schmid, Stefan; Tredan, Gilles

    2017-06-01

    The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and recurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a ‘one fits it all’ model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests building upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
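    The centrality-distance idea above can be sketched in a few lines: compute a centrality vector for each graph on a shared node set, then take the L1 difference. The sketch below uses degree centrality on hand-written adjacency sets; the graphs, node labels, and choice of centrality are all illustrative, not taken from the paper.

```python
# Sketch of a centrality distance between two graphs on the same node set,
# using degree centrality. Graphs and node labels are illustrative only.

def degree_centrality(adj):
    """Fraction of the other nodes each node is connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def centrality_distance(adj_a, adj_b, centrality=degree_centrality):
    """L1 distance between the centrality vectors of two graphs."""
    ca, cb = centrality(adj_a), centrality(adj_b)
    return sum(abs(ca[v] - cb[v]) for v in ca)

# Two snapshots of an evolving 4-node graph (one edge a-d added over time).
g1 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}, "d": set()}
g2 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b"}, "d": {"a"}}

print(centrality_distance(g1, g2))  # 2/3: degrees of a and d each rose by 1/3
```

    Swapping in a different centrality (closeness, betweenness, PageRank) changes which node roles the distance is sensitive to, which is the sense in which the centrality "characterizes the graph type".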

  9. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji

    2016-07-14

    Self-similar processes have been widely used in modeling real-world phenomena occurring in environmetrics, network traffic, image processing, and stock pricing, to name but a few. The estimation of the degree of self-similarity has been studied extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi-self-similarity for a random field indexed in higher dimensions. If self-similarity is not rejected, our test provides a set of estimated self-similarity indexes. The key is to test stationarity of the inverse Lamperti transformations of the process. The inverse Lamperti transformation of a self-similar process is a strongly stationary process, revealing a theoretical connection between the two processes. To demonstrate the capability of our test, we test self-similarity of fractional Brownian motions and sheets, their time deformations and mixtures with Gaussian white noise, and the generalized Cauchy family. We also apply the self-similarity test to real data: annual minimum water levels of the Nile River, network traffic records, and surface heights of food wrappings. © 2016, International Biometric Society.
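    The key identity behind the test above is easy to verify directly: for an H-self-similar process X, the inverse Lamperti transformation Y(t) = e^{-Ht} X(e^t) is stationary. The sketch below is a deliberately degenerate, deterministic illustration, using the trajectory X(s) = s^H (which satisfies the scaling relation X(cs) = c^H X(s) exactly), for which Y collapses to a constant; the exponent H is an arbitrary illustrative value.

```python
# Inverse Lamperti transformation: Y(t) = exp(-H*t) * X(exp(t)).
# For the deterministic H-self-similar trajectory X(s) = s**H, Y(t) is
# constant, i.e. trivially stationary. H is an illustrative value.
import math

H = 0.7

def X(s):
    # Satisfies the self-similarity scaling X(c*s) = c**H * X(s) exactly.
    return s ** H

def inverse_lamperti(X, H, t):
    return math.exp(-H * t) * X(math.exp(t))

ys = [inverse_lamperti(X, H, t / 10) for t in range(-20, 21)]
print(min(ys), max(ys))  # both 1.0 (up to rounding): Y is constant in t
```

    For a genuinely stochastic self-similar process (e.g. fractional Brownian motion) the transform yields a strictly stationary, but fluctuating, process, and the paper's test checks that stationarity statistically.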

  10. Similarity increases altruistic punishment in humans.

    Science.gov (United States)

    Mussweiler, Thomas; Ockenfels, Axel

    2013-11-26

    Humans are attracted to similar others. As a consequence, social networks are homogeneous in sociodemographic, intrapersonal, and other characteristics, a principle called homophily. Despite abundant evidence showing the importance of interpersonal similarity and homophily for human relationships, their behavioral correlates and cognitive foundations are poorly understood. Here, we show that perceived similarity substantially increases altruistic punishment, a key mechanism underlying human cooperation. We induced (dis)similarity perception by manipulating basic cognitive mechanisms in an economic cooperation game that included a punishment phase. We found that similarity-focused participants were more willing to punish others' uncooperative behavior. This influence of similarity is not explained by group identity, which has the opposite effect on altruistic punishment. Our findings demonstrate that pure similarity promotes reciprocity in ways known to encourage cooperation. At the same time, the increased willingness to punish norm violations among similarity-focused participants provides a rationale for why similar people are more likely to build stable social relationships. Finally, our findings show that altruistic punishment is differentially involved in encouraging cooperation under pure similarity vs. in-group conditions.

  11. Quantifying wetland–aquifer interactions in a humid subtropical climate region: An integrated approach

    Science.gov (United States)

    Mendoza-Sanchez, Itza; Phanikumar, Mantha S.; Niu, Jie; Masoner, Jason R.; Cozzarelli, Isabelle M.; McGuire, Jennifer T.

    2013-01-01

    Wetlands are widely recognized as sentinels of global climate change. Long-term monitoring data combined with process-based modeling has the potential to shed light on key processes and how they change over time. This paper reports the development and application of a simple water balance model based on long-term climate, soil, vegetation and hydrological dynamics to quantify groundwater–surface water (GW–SW) interactions at the Norman landfill research site in Oklahoma, USA. Our integrated approach involved model evaluation by means of the following independent measurements: (a) groundwater inflow calculation using stable isotopes of oxygen and hydrogen (16O, 18O, 1H, 2H); (b) seepage flux measurements in the wetland hyporheic sediment; and (c) pan evaporation measurements on land and in the wetland. The integrated approach was useful for identifying the dominant hydrological processes at the site, including recharge and subsurface flows. Simulated recharge compared well with estimates obtained using isotope methods from previous studies and allowed us to identify specific annual signatures of this important process during the period of study (1997–2007). Similarly, observations of groundwater inflow and outflow rates to and from the wetland using seepage meters and isotope methods were found to be in good agreement with simulation results. Results indicate that subsurface flow components in the system are seasonal and readily respond to rainfall events. The wetland water balance is dominated by local groundwater inputs and regional groundwater flow contributes little to the overall water balance.

  12. Quantifying Stock Return Distributions in Financial Markets.

    Science.gov (United States)

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales.
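    A standard way to quantify such power-law tails is the Hill estimator: fit the tail exponent from the k largest observations. The sketch below applies it to synthetic Pareto-distributed "returns" rather than the Dow Jones data used in the paper; the sample size, seed, and k are illustrative.

```python
# Sketch: estimate a power-law tail exponent with the Hill estimator.
# The "returns" are synthetic Pareto(alpha=3) draws, a typical tail exponent
# reported for equity returns; real data would replace this list.
import math
import random

random.seed(1)
alpha = 3.0
returns = [random.paretovariate(alpha) for _ in range(50_000)]

def hill_estimator(samples, k):
    """Hill estimate of the tail exponent from the k largest samples."""
    tail = sorted(samples)[-k:]   # ascending; tail[0] is the k-th largest
    x_k = tail[0]
    return k / sum(math.log(x / x_k) for x in tail)

print(hill_estimator(returns, 2000))  # close to alpha = 3
```

    In practice one checks the estimate's stability across a range of k before trusting the fitted exponent, since the tail/bulk boundary is not known in advance.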

  13. Similar speaker recognition using nonlinear analysis

    International Nuclear Information System (INIS)

    Seo, J.P.; Kim, M.S.; Baek, I.C.; Kwon, Y.H.; Lee, K.S.; Chang, S.W.; Yang, S.I.

    2004-01-01

    Speech features of the conventional speaker identification system are usually obtained by linear methods in spectral space. However, these methods have the drawback that speakers with similar voices cannot be distinguished, because the characteristics of their voices are also similar in spectral space. To overcome this difficulty with linear methods, we propose to use the correlation exponent in the nonlinear space as a new feature vector for speaker identification among persons with similar voices. We show that our proposed method surprisingly reduces the error rate of the speaker identification system for speakers with similar voices.

  14. A masking index for quantifying hidden glitches

    OpenAIRE

    Berti-Equille, Laure; Loh, J. M.; Dasu, T.

    2015-01-01

    Data glitches are errors in a dataset. They are complex entities that often span multiple attributes and records. When they co-occur in data, the presence of one type of glitch can hinder the detection of another type of glitch. This phenomenon is called masking. In this paper, we define two important types of masking and propose a novel, statistically rigorous indicator called masking index for quantifying the hidden glitches. We outline four cases of masking: outliers masked by missing valu...

  15. How are the catastrophical risks quantifiable

    International Nuclear Information System (INIS)

    Chakraborty, S.

    1985-01-01

    For the assessment and evaluation of industrial risks, the question must be asked of how catastrophic risks can be quantified. Typical real catastrophic risks and risk assessments based on modelling assumptions are set against each other in order to put the risks into proper perspective. However, society is risk-averse toward the catastrophic potential of severe accidents in a large-scale industrial facility, even when the probability of occurrence is extremely low. (orig.) [de

  16. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...

  17. Quantifying Anthropogenic Stress on Groundwater Resources

    OpenAIRE

    Ashraf, Batool; AghaKouchak, Amir; Alizadeh, Amin; Mousavi Baygi, Mohammad; R. Moftakhari, Hamed; Mirchi, Ali; Anjileli, Hassan; Madani, Kaveh

    2017-01-01

    This study explores a general framework for quantifying anthropogenic influences on groundwater budget based on normalized human outflow (hout) and inflow (hin). The framework is useful for sustainability assessment of groundwater systems and allows investigating the effects of different human water abstraction scenarios on the overall aquifer regime (e.g., depleted, natural flow-dominated, and human flow-dominated). We apply this approach to selected regions in the USA, Germany and Iran to e...

  18. Identifying mechanistic similarities in drug responses

    KAUST Repository

    Zhao, C.; Hua, J.; Bittner, M. L.; Ivanov, I.; Dougherty, a. E. R.

    2012-01-01

    Motivation: In early drug development, it would be beneficial to be able to identify those dynamic patterns of gene response that indicate that drugs targeting a particular gene will be likely or not to elicit the desired response. One approach

  19. Quantifying the limits of transition state theory in enzymatic catalysis.

    Science.gov (United States)

    Zinovjev, Kirill; Tuñón, Iñaki

    2017-11-21

    While being one of the most popular reaction rate theories, the applicability of transition state theory to the study of enzymatic reactions has been often challenged. The complex dynamic nature of the protein environment raised the question about the validity of the nonrecrossing hypothesis, a cornerstone in this theory. We present a computational strategy to quantify the error associated with transition state theory from the number of recrossings observed at the equicommittor, which is the best possible dividing surface. Application of a direct multidimensional transition state optimization to the hydride transfer step in human dihydrofolate reductase shows that both the participation of the protein degrees of freedom in the reaction coordinate and the error associated with the nonrecrossing hypothesis are small. Thus, the use of transition state theory, even with simplified reaction coordinates, provides a good theoretical framework for the study of enzymatic catalysis. Copyright © 2017 the Author(s). Published by PNAS.

  20. Word embeddings quantify 100 years of gender and ethnic stereotypes.

    Science.gov (United States)

    Garg, Nikhil; Schiebinger, Londa; Jurafsky, Dan; Zou, James

    2018-04-17

    Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationship between these vectors captures meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding helps to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts, e.g., the women's movement in the 1960s and Asian immigration into the United States, and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embedding opens up a fruitful intersection between machine learning and quantitative social science.
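    The basic measurement in this line of work can be sketched as projecting a word's vector onto a group direction (e.g. the difference between gendered word vectors) via cosine similarity. The 3-dimensional vectors below are toy values invented for illustration, not trained embeddings, and the word choices are merely examples of the kind of association the paper measures.

```python
# Sketch of an embedding-bias score: cosine similarity between a word vector
# and a "gender direction" (she - he). All vectors are toy illustrations.
import math

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

emb = {                       # illustrative 3-d "embeddings"
    "she": [0.9, 0.1, 0.0],
    "he": [0.1, 0.9, 0.0],
    "nurse": [0.8, 0.2, 0.3],
    "engineer": [0.2, 0.8, 0.3],
}
gender_axis = sub(emb["she"], emb["he"])

# Positive score: closer to "she"; negative: closer to "he".
for word in ("nurse", "engineer"):
    print(word, cosine(emb[word], gender_axis))
```

    The temporal analysis in the paper repeats this projection with embeddings trained on text from different decades and tracks how the scores drift.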

  1. Quantifying Urban Fragmentation under Economic Transition in Shanghai City, China

    Directory of Open Access Journals (Sweden)

    Heyuan You

    2015-12-01

    Full Text Available Urban fragmentation affects sustainability through multiple impacts on economic, social, and environmental costs. Characterizing the dynamics of urban fragmentation in relation to economic transition should provide implications for sustainability. However, rather few efforts have been made on this issue. Using the case of Shanghai (China, this paper quantifies urban fragmentation in relation to economic transition. In particular, urban fragmentation is quantified by a time-series of remotely sensed images and a set of landscape metrics; and economic transition is described by a set of indicators from three aspects (globalization, decentralization, and marketization. Results show that urban fragmentation presents an increasing linear trend. Multivariate regression identifies a positive linear correlation between urban fragmentation and economic transition. More specifically, the relative influence is different for the three components of economic transition. The relative influence of decentralization is stronger than that of globalization and marketization. The joint influences of decentralization and globalization are the strongest for urban fragmentation. The demonstrated methodology is applicable to other places after making suitable adjustment of the economic transition indicators and fragmentation metrics.

  2. Quantifying the effects of land use and climate on Holocene vegetation in Europe

    Science.gov (United States)

    Marquer, Laurent; Gaillard, Marie-José; Sugita, Shinya; Poska, Anneli; Trondman, Anna-Kari; Mazier, Florence; Nielsen, Anne Birgitte; Fyfe, Ralph M.; Jönsson, Anna Maria; Smith, Benjamin; Kaplan, Jed O.; Alenius, Teija; Birks, H. John B.; Bjune, Anne E.; Christiansen, Jörg; Dodson, John; Edwards, Kevin J.; Giesecke, Thomas; Herzschuh, Ulrike; Kangur, Mihkel; Koff, Tiiu; Latałowa, Małgorzata; Lechterbeck, Jutta; Olofsson, Jörgen; Seppä, Heikki

    2017-09-01

    Early agriculture can be detected in palaeovegetation records, but quantification of the relative importance of climate and land use in influencing regional vegetation composition since the onset of agriculture is a topic that is rarely addressed. We present a novel approach that combines pollen-based REVEALS estimates of plant cover with climate, anthropogenic land-cover and dynamic vegetation modelling results. This is used to quantify the relative impacts of land use and climate on Holocene vegetation at a sub-continental scale, i.e. northern and western Europe north of the Alps. We use redundancy analysis and variation partitioning to quantify the percentage of variation in vegetation composition explained by the climate and land-use variables, and Monte Carlo permutation tests to assess the statistical significance of each variable. We further use a similarity index to combine pollen-based REVEALS estimates with climate-driven dynamic vegetation modelling results. The overall results indicate that climate is the major driver of vegetation when the Holocene is considered as a whole and at the sub-continental scale, although land use is important regionally. Four critical phases of land-use effects on vegetation are identified. The first phase (from 7000 to 6500 BP) corresponds to the early impacts on vegetation of farming and Neolithic forest clearance and to the dominance of climate as a driver of vegetation change. During the second phase (from 4500 to 4000 BP), land use becomes a major control of vegetation. Climate is still the principal driver, although its influence decreases gradually. The third phase (from 2000 to 1500 BP) is characterised by the continued role of climate on vegetation as a consequence of late-Holocene climate shifts and specific climate events that influence vegetation as well as land use. The last phase (from 500 to 350 BP) shows an acceleration of vegetation changes, in particular during the last century, caused by new farming

  3. Retinoid-binding proteins: similar protein architectures bind similar ligands via completely different ways.

    Directory of Open Access Journals (Sweden)

    Yu-Ru Zhang

    Full Text Available BACKGROUND: Retinoids are a class of compounds that are chemically related to vitamin A, which is an essential nutrient that plays a key role in vision, cell growth and differentiation. In vivo, retinoids must bind with specific proteins to perform their necessary functions. Plasma retinol-binding protein (RBP and epididymal retinoic acid binding protein (ERABP carry retinoids in bodily fluids, while cellular retinol-binding proteins (CRBPs and cellular retinoic acid-binding proteins (CRABPs carry retinoids within cells. Interestingly, although all of these transport proteins possess similar structures, the modes of binding for the different retinoid ligands with their carrier proteins are different. METHODOLOGY/PRINCIPAL FINDINGS: In this work, we analyzed the various retinoid transport mechanisms using structure and sequence comparisons, binding site analyses and molecular dynamics simulations. Our results show that in the same family of proteins and subcellular location, the orientation of a retinoid molecule within a binding protein is the same, whereas when different families of proteins are considered, the orientation of the bound retinoid is completely different. In addition, none of the amino acid residues involved in ligand binding is conserved between the transport proteins. However, for each specific binding protein, the amino acids involved in the ligand binding are conserved. The results of this study allow us to propose a possible transport model for retinoids. CONCLUSIONS/SIGNIFICANCE: Our results reveal the differences in the binding modes between the different retinoid-binding proteins.

  4. On self-similar Tolman models

    International Nuclear Information System (INIS)

    Maharaj, S.D.

    1988-01-01

    The self-similar spherically symmetric solutions of the Einstein field equation for the case of dust are identified. These form a subclass of the Tolman models. These self-similar models contain the solution recently presented by Chi [J. Math. Phys. 28, 1539 (1987)], thereby refuting the claim of having found a new solution to the Einstein field equations

  5. Mining Diagnostic Assessment Data for Concept Similarity

    Science.gov (United States)

    Madhyastha, Tara; Hunt, Earl

    2009-01-01

    This paper introduces a method for mining multiple-choice assessment data for similarity of the concepts represented by the multiple choice responses. The resulting similarity matrix can be used to visualize the distance between concepts in a lower-dimensional space. This gives an instructor a visualization of the relative difficulty of concepts…

  6. Similarity indices I: what do they measure

    International Nuclear Information System (INIS)

    Johnston, J.W.

    1976-11-01

    A method for estimating the effects of environmental effusions on ecosystems is described. The characteristics of 25 similarity indices used in studies of ecological communities were investigated. The type of data structure, to which these indices are frequently applied, was described as consisting of vectors of measurements on attributes (species) observed in a set of samples. A general similarity index was characterized as the result of a two-step process defined on a pair of vectors. In the first step an attribute similarity score is obtained for each attribute by comparing the attribute values observed in the pair of vectors. The result is a vector of attribute similarity scores. These are combined in the second step to arrive at the similarity index. The operation in the first step was characterized as a function, g, defined on pairs of attribute values. The second operation was characterized as a function, F, defined on the vector of attribute similarity scores from the first step. Usually, F was a simple sum or weighted sum of the attribute similarity scores. It is concluded that similarity indices should not be used as the test statistic to discriminate between two ecological communities
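    The two-step construction described above (an attribute-level function g, then an aggregator F) can be sketched directly. With g = min and F = the sum of scores normalised by the sum of pairwise maxima, the general form specialises to the classical Ruzicka (quantitative Jaccard) index; the abundance vectors below are invented for illustration.

```python
# Sketch of the general two-step similarity index: step 1 applies g to each
# attribute (species) pair; step 2 combines the scores with F. The choice
# g = min with a max-normalised sum yields the Ruzicka index.

def general_similarity(x, y, g, F):
    """x, y: vectors of attribute (species) abundances in two samples."""
    scores = [g(a, b) for a, b in zip(x, y)]   # step 1: attribute scores
    return F(scores, x, y)                      # step 2: combine the scores

def ruzicka_F(scores, x, y):
    denom = sum(max(a, b) for a, b in zip(x, y))
    return sum(scores) / denom if denom else 1.0

sample1 = [10, 0, 3, 7]    # abundances of four species in community A
sample2 = [8, 2, 3, 1]     # abundances of the same species in community B
print(general_similarity(sample1, sample2, min, ruzicka_F))  # 12/22 ≈ 0.545
```

    Other classical indices fall out of the same template by swapping g and F, which is exactly why the report can compare 25 of them within one framework.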

  7. Measuring transferring similarity via local information

    Science.gov (United States)

    Yin, Likang; Deng, Yong

    2018-05-01

    Recommender systems have developed along with the web science, and how to measure the similarity between users is crucial for processing collaborative filtering recommendation. Many efficient models have been proposed (e.g., the Pearson coefficient) to measure the direct correlation. However, the direct correlation measures are greatly affected by the sparsity of dataset. In other words, the direct correlation measures would present an inauthentic similarity if two users have a very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet, the transferring similarity also has its drawback since it can only provide the interval of similarity. To break the limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of BTS model are: (1) BTS model addresses the issue of the sparsity of dataset by considering the high-order similarity. (2) BTS model transforms uncertain interval to a certain state based on fuzzy systems theory. (3) BTS model is able to combine the transferring similarity of different intermediates using information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.

  8. On distributional assumptions and whitened cosine similarities

    DEFF Research Database (Denmark)

    Loog, Marco

    2008-01-01

    Recently, an interpretation of the whitened cosine similarity measure as a Bayes decision rule was proposed (C. Liu, "The Bayes Decision Rule Induced Similarity Measures,'' IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, June 2007. This communication makes th...

  9. Self-Similar Traffic In Wireless Networks

    OpenAIRE

    Jerjomins, R.; Petersons, E.

    2005-01-01

    Many studies have shown that traffic in Ethernet and other wired networks is self-similar. This paper reveals that wireless network traffic is also self-similar and long-range dependent by analyzing a large amount of data captured from a wireless router.

  10. Similarity Structure of Wave-Collapse

    DEFF Research Database (Denmark)

    Rypdal, Kristoffer; Juul Rasmussen, Jens; Thomsen, Kenneth

    1985-01-01

    Similarity transformations of the cubic Schrödinger equation (CSE) are investigated. The transformations are used to remove the explicit time variation in the CSE and reduce it to differential equations in the spatial variables only. Two different methods for similarity reduction are employed and...

  11. Similarity indices I: what do they measure.

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, J.W.

    1976-11-01

    A method for estimating the effects of environmental effusions on ecosystems is described. The characteristics of 25 similarity indices used in studies of ecological communities were investigated. The type of data structure, to which these indices are frequently applied, was described as consisting of vectors of measurements on attributes (species) observed in a set of samples. A general similarity index was characterized as the result of a two-step process defined on a pair of vectors. In the first step an attribute similarity score is obtained for each attribute by comparing the attribute values observed in the pair of vectors. The result is a vector of attribute similarity scores. These are combined in the second step to arrive at the similarity index. The operation in the first step was characterized as a function, g, defined on pairs of attribute values. The second operation was characterized as a function, F, defined on the vector of attribute similarity scores from the first step. Usually, F was a simple sum or weighted sum of the attribute similarity scores. It is concluded that similarity indices should not be used as the test statistic to discriminate between two ecological communities.

  12. Information filtering based on transferring similarity.

    Science.gov (United States)

    Sun, Duo; Zhou, Tao; Liu, Jian-Guo; Liu, Run-Ran; Jia, Chun-Xiao; Wang, Bing-Hong

    2009-07-01

    In this Brief Report, we propose an index of user similarity, namely, the transferring similarity, which involves all high-order similarities between users. Accordingly, we design a modified collaborative filtering algorithm, which provides remarkably more accurate predictions than the standard collaborative filtering. More interestingly, we find that the algorithmic performance will approach its optimal value when the parameter, contained in the definition of transferring similarity, gets close to its critical value, before which the series expansion of transferring similarity is convergent and after which it is divergent. Our study is complementary to the one reported in [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)], and is relevant to the missing link prediction problem.
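    A transferring similarity of this kind can be sketched as a matrix power series over a direct similarity matrix S0: paths through intermediate users contribute higher-order terms, damped by a parameter a, and the series converges only while a stays below a critical value set by the largest eigenvalue of S0 (the convergence/divergence boundary the abstract refers to). The matrix, the value of a, and the truncation order below are all illustrative.

```python
# Sketch: transferring similarity as the truncated series
#   S = S0 + a*S0^2 + a^2*S0^3 + ...
# Entries of S0 are direct user-user similarities; higher powers add
# similarity transferred through intermediate users.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transferring_similarity(S0, a, orders=50):
    S = [row[:] for row in S0]
    term = [row[:] for row in S0]
    for _ in range(orders - 1):
        term = [[a * v for v in row] for row in matmul(term, S0)]
        S = [[s + t for s, t in zip(rs, rt)] for rs, rt in zip(S, term)]
    return S

# Illustrative direct similarities among three users: users 0 and 2 have no
# direct similarity and are linked only through user 1.
S0 = [[0.0, 0.5, 0.0],
      [0.5, 0.0, 0.4],
      [0.0, 0.4, 0.0]]

S = transferring_similarity(S0, a=0.8)
print(S[0][2])  # positive: similarity transferred via the intermediate user
```

    Here a = 0.8 times the largest eigenvalue of S0 (about 0.64) is below 1, so the truncated series is effectively converged at 50 orders; pushing a past that critical value makes the sum blow up.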

  13. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
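    A nested approximant of the continued-root form can be evaluated with a simple backward recursion. The sketch below uses a generic shape (1 + a1 x (1 + a2 x (...)^s)^s)^s with invented coefficients and exponent, purely to illustrate the structure; it is not a reconstruction of the paper's specific approximants.

```python
# Hedged sketch of evaluating a self-similar continued root
#   R(x) = (1 + a1*x*(1 + a2*x*(...)**s)**s)**s
# by iterating from the innermost level outward. Coefficients and the
# exponent s are illustrative placeholders.

def continued_root(x, coeffs, s):
    value = 1.0
    for a in reversed(coeffs):      # innermost coefficient first
        value = (1.0 + a * x * value) ** s
    return value

# With s = 1 and equal coefficients the nesting telescopes into a geometric
# partial sum 1 + a*x + (a*x)**2 + (a*x)**3, recovering the "continued
# fractions / polynomials as special cases" remark above.
print(continued_root(0.1, [1.0, 1.0, 1.0], s=1.0))   # 1.111
print(continued_root(0.5, [1.0, 0.5, 0.25], s=0.5))  # a fractional-s root
```
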

  14. Correlation between social proximity and mobility similarity.

    Science.gov (United States)

    Fan, Chao; Liu, Yiding; Huang, Junming; Rong, Zhihai; Zhou, Tao

    2017-09-20

    Human behaviors exhibit ubiquitous correlations in many aspects, such as individual and collective levels, temporal and spatial dimensions, content, social and geographical layers. With rich Internet data of online behaviors becoming available, it has attracted academic interest to explore human mobility similarity from the perspective of social network proximity. Existing analysis shows a strong correlation between online social proximity and offline mobility similarity, namely, mobile records between friends are significantly more similar than between strangers, and those between friends with common neighbors are even more similar. We argue for the importance of the number and diversity of common friends, with a counter-intuitive finding that the number of common friends has no positive impact on mobility similarity while the diversity plays a key role, disagreeing with previous studies. Our analysis provides a novel view for better understanding the coupling between human online and offline behaviors, and will help model and predict human behaviors based on social proximity.

  15. Scalar Similarity for Relaxed Eddy Accumulation Methods

    Science.gov (United States)

    Ruppert, Johannes; Thomas, Christoph; Foken, Thomas

    2006-07-01

    The relaxed eddy accumulation (REA) method allows the measurement of trace gas fluxes when no fast sensors are available for eddy covariance measurements. The flux parameterisation used in REA is based on the assumption of scalar similarity, i.e., similarity of the turbulent exchange of two scalar quantities. In this study changes in scalar similarity between carbon dioxide, sonic temperature and water vapour were assessed using scalar correlation coefficients and spectral analysis. The influence on REA measurements was assessed by simulation. The evaluation is based on observations over grassland, irrigated cotton plantation and spruce forest. Scalar similarity between carbon dioxide, sonic temperature and water vapour showed a distinct diurnal pattern and change within the day. Poor scalar similarity was found to be linked to dissimilarities in the energy contained in the low frequency part of the turbulent spectra.
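    The scalar correlation coefficient used to assess similarity is just the Pearson correlation between the fluctuating parts of two scalar time series; values near ±1 support the REA similarity assumption. The sketch below computes it on synthetic series standing in for temperature and CO2 fluctuations; the signals and their amplitudes are invented for illustration.

```python
# Sketch of a scalar similarity check: Pearson correlation between two
# scalar time series. The "temperature" and "CO2" signals are synthetic.
import math

def correlation(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

t = [i / 100 for i in range(1000)]
temp = [math.sin(2 * math.pi * ti) for ti in t]                    # scalar 1
co2 = [0.8 * math.sin(2 * math.pi * ti)                            # scalar 2:
       + 0.05 * math.cos(10 * math.pi * ti) for ti in t]           # + noise

print(correlation(temp, co2))  # close to 1: the scalars behave similarly
```

    In the study, a drop of this coefficient away from ±1 during parts of the day is exactly the "poor scalar similarity" that degrades the REA flux parameterisation.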

  16. Surf similarity and solitary wave runup

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2008-01-01

    The notion of surf similarity in the runup of solitary waves is revisited. We show that the surf similarity parameter for solitary waves may be effectively reduced to the beach slope divided by the offshore wave height to depth ratio. This clarifies its physical interpretation relative to a previous parameterization, which was not given in an explicit form. Good coherency with experimental (breaking) runup data is preserved with this simpler parameter. A recasting of analytical (nonbreaking) runup expressions for sinusoidal and solitary waves additionally shows that they contain identical functional dependence on their respective surf similarity parameters. Important equivalencies in the runup of sinusoidal and solitary waves are thus revealed.
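    The reduced parameter described above is a one-line formula: beach slope divided by the offshore wave-height-to-depth ratio. The sketch below evaluates it for illustrative values (a 1:20 beach and a solitary wave with H/d = 0.1); the numbers are examples, not data from the paper.

```python
# Sketch of the reduced surf similarity parameter for solitary waves:
# beach slope divided by the offshore wave-height-to-depth ratio H/d.

def solitary_surf_similarity(beach_slope, wave_height, depth):
    return beach_slope / (wave_height / depth)

# Illustrative values: a 1:20 beach, solitary wave height 0.5 m in 5 m depth.
print(solitary_surf_similarity(1 / 20, wave_height=0.5, depth=5.0))  # 0.5
```
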

  17. Similarity in Bilateral Isolated Internal Orbital Fractures.

    Science.gov (United States)

    Chen, Hung-Chang; Cox, Jacob T; Sanyal, Abanti; Mahoney, Nicholas R

    2018-04-13

    In evaluating patients sustaining bilateral isolated internal orbital fractures, the authors have observed both similar fracture locations and also similar expansion of orbital volumes. In this study, we aim to investigate if there is a propensity for the 2 orbits to fracture in symmetrically similar patterns when sustaining similar trauma. A retrospective chart review was performed studying all cases at our institution of bilateral isolated internal orbital fractures involving the medial wall and/or the floor at the time of presentation. The similarity of the bilateral fracture locations was evaluated using Fisher's exact test. The bilateral expanded orbital volumes were analyzed using the Wilcoxon signed-rank test to assess for orbital volume similarity. Twenty-four patients with bilateral internal orbital fractures were analyzed for fracture location similarity. Seventeen patients (70.8%) had 100% concordance in the orbital subregion fractured, and the association between the right and the left orbital fracture subregion locations was statistically significant (P < 0.0001). Fifteen patients were analyzed for orbital volume similarity. The average orbital cavity volume was 31.2 ± 3.8 cm³ on the right and 32.0 ± 3.7 cm³ on the left. There was a statistically significant difference between right and left orbital cavity volumes (P = 0.0026). The data from this study suggest that an individual who suffers isolated bilateral internal orbital fractures has a statistically significant similarity in the location of their orbital fractures. However, there does not appear to be statistically significant similarity in the expansion of the orbital volumes in these patients.
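The volume comparison above relies on the Wilcoxon signed-rank test. A pure-Python sketch of the test statistic min(W+, W−) follows; the p-value lookup against the signed-rank distribution is omitted, and the numbers in the usage note are illustrative, not the study's data.

```python
def wilcoxon_signed_rank(x, y):
    """Paired signed-rank statistic min(W+, W-); ties get average ranks."""
    diffs = [b - a for a, b in zip(x, y) if b - a != 0]  # drop zero differences
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        # group indices whose |difference| is tied
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

For paired samples where every right–left difference has the same sign, the statistic is 0, the strongest possible evidence of a systematic difference at that sample size.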

  18. Learning Faster by Discovering and Exploiting Object Similarities

    Directory of Open Access Journals (Sweden)

    Tadej Janež

    2013-03-01

    Full Text Available In this paper we explore the question: “Is it possible to speed up the learning process of an autonomous agent by performing experiments in a more complex environment (i.e., an environment with a greater number of different objects)?” To this end, we use a simple robotic domain, where the robot has to learn a qualitative model predicting the change in the robot's distance to an object. To quantify the environment's complexity, we defined cardinal complexity as the number of objects in the robot's world, and behavioural complexity as the number of objects' distinct behaviours. We propose Error reduction merging (ERM), a new learning method that automatically discovers similarities in the structure of the agent's environment. ERM identifies different types of objects solely from the data measured and merges the observations of objects that behave in the same or similar way in order to speed up the agent's learning. We performed a series of experiments in worlds of increasing complexity. The results in our simple domain indicate that ERM was capable of discovering structural similarities in the data which indeed made the learning faster, clearly superior to conventional learning. This observed trend occurred with various machine learning algorithms used inside the ERM method.

  19. Similarity of the ruminal bacteria across individual lactating cows.

    Science.gov (United States)

    Jami, Elie; Mizrahi, Itzhak

    2012-06-01

    Dairy cattle hold enormous significance for man as a source of milk and meat. Their remarkable ability to convert indigestible plant mass into these digestible food products resides in the rumen - an anaerobic chambered compartment - in the bovine digestive system. The rumen houses a complex microbiota which is responsible for the degradation of plant material, consequently enabling the conversion of plant fibers into milk and meat and determining their quality and quantity. Hence, an understanding of this complex ecosystem has major economic implications. One important question that is yet to be addressed is the degree of conservation of rumen microbial composition across individual animals. Here we quantified the degree of similarity between rumen bacterial populations of 16 individual cows. We used real-time PCR to determine the variance of specific ruminal bacterial species with different metabolic functions, revealing that while some bacterial strains vary greatly across animals, others show only very low variability. This variance could not be linked to the metabolic traits of these bacteria. We examined the degree of similarity in the dominant bacterial populations across all animals using automated ribosomal intergenic spacer analysis (ARISA), and identified a bacterial community consisting of 32% operational taxonomic units (OTUs) shared by at least 90% of the animals and 19% OTUs shared by 100% of the animals. Looking only at the presence or absence of each OTU gave an average similarity of 75% between each cow pair. When abundance of each OTU was added to the analysis, this similarity decreased to an average of less than 60%. Thus, as suggested in similar recent studies of the human gut, a bovine rumen core microbiome does exist, but taxa abundance may vary greatly across animals. Copyright © 2012 Elsevier Ltd. All rights reserved.
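A presence/absence similarity of the kind reported (shared OTUs between two cows) can be computed as a Jaccard coefficient; whether the study used Jaccard or a related sharing formula is not stated in the abstract, so this is one plausible choice.

```python
def presence_similarity(otus_a, otus_b):
    """Jaccard similarity on OTU presence/absence: shared / all observed."""
    a, b = set(otus_a), set(otus_b)
    return len(a & b) / len(a | b)
```

For example, `presence_similarity(["otu1", "otu2", "otu3"], ["otu2", "otu3", "otu4"])` is 0.5: two of the four observed OTUs are shared. Adding abundance weighting, as the study did, generally lowers such pairwise scores.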

  20. Measure of Node Similarity in Multilayer Networks.

    Directory of Open Access Journals (Sweden)

    Anders Mollgaard

    Full Text Available The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceships.

  1. Trajectory similarity join in spatial networks

    KAUST Repository

    Shang, Shuo

    2017-09-07

    The matching of similar pairs of objects, called similarity join, is fundamental functionality in data management. We consider the case of trajectory similarity join (TS-Join), where the objects are trajectories of vehicles moving in road networks. Thus, given two sets of trajectories and a threshold θ, the TS-Join returns all pairs of trajectories from the two sets with similarity above θ. This join targets applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide a purposeful definition of similarity. To enable efficient TS-Join processing on large sets of trajectories, we develop search space pruning techniques and take into account the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer algorithm. For each trajectory, the algorithm first finds similar trajectories. Then it merges the results to achieve a final result. The algorithm exploits an upper bound on the spatiotemporal similarity and a heuristic scheduling strategy for search space pruning. The algorithm's per-trajectory searches are independent of each other and can be performed in parallel, and the merging has constant cost. An empirical study with real data offers insight into the performance of the algorithm and demonstrates that it is capable of outperforming a well-designed baseline algorithm by an order of magnitude.
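The two-phase shape of the algorithm (independent per-trajectory searches, then a cheap merge) can be sketched as below. The similarity function is a hypothetical stand-in: the paper's spatiotemporal measure over road networks is replaced by an inverse mean pointwise distance between equal-length toy trajectories.

```python
def toy_similarity(t1, t2):
    """Hypothetical similarity: inverse mean pointwise Euclidean distance."""
    d = sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(t1, t2)) / len(t1)
    return 1.0 / (1.0 + d)

def ts_join(set_p, set_q, theta):
    """Phase 1: one independent search per trajectory (parallelizable).
    Phase 2: merge the per-trajectory result lists."""
    partial = [[(i, j) for j, q in enumerate(set_q)
                if toy_similarity(p, q) >= theta]
               for i, p in enumerate(set_p)]
    return [pair for chunk in partial for pair in chunk]
```

Because each inner list depends only on one trajectory of `set_p`, the phase-1 searches can be handed to a thread or process pool unchanged, which is the independence property the paper exploits.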

  2. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
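The detrending moving average idea — compute the variance of the series around a moving-average trend at several window sizes and read a scaling exponent off the log-log slope — can be sketched for the first-order (plain moving average) case. The window sizes and the trailing-average convention here are illustrative choices, not the paper's.

```python
from math import log

def dma_variance(y, n):
    """Variance of the series around its trailing moving average of window n."""
    devs = []
    for i in range(n - 1, len(y)):
        ma = sum(y[i - n + 1:i + 1]) / n
        devs.append(y[i] - ma)
    return sum(d * d for d in devs) / len(devs)

def hurst_dma(y, windows=(4, 8, 16, 32)):
    """Least-squares slope of log sigma(n) vs log n estimates the exponent H."""
    xs = [log(n) for n in windows]
    ys = [0.5 * log(dma_variance(y, n)) for n in windows]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

For a fractional Brownian series the slope estimates the Hurst exponent H; for a smooth linear trend the deviation grows roughly proportionally to the window, so the estimate comes out near 1.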

  3. A Similarity Search Using Molecular Topological Graphs

    Directory of Open Access Journals (Sweden)

    Yoshifumi Fukunishi

    2009-01-01

    Full Text Available A molecular similarity measure has been developed using molecular topological graphs and atomic partial charges. Two kinds of topological graphs were used. One is the ordinary adjacency matrix and the other is a matrix which represents the minimum path length between two atoms of the molecule. The ordinary adjacency matrix is suitable to compare the local structures of molecules such as functional groups, and the other matrix is suitable to compare the global structures of molecules. The combination of these two matrices gave a similarity measure. This method was applied to in silico drug screening, and the results showed that it was effective as a similarity measure.
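Of the two matrices the method combines, the minimum-path-length matrix is the less standard one; it can be obtained from the adjacency matrix with Floyd-Warshall. How the two matrices and the atomic partial charges are combined into the final similarity score is not specified in the abstract, so only the matrix construction is sketched.

```python
def path_length_matrix(adj):
    """All-pairs minimum path lengths from a 0/1 adjacency matrix."""
    n = len(adj)
    INF = float("inf")
    d = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):          # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

For a three-atom chain A-B-C, the matrix records a path length of 2 between the terminal atoms — global information that the raw adjacency matrix lacks, which is why the two matrices are complementary.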

  4. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification alg

  5. Methods to Quantify Nickel in Soils and Plant Tissues

    Directory of Open Access Journals (Sweden)

    Bruna Wurr Rodak

    2015-06-01

    Full Text Available In comparison with other micronutrients, the levels of nickel (Ni) available in soils and plant tissues are very low, making quantification very difficult. The objective of this paper is to present optimized determination methods of Ni availability in soils by extractants and total content in plant tissues for routine commercial laboratory analyses. Samples of natural and agricultural soils were processed and analyzed by Mehlich-1 extraction and by DTPA. To quantify Ni in the plant tissues, samples were digested with nitric acid in a closed system in a microwave oven. The measurement was performed by inductively coupled plasma/optical emission spectrometry (ICP-OES). There was a positive and significant correlation between the levels of available Ni in the soils subjected to Mehlich-1 and DTPA extraction, while for plant tissue samples the Ni levels recovered were high and similar to the reference materials. The availability of Ni in some of the natural soil and plant tissue samples was lower than the limits of quantification. Concentrations of this micronutrient were higher in the soil samples in which Ni had been applied. Nickel concentration differed in the plant parts analyzed, with the highest levels in the grains of soybean. The grain concentrations, in comparison with the shoot and leaf concentrations, were better correlated with the soil available levels for both extractants. The methods described in this article were efficient in quantifying Ni and can be used for routine laboratory analysis of soils and plant tissues.

  6. Quantifying complexity in translational research: an integrated approach.

    Science.gov (United States)

    Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L

    2014-01-01

    The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to assess usability. Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research.
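The consistency ratio mentioned above is a standard AHP quantity: CR = CI / RI with CI = (λmax − n)/(n − 1) and RI a tabulated random index. A sketch using the row-geometric-mean approximation for the priority weights (one common choice; Saaty's eigenvector method is another):

```python
# Saaty's random index RI by matrix size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A):
    """CR = CI / RI for a pairwise-comparison matrix A."""
    n = len(A)
    gmeans = []
    for row in A:
        p = 1.0
        for v in row:
            p *= v
        gmeans.append(p ** (1.0 / n))       # row geometric mean
    s = sum(gmeans)
    w = [g / s for g in gmeans]             # priority weights
    # lambda_max estimated from (A w)_i / w_i averaged over rows
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / RI[n]
```

A perfectly consistent matrix gives CR = 0; CR above roughly 0.1 is conventionally taken to signal unacceptable inconsistency in the judgments.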

  7. Quantifier spreading: children misled by ostensive cues

    Directory of Open Access Journals (Sweden)

    Katalin É. Kiss

    2017-04-01

    Full Text Available This paper calls attention to a methodological problem of acquisition experiments. It shows that the economy of the stimulus employed in child language experiments may lend an increased ostensive effect to the message communicated to the child. Thus, when the visual stimulus in a sentence-picture matching task is a minimal model abstracting away from the details of the situation, children often regard all the elements of the stimulus as ostensive clues to be represented in the corresponding sentence. The use of such minimal stimuli is mistaken when the experiment aims to test whether or not a certain element of the stimulus is relevant for the linguistic representation or interpretation. The paper illustrates this point by an experiment involving quantifier spreading. It is claimed that children find a universally quantified sentence like 'Every girl is riding a bicycle' to be a false description of a picture showing three girls riding bicycles and a solo bicycle because they are misled to believe that all the elements in the visual stimulus are relevant, hence all of them are to be represented by the corresponding linguistic description. When the iconic drawings were replaced by photos taken in a natural environment rich in accidental details, the occurrence of quantifier spreading was radically reduced. It is shown that an extra object in the visual stimulus can lead to the rejection of the sentence also in the case of sentences involving no quantification, which gives further support to the claim that the source of the problem is not (or not only) the grammatical or cognitive difficulty of quantification but the unintended ostensive effect of the extra object. This article is part of the special collection: Acquisition of Quantification

  8. Quantifying information leakage of randomized protocols

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Malacaria, Pasquale

    2015-01-01

    The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach, we model refined attackers capable of observing the internal behavior of the system, and quantify the information leakage of such systems. We also use our method to obtain an algorithm for the computation of channel capacity from our Markovian models. Finally, we show how to use the method to analyze timed and non-timed attacks.

  9. Characterization of autoregressive processes using entropic quantifiers

    Science.gov (United States)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous work seems to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or for probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
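The Bandt and Pompe methodology referenced above estimates entropy from ordinal patterns. A minimal sketch (ties broken by position and normalization by log of order factorial are standard but not the only conventions):

```python
from math import log, factorial

def permutation_entropy(series, order=3):
    """Bandt-Pompe permutation entropy, normalized to [0, 1]."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern = argsort of the window values
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))
```

A monotone series has a single ordinal pattern and entropy 0, while an irregular series spreads mass over many patterns. The method sees only the ordering, not the amplitudes — exactly the gap the causal-amplitude plane is meant to address.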

  10. Quantifying Heuristic Bias: Anchoring, Availability, and Representativeness.

    Science.gov (United States)

    Richie, Megan; Josephson, S Andrew

    2018-01-01

    Construct: Authors examined whether a new vignette-based instrument could isolate and quantify heuristic bias. Heuristics are cognitive shortcuts that may introduce bias and contribute to error. There is no standardized instrument available to quantify heuristic bias in clinical decision making, limiting future study of educational interventions designed to improve calibration of medical decisions. This study presents validity data to support a vignette-based instrument quantifying bias due to the anchoring, availability, and representativeness heuristics. Participants completed questionnaires requiring assignment of probabilities to potential outcomes of medical and nonmedical scenarios. The instrument randomly presented scenarios in one of two versions: Version A, encouraging heuristic bias, and Version B, worded neutrally. The primary outcome was the difference in probability judgments for Version A versus Version B scenario options. Of 167 participants recruited, 139 enrolled. Participants assigned significantly higher mean probability values to Version A scenario options (M = 9.56, SD = 3.75) than Version B (M = 8.98, SD = 3.76), t(1801) = 3.27, p = .001. This result remained significant analyzing medical scenarios alone (Version A, M = 9.41, SD = 3.92; Version B, M = 8.86, SD = 4.09), t(1204) = 2.36, p = .02. Analyzing medical scenarios by heuristic revealed a significant difference between Version A and B for availability (Version A, M = 6.52, SD = 3.32; Version B, M = 5.52, SD = 3.05), t(404) = 3.04, p = .003, and representativeness (Version A, M = 11.45, SD = 3.12; Version B, M = 10.67, SD = 3.71), t(396) = 2.28, p = .02, but not anchoring. Stratifying by training level, students maintained a significant difference between Version A and B medical scenarios (Version A, M = 9.83, SD = 3.75; Version B, M = 9.00, SD = 3.98), t(465) = 2.29, p = .02, but not residents or attendings. Stratifying by heuristic and training level, availability maintained

  11. An index for quantifying flocking behavior.

    Science.gov (United States)

    Quera, Vicenç; Herrando, Salvador; Beltran, Francesc S; Salas, Laura; Miñano, Meritxell

    2007-12-01

    One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals, and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. An index of the aggregation of moving individuals in a flock was developed, and an example was provided of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock.

  12. Certain Verbs Are Syntactically Explicit Quantifiers

    Directory of Open Access Journals (Sweden)

    Anna Szabolcsi

    2010-12-01

    Full Text Available Quantification over individuals, times, and worlds can in principle be made explicit in the syntax of the object language, or left to the semantics and spelled out in the meta-language. The traditional view is that quantification over individuals is syntactically explicit, whereas quantification over times and worlds is not. But a growing body of literature proposes a uniform treatment. This paper examines the scopal interaction of aspectual raising verbs (begin), modals (can), and intensional raising verbs (threaten) with quantificational subjects in Shupamem, Dutch, and English. It appears that aspectual raising verbs and at least modals may undergo the same kind of overt or covert scope-changing operations as nominal quantifiers; the case of intensional raising verbs is less clear. Scope interaction is thus shown to be a new potential diagnostic of object-linguistic quantification, and the similarity in the scope behavior of nominal and verbal quantifiers supports the grammatical plausibility of ontological symmetry, explored in Schlenker (2006).
    References:
    Ben-Shalom, D. 1996. Semantic Trees. Ph.D. thesis, UCLA.
    Bittner, M. 1993. Case, Scope, and Binding. Dordrecht: Reidel.
    Cresswell, M. 1990. Entities and Indices. Dordrecht: Kluwer.
    Cresti, D. 1995. 'Extraction and reconstruction'. Natural Language Semantics 3: 79–122. http://dx.doi.org/10.1007/BF01252885
    Curry, B. H. & Feys, R. 1958. Combinatory Logic I. Dordrecht: North-Holland.
    Dowty, D. R. 1988. 'Type raising, functional composition, and non-constituent conjunction'. In Richard T. Oehrle, Emmon W. Bach & Deirdre Wheeler (eds.), 'Categorial Grammars and Natural Language Structures', 153–197. Dordrecht: Reidel.
    Fox, D. 2002. 'On Logical Form'. In Randall Hendrick (ed.), 'Minimalist Syntax', 82–124. Oxford: Blackwell.
    Gallin, D. 1975. Intensional and higher-order modal logic: with applications to Montague semantics. Amsterdam: North Holland Pub. Co.; American Elsevier Pub. Co.

  13. HYPOTHESIS TESTING WITH THE SIMILARITY INDEX

    Science.gov (United States)

    Multilocus DNA fingerprinting methods have been used extensively to address genetic issues in wildlife populations. Hypotheses concerning population subdivision and differing levels of diversity can be addressed through the use of the similarity index (S), a band-sharing coeffic...
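One common form of the band-sharing similarity index for multilocus fingerprints is S = 2·n_shared/(n_a + n_b); whether this exact variant is the one used in the record above is an assumption here.

```python
def band_sharing(bands_a, bands_b):
    """Band-sharing index S = 2 * shared bands / (bands in A + bands in B)."""
    a, b = set(bands_a), set(bands_b)
    return 2 * len(a & b) / (len(a) + len(b))
```

Two fingerprints of four bands each that share two bands give S = 0.5; S = 1 only for identical band sets, so population-level hypotheses can be tested by comparing mean S within and between groups.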

  14. On self-similarity of crack layer

    Science.gov (United States)

    Botsis, J.; Kunin, B.

    1987-01-01

    The crack layer (CL) theory of Chudnovsky (1986), based on principles of thermodynamics of irreversible processes, employs a crucial hypothesis of self-similarity. The self-similarity hypothesis states that the value of the damage density at a point x of the active zone at a time t coincides with that at the corresponding point in the initial (t = 0) configuration of the active zone, the correspondence being given by a time-dependent affine transformation of the space variables. In this paper, the implications of the self-similarity hypothesis for quasi-static CL propagation are investigated using polystyrene as a model material and examining the evolution of damage distribution along the trailing edge, which is approximated by a straight segment perpendicular to the crack path. The results support the self-similarity hypothesis adopted by the CL theory.

  15. Bilateral Trade Flows and Income Distribution Similarity

    Science.gov (United States)

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (dollar volume is used as the dependent variable) using a sample of 104 exporters and 108 importers for 1980–2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories. PMID:27137462

  16. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however... semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  17. Abundance estimation of spectrally similar minerals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This paper evaluates a spectral unmixing method for estimating the partial abundance of spectrally similar minerals in complex mixtures. The method requires formulation of a linear function of individual spectra of individual minerals. The first...

  18. Lagrangian-similarity diffusion-deposition model

    International Nuclear Information System (INIS)

    Horst, T.W.

    1979-01-01

    A Lagrangian-similarity diffusion model has been incorporated into the surface-depletion deposition model. This model predicts vertical concentration profiles far downwind of the source that agree with those of a one-dimensional gradient-transfer model

  19. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Parrado-Hernandez, Emilio; Meng, Anders

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however... semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  20. Outsourced similarity search on metric data assets

    KAUST Repository

    Yiu, Man Lung

    2012-02-01

    This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider for similarity queries on the transformed data. Our techniques provide interesting trade-offs between query cost and accuracy. They are then further extended to offer an intuitive privacy guarantee. Empirical studies with real data demonstrate that the techniques are capable of offering privacy while enabling efficient and accurate processing of similarity queries.

  1. Quantified safety objectives in high technology: Meaning and demonstration

    International Nuclear Information System (INIS)

    Vinck, W.F.; Gilby, E.; Chicken, J.

    1986-01-01

    An overview and trend analysis is given of the types of quantified criteria and objectives which are presently applied or envisaged and discussed in Europe in nuclear applications, more specifically Nuclear Power Plants (NPPs), and in non-nuclear applications, more specifically in the chemical and petrochemical process industry. Some comparative deductions are made. Attention is paid to the similarities or discrepancies between such criteria and objectives and to problems associated with the demonstration that they are implemented. The role of cost-effectiveness of risk reduction is briefly discussed, and mention is made of a search into combining the technical, economic and socio-political factors playing a role in risk acceptance.

  2. Protein structural similarity search by Ramachandran codes

    Directory of Open Access Journals (Sweden)

    Chang Chih-Hung

    2007-08-01

    Full Text Available Abstract Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structural similarity search. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, the accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Then, classical sequence similarity search methods can be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented into a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever increasing number of published protein structures in this post-genomic era.

  3. Similarity search processing. Paralelization and indexing technologies.

    Directory of Open Access Journals (Sweden)

    Eder Dos Santos

    2015-08-01

    This scientific-technical report addresses similarity search and the implementation of metric structures in parallel environments. It also presents the state of the art in similarity search over metric structures and in parallelism technologies. Comparative analyses are proposed, seeking to characterize the behavior of a set of metric spaces and metric structures on multicore-based and GPU-based processing platforms.

  4. Parallel trajectory similarity joins in spatial networks

    KAUST Repository

    Shang, Shuo

    2018-04-04

    The matching of similar pairs of objects, called similarity join, is a fundamental operation in data management. We consider two cases of trajectory similarity joins (TS-Joins), including a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter, and it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and enable use of the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join that rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories. Then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the merging steps have constant cost. An empirical study with real data offers insight into the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.
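The two-phase framework can be illustrated with a minimal sketch: per-trajectory searches run independently (here on a thread pool) and a cheap merge concatenates their results. The similarity function below is a placeholder on point sets, not the paper's network-aware spatiotemporal measure, and no pruning bounds are shown.

```python
from concurrent.futures import ThreadPoolExecutor

def similarity(t1, t2):
    """Placeholder trajectory similarity in [0, 1]; the paper uses a
    network-aware spatiotemporal measure with pruning bounds."""
    shared = len(set(t1) & set(t2))
    return shared / max(len(set(t1)), len(set(t2)))

def per_trajectory_search(args):
    """Phase 1: for one trajectory, find all partners above threshold."""
    i, traj, others, theta = args
    return [(i, j, s) for j, other in enumerate(others)
            if (s := similarity(traj, other)) >= theta]

def tb_ts_join(set_p, set_q, theta):
    """Threshold-based join: per-trajectory searches are independent,
    so they can run in parallel; the merge is a cheap concatenation."""
    tasks = [(i, p, set_q, theta) for i, p in enumerate(set_p)]
    with ThreadPoolExecutor() as pool:
        partial = list(pool.map(per_trajectory_search, tasks))
    result = []
    for rows in partial:        # Phase 2: merge
        result.extend(rows)
    return result

# Toy trajectories as sequences of (node, timestamp) samples:
P = [[("a", 1), ("b", 2)], [("c", 3)]]
Q = [[("a", 1), ("b", 2), ("d", 4)], [("e", 5)]]
pairs = tb_ts_join(P, Q, theta=0.5)
```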

  5. Parallel trajectory similarity joins in spatial networks

    KAUST Repository

    Shang, Shuo; Chen, Lisi; Wei, Zhewei; Jensen, Christian S.; Zheng, Kai; Kalnis, Panos

    2018-01-01

    The matching of similar pairs of objects, called similarity join, is a fundamental operation in data management. We consider two cases of trajectory similarity joins (TS-Joins), including a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter, and it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and enable use of the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join that rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories. Then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the merging steps have constant cost. An empirical study with real data offers insight into the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.

  6. Are calanco landforms similar to river basins?

    Science.gov (United States)

    Caraballo-Arias, N A; Ferro, V

    2017-12-15

    In the past, badlands have often been considered ideal field laboratories for studying landscape evolution because of their geometrical similarity to larger fluvial systems. For a given hydrological process, however, no scientific proof exists that badlands can be considered models of river basin prototypes. In this paper, measurements carried out on 45 Sicilian calanchi, a type of badlands that appears as a small-scale hydrographic unit, are used to establish their morphological similarity with river systems whose data are available in the literature. First, the geomorphological similarity is studied by identifying the dimensionless groups, which can assume the same value or a scaled one in a fixed ratio, representing drainage basin shape, stream network and relief properties. Then, for each property, the dimensionless groups are calculated for the investigated calanchi and the river basins, and their corresponding scale ratio is evaluated. The applicability of Hack's, Horton's and Melton's laws for establishing similarity criteria is also tested. The developed analysis allows us to conclude that a quantitative morphological similarity between calanco landforms and river basins can be established using commonly applied dimensionless groups. In particular, the analysis showed that i) calanchi and river basins have geometrically similar shapes with respect to the parameters Rf and Re, with a scale factor close to 1, ii) calanchi and river basins are similar with respect to the bifurcation and length ratios (λ=1), iii) for the investigated calanchi the Melton number assumes values less than the value (0.694) corresponding to the river case, and a scale ratio ranging from 0.52 to 0.78 can be used, iv) calanchi and river basins have similar mean relief ratio values (λ=1.13) and v) calanchi present active geomorphic processes and therefore fall in a more juvenile stage with respect to river basins. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A Measure of Similarity Between Trajectories of Vessels

    Directory of Open Access Journals (Sweden)

    Le QI

    2016-03-01

    Full Text Available The measurement of similarity between trajectories of vessels is one of the key problems that must be addressed to promote the development of maritime intelligent traffic systems (ITS). In this study, a new model of trajectory similarity measurement was established to improve data processing efficiency in dynamic applications and to reflect the actual sailing behaviors of vessels. In this model, a feature point detection algorithm was proposed to extract feature points, reduce data storage space and save computational resources. A new synthesized distance algorithm was also created to measure the similarity between trajectories by using the extracted feature points. An experiment was conducted to measure the similarity between real trajectories of vessels. Because these trajectories grow over time, measurements were conducted over different voyages. The results show that the similarity measurement between the vessel trajectories is efficient and correct. Comparison of the synthesized distance with the sailing behaviors of vessels proves that the results are consistent with actual situations. The experiment results demonstrate the promising application of the proposed model in studying vessel traffic and in supplying reliable data for the development of maritime ITS.
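The paper's exact feature-point detection algorithm is not given here, but the general idea of keeping only the points that deviate from the straight-line course can be sketched with the classic Ramer-Douglas-Peucker simplification as a stand-in; the tolerance ε and the sample track are made up.

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    num = abs(dy * px - dx * py + bx * ay - by * ax)
    return num / (dx * dx + dy * dy) ** 0.5

def feature_points(track, eps):
    """Recursively keep points deviating more than eps from the chord
    joining the segment's endpoints (Ramer-Douglas-Peucker)."""
    if len(track) < 3:
        return list(track)
    d, idx = max((point_line_distance(p, track[0], track[-1]), i)
                 for i, p in enumerate(track[1:-1], start=1))
    if d <= eps:
        return [track[0], track[-1]]
    left = feature_points(track[:idx + 1], eps)
    return left[:-1] + feature_points(track[idx:], eps)

# A mostly straight track with one sharp excursion at (3, 2):
track = [(0, 0), (1, 0), (2, 0), (3, 2), (4, 0), (5, 0)]
kept = feature_points(track, eps=1.2)
```

The extracted points would then feed a synthesized distance between trajectories; that distance itself is specific to the paper and not reproduced here.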

  8. Quantifying the dilution effect for models in ecological epidemiology.

    Science.gov (United States)

    Roberts, M G; Heesterbeek, J A P

    2018-03-01

    The dilution effect, where an increase in biodiversity results in a reduction in the prevalence of an infectious disease, has been the subject of speculation and controversy. Conversely, an amplification effect occurs when increased biodiversity is related to an increase in prevalence. We explore the conditions under which these effects arise, using multi-species compartmental models that integrate ecological and epidemiological interactions. We introduce three potential metrics for quantifying dilution and amplification: one based on infection prevalence in a focal host species, one based on the size of the infected subpopulation of that species, and one based on the basic reproduction number. We introduce our approach in the simplest epidemiological setting with two species, and show that the existence and strength of a dilution effect is influenced strongly by the choices made to describe the system and the metric used to gauge the effect. We show that our method can be generalized to any number of species and to more complicated ecological and epidemiological dynamics. Our method allows a rigorous analysis of ecological systems where dilution effects have been postulated, and contributes to future progress in understanding the phenomenon of dilution in the context of infectious disease dynamics and infection risk. © 2018 The Author(s).
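The third metric, based on the basic reproduction number, is conventionally computed as the spectral radius of the next-generation matrix K = FV⁻¹. A minimal sketch for a hypothetical two-host system with made-up parameters (this is the standard construction, not the paper's specific model):

```python
# R0 as the spectral radius of the next-generation matrix K = F V^{-1}
# for a hypothetical two-host system: beta[i][j] is the transmission
# rate to host i from host j, pop[i] the susceptible pool of host i,
# removal[j] the removal (recovery + death) rate of infected host j.
# All parameter values are made up for illustration.

def next_generation_r0(beta, pop, removal):
    """Spectral radius of K = F V^{-1} for two host species."""
    # K[i][j] = beta[i][j] * pop[i] / removal[j]
    k = [[beta[i][j] * pop[i] / removal[j] for j in range(2)]
         for i in range(2)]
    # Largest eigenvalue of a 2x2 matrix with nonnegative off-diagonals:
    tr = k[0][0] + k[1][1]
    det = k[0][0] * k[1][1] - k[0][1] * k[1][0]
    disc = max(tr * tr - 4.0 * det, 0.0)
    return (tr + disc ** 0.5) / 2.0

beta = [[0.002, 0.0005], [0.0005, 0.001]]   # transmission rates
pop = [100.0, 50.0]                          # susceptible pool sizes
removal = [0.1, 0.1]                         # removal rates

r0 = next_generation_r0(beta, pop, removal)
```

Dilution under this metric would show up as r0 decreasing when a second host species is added to the single-host baseline.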

  9. Quantify uncertain emergency search techniques (QUEST) -- Theory and user's guide

    International Nuclear Information System (INIS)

    Johnson, M.M.; Goldsby, M.E.; Plantenga, T.D.; Porter, T.L.; West, T.H.; Wilcox, W.B.; Hensley, W.K.

    1998-01-01

    As recent world events show, criminal and terrorist access to nuclear materials is a growing national concern. The national laboratories are taking the lead in developing technologies to counter these potential threats to national security. Sandia National Laboratories, with support from Pacific Northwest National Laboratory and the Bechtel Nevada Remote Sensing Laboratory, has developed QUEST (a model to Quantify Uncertain Emergency Search Techniques) to enhance the performance of organizations in the search for lost or stolen nuclear material. In addition, QUEST supports a wide range of other applications, such as environmental monitoring, nuclear facility inspections, and searcher training. QUEST simulates the search for nuclear materials and calculates detector response for various source types and locations. The probability of detecting a radioactive source during a search is a function of many different variables, including source type, search location and structure geometry (including shielding), search dynamics (path and speed), and detector type and size. Through calculation of dynamic detector response, QUEST makes possible quantitative comparisons of various sensor technologies and search patterns. The QUEST model can be used as a tool to examine the impact of new detector technologies, explore alternative search concepts, and provide interactive search/inspector training.

  10. A Generalizable Methodology for Quantifying User Satisfaction

    Science.gov (United States)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-development process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model-development process, we can also identify the most significant performance factors and their impact on user satisfaction, and discuss how they can be exploited to improve user experience and optimize resource allocation.
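Survival analysis of session times typically starts from the Kaplan-Meier estimator, which handles sessions still in progress at the end of measurement (censored observations). A minimal sketch with invented session durations; this is the standard estimator, not the authors' full model-development process:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of S(t) from session times.

    durations: session lengths; observed: True if the session ended
    normally, False if it was censored (still running at cutoff).
    Returns a list of (time, survival_probability) steps.
    """
    events = sorted(zip(durations, observed))
    at_risk = len(events)
    surv = 1.0
    curve = []
    i = 0
    while i < len(events):
        t = events[i][0]
        deaths = sum(1 for d, o in events if d == t and o)
        n_t = sum(1 for d, _ in events if d == t)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_t
        i += n_t
    return curve

# Hypothetical session times (minutes); False = user still online.
times = [5, 10, 10, 20, 30, 30]
ended = [True, True, False, True, True, False]
curve = kaplan_meier(times, ended)
```

Comparing survival curves under different network conditions (e.g. loss or delay levels) is then one way to read off which factors drive users to quit early.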

  11. Quantifying the efficiency of river regulation

    Directory of Open Access Journals (Sweden)

    R. Rödel

    2005-01-01

    Full Text Available Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff time series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using a linear single-storage model. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for long-term regulated runoff series of the Luleälven in northern Sweden.
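The linear single-storage idea can be sketched as follows: with S = kQ, the recession constant k is estimated from the slope of ln Q during an emptying phase, and a shift between two flow regimes then implies a storage change ΔS = k(Q₁ − Q₂). The numbers below are synthetic, not from the Luleälven record, and the paper's exact fitting procedure may differ.

```python
import math

# Linear single-storage model: S = k * Q, so dQ/dt = -Q / k during
# emptying and Q(t) = Q0 * exp(-t / k). A change between two flow
# regimes corresponds to a storage change dS = k * (Q1 - Q2).

def recession_constant(flows):
    """Estimate k (in days) from a recession limb of daily flows by
    least-squares fitting the slope of ln Q against time."""
    n = len(flows)
    xs = range(n)
    ys = [math.log(q) for q in flows]
    xm = sum(xs) / n
    ym = sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return -1.0 / slope

def storage_change(k, q_before, q_after):
    """Volume released when the regime shifts from q_before to q_after."""
    return k * (q_before - q_after)

# Synthetic recession limb with k = 20 days: Q(t) = 100 * exp(-t / 20)
limb = [100.0 * math.exp(-t / 20.0) for t in range(10)]
k = recession_constant(limb)
ds = storage_change(k, q_before=80.0, q_after=50.0)  # flow units * days
```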

  12. Quantifying meta-correlations in financial markets

    Science.gov (United States)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether the mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information on and quantification of the index leverage effect, and have implications for risk management, portfolio optimization, and the stability of financial markets.
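The meta-correlation computation can be sketched directly: average the pairwise Pearson correlations of component returns within each time window, then correlate that series with the index return. This equal-weight sketch omits the paper's windowing details and uses toy data:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_correlation(window):
    """Average pairwise correlation of the stocks in one time window."""
    names = list(window)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    return sum(pearson(window[a], window[b]) for a, b in pairs) / len(pairs)

def meta_correlation(windows, index_returns):
    """Correlation of the mean-correlation series with index returns."""
    return pearson([mean_correlation(w) for w in windows], index_returns)

# Two toy windows of daily returns for three stocks:
w1 = {"A": [1, 2, 3], "B": [2, 4, 6], "C": [1, 3, 5]}  # all pairs corr = 1
w2 = {"A": [1, 2, 3], "B": [2, 4, 6], "C": [3, 2, 1]}  # mean corr = -1/3
mc = meta_correlation([w1, w2], [0.01, -0.02])
```

Correlation volatility would be computed analogously, taking the standard deviation of the pairwise correlations in each window instead of their mean.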

  13. Semantic similarity between ontologies at different scales

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qingpeng; Haglin, David J.

    2016-04-01

    In the past decade, existing and new knowledge and datasets have been encoded in different ontologies for semantic web and biomedical research. Ontologies are often very large in terms of the number of concepts and relationships, which makes the analysis of ontologies and the represented knowledge graphs computationally expensive and time-consuming. As the ontologies of various semantic web and biomedical applications usually show explicit hierarchical structures, it is interesting to explore the trade-offs between ontological scales and the preservation/precision of results when we analyze ontologies. This paper presents a first effort to examine this idea by studying the relationship between scaling biomedical ontologies at different levels and the resulting semantic similarity values. We evaluate the semantic similarity between three Gene Ontology slims (Plant, Yeast, and Candida, the latter two belonging to the same kingdom, Fungi) using four popular measures commonly applied to biomedical ontologies (Resnik, Lin, Jiang-Conrath, and SimRel). The results of this study demonstrate that with proper selection of scaling levels and similarity measures, we can significantly reduce the size of ontologies without losing substantial detail. In particular, the performance of Jiang-Conrath and Lin is more reliable and stable than that of the other two in this experiment, as proven by (a) consistently showing that Yeast and Candida are more similar (as compared to Plant) at different scales, and (b) small deviations of the similarity values after excluding a majority of nodes from several lower scales. This study provides a deeper understanding of the application of semantic similarity to biomedical ontologies, and sheds light on how to choose appropriate semantic similarity measures for biomedical engineering.
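The Resnik and Lin measures can be sketched on a toy is-a hierarchy: Resnik scores a concept pair by the information content (IC) of their most informative common ancestor, and Lin normalizes that by the concepts' own IC. The hierarchy and annotation counts below are invented stand-ins for a GO slim:

```python
import math

# Toy is-a hierarchy and annotation counts; real applications use the
# Gene Ontology and corpus-derived information content (IC).
PARENT = {"yeast_budding": "cell_division",
          "mitosis": "cell_division",
          "cell_division": "cell_process",
          "cell_process": None}
COUNT = {"yeast_budding": 5, "mitosis": 10,
         "cell_division": 30, "cell_process": 100}
TOTAL = 100

def ancestors(c):
    """The concept itself plus all its is-a ancestors."""
    out = set()
    while c is not None:
        out.add(c)
        c = PARENT[c]
    return out

def ic(c):
    """Information content: -log p(c), p from annotation frequency."""
    return -math.log(COUNT[c] / TOTAL)

def mica(c1, c2):
    """Most informative common ancestor."""
    return max(ancestors(c1) & ancestors(c2), key=ic)

def sim_resnik(c1, c2):
    return ic(mica(c1, c2))

def sim_lin(c1, c2):
    return 2.0 * ic(mica(c1, c2)) / (ic(c1) + ic(c2))
```

Scaling an ontology in the paper's sense amounts to removing concepts below some level and recomputing these scores over the pruned hierarchy.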

  14. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    …rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference) with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets, which demonstrates that some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show in east Africa that the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations and also that, where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1. Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Dhiédiou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834. Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K. Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z. Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. DOI: 10

  15. Using heteroclinic orbits to quantify topological entropy in fluid flows

    International Nuclear Information System (INIS)

    Sattari, Sulimon; Chen, Qianting; Mitchell, Kevin A.

    2016-01-01

    Topological approaches to mixing are important tools to understand chaotic fluid flows, ranging from oceanic transport to the design of micro-mixers. Typically, topological entropy, the exponential growth rate of material lines, is used to quantify topological mixing. Computing topological entropy from the direct stretching rate is computationally expensive and sheds little light on the source of the mixing. Earlier approaches emphasized that topological entropy could be viewed as generated by the braiding of virtual, or “ghost,” rods stirring the fluid in a periodic manner. Here, we demonstrate that topological entropy can also be viewed as generated by the braiding of ghost rods following heteroclinic orbits instead. We use the machinery of homotopic lobe dynamics, which extracts symbolic dynamics from finite-length pieces of stable and unstable manifolds attached to fixed points of the fluid flow. As an example, we focus on the topological entropy of a bounded, chaotic, two-dimensional, double-vortex cavity flow. Over a certain parameter range, the topological entropy is primarily due to the braiding of a period-three orbit. However, this orbit does not explain the topological entropy for parameter values where it does not exist, nor does it explain the excess of topological entropy for the entire range of its existence. We show that braiding by heteroclinic orbits provides an accurate computation of topological entropy when the period-three orbit does not exist, and that it provides an explanation for some of the excess topological entropy when the period-three orbit does exist. Furthermore, the computation of symbolic dynamics using heteroclinic orbits has been automated and can be used to compute topological entropy for a general 2D fluid flow.

  16. Model Checking Quantified Computation Tree Logic

    NARCIS (Netherlands)

    Rensink, Arend; Baier, C; Hermanns, H.

    2006-01-01

    Propositional temporal logic is not suitable for expressing properties on the evolution of dynamically allocated entities over time. In particular, it is not possible to trace such entities through computation steps, since this requires the ability to freely mix quantification and temporal

  17. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    Science.gov (United States)

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
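The established lexical-competition metrics referenced here include neighborhood density in the Luce and Pisoni tradition: the number of words differing from the stimulus by one phoneme substitution, deletion, or addition. A minimal sketch with an invented mini-lexicon (the phi-square confusability measure itself is not shown):

```python
def edit1_neighbors(word, lexicon):
    """Count lexicon entries within one phoneme substitution,
    deletion, or insertion of `word` (phonemes given as tuples)."""
    def is_neighbor(a, b):
        la, lb = len(a), len(b)
        if abs(la - lb) > 1 or a == b:
            return False
        if la == lb:                      # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        if la > lb:                       # make b the longer string
            a, b = b, a
        # insertion/deletion: b is a with one extra phoneme
        return any(b[:i] + b[i + 1:] == a for i in range(len(b)))
    return sum(is_neighbor(word, w) for w in lexicon)

# Invented mini-lexicon in a rough ARPAbet-style notation:
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "p"),
           ("k", "ae", "t", "s"), ("d", "ao", "g")]
density = edit1_neighbors(("k", "ae", "t"), lexicon)
```

For visual (lipread) speech, the same computation would run over viseme-level transcriptions, so the neighborhoods, and hence the competition estimates, are modality-specific.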

  18. Measure of Node Similarity in Multilayer Networks

    DEFF Research Database (Denmark)

    Møllgaard, Anders; Zettler, Ingo; Dammeyer, Jesper

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large… university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data… might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dis-similarity for the nodes connected by the strongest…

  19. Universal self-similarity of propagating populations.

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d-dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common--yet arbitrary--motion pattern; each particle has its own random propagation parameters--emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles' displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles' underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  20. Universal self-similarity of propagating populations

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d -dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common—yet arbitrary—motion pattern; each particle has its own random propagation parameters—emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles’ displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles’ underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  1. Trajectory similarity join in spatial networks

    KAUST Repository

    Shang, Shuo; Chen, Lisi; Wei, Zhewei; Jensen, Christian S.; Zheng, Kai; Kalnis, Panos

    2017-01-01

    With these applications in mind, we provide a purposeful definition of similarity. To enable efficient TS-Join processing on large sets of trajectories, we develop search space pruning techniques and take into account the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer algorithm. For each trajectory, the algorithm first finds similar trajectories. Then it merges the results to achieve a final result. The algorithm exploits an upper bound on the spatiotemporal similarity and a heuristic scheduling strategy for search space pruning. The algorithm's per-trajectory searches are independent of each other and can be performed in parallel, and the merging has constant cost. An empirical study with real data offers insight into the performance of the algorithm and demonstrates that it is capable of outperforming a well-designed baseline algorithm by an order of magnitude.

  2. Phonological similarity in working memory span tasks.

    Science.gov (United States)

    Chow, Michael; Macnamara, Brooke N; Conway, Andrew R A

    2016-08-01

    In a series of four experiments, we explored what conditions are sufficient to produce a phonological similarity facilitation effect in working memory span tasks. By using the same set of memoranda but varying the secondary-task requirements across experiments, we showed that a phonological similarity facilitation effect is dependent upon the semantic relationship between the memoranda and the secondary-task stimuli, and is robust to changes in the representation, ordering, and pool size of the secondary-task stimuli. These findings are consistent with interference accounts of memory (Brown, Neath, & Chater, Psychological Review, 114, 539-576, 2007; Oberauer, Lewandowsky, Farrell, Jarrold, & Greaves, Psychonomic Bulletin & Review, 19, 779-819, 2012), whereby rhyming stimuli provide a form of categorical similarity that allows distractors to be excluded from retrieval at recall.

  3. Unveiling Music Structure Via PLSA Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Meng, Anders; Petersen, Kaare Brandt

    2007-01-01

    Nowadays there is an increasing interest in developing methods for building music recommendation systems. In order to get a satisfactory performance from such a system, one needs to incorporate as much information about song similarity as possible; however, how to do so is not obvious. In this p… …observed similarities can be satisfactorily explained using the latent semantics. Additionally, this approach significantly simplifies the song retrieval phase, leading to a more practical system implementation. The suitability of the PLSA model for representing music structure is studied in a simplified…

  4. Quantifying uncertainties of climate signals related to the 11-year solar cycle

    Science.gov (United States)

    Kruschke, T.; Kunze, M.; Matthes, K. B.; Langematz, U.; Wahl, S.

    2017-12-01

    Although state-of-the-art reconstructions based on proxies and (semi-)empirical models converge in terms of total solar irradiance, they still significantly differ in terms of spectral solar irradiance (SSI) with respect to the mean spectral distribution of energy input and temporal variability. This study aims at quantifying uncertainties for the Earth's climate related to the 11-year solar cycle by forcing two chemistry-climate models (CCMs) - CESM1(WACCM) and EMAC - with five different SSI reconstructions (NRLSSI1, NRLSSI2, SATIRE-T, SATIRE-S, CMIP6-SSI) and the reference spectrum RSSV1-ATLAS3, derived from observations. We conduct a unique set of timeslice experiments. External forcings and boundary conditions are fixed and identical for all experiments, except for the solar forcing. The set of analyzed simulations consists of one solar minimum simulation, employing RSSV1-ATLAS3, and five solar maximum experiments. The latter are a result of adding the amplitude of solar cycle 22 according to the five reconstructions to RSSV1-ATLAS3. Our results show that the climate response to the 11-year solar cycle is generally robust across CCMs and SSI forcings. However, analyzing the variance of the solar maximum ensemble by means of ANOVA statistics reveals additional information on the uncertainties of the mean climate signals. The annual mean response agrees very well between the two CCMs for most parts of the lower and middle atmosphere. Only the upper mesosphere is subject to significant differences related to the choice of the model. However, the different SSI forcings lead to significant differences in ozone concentrations, shortwave heating rates, and temperature throughout large parts of the mesosphere and upper stratosphere. Regarding the seasonal evolution of the climate signals, our findings for shortwave heating rates and temperature are similar to the annual means with respect to the relative importance of the choice of the model or the SSI forcing for the

  5. Large margin classification with indefinite similarities

    KAUST Repository

    Alabdulmohsin, Ibrahim

    2016-01-07

    Classification with indefinite similarities has attracted attention in the machine learning community. This is partly due to the fact that many similarity functions that arise in practice are not symmetric positive semidefinite, i.e. the Mercer condition is not satisfied, or the Mercer condition is difficult to verify. Examples of such indefinite similarities in machine learning applications are ample including, for instance, the BLAST similarity score between protein sequences, human-judged similarities between concepts and words, and the tangent distance or the shape matching distance in computer vision. Nevertheless, previous works on classification with indefinite similarities are not fully satisfactory. They have either introduced sources of inconsistency in handling past and future examples using kernel approximation, settled for local-minimum solutions using non-convex optimization, or produced non-sparse solutions by learning in Krein spaces. Despite the large volume of research devoted to this subject lately, we demonstrate in this paper how an old idea, namely the 1-norm support vector machine (SVM) proposed more than 15 years ago, has several advantages over more recent work. In particular, the 1-norm SVM method is conceptually simpler, which makes it easier to implement and maintain. It is competitive, if not superior to, all other methods in terms of predictive accuracy. Moreover, it produces solutions that are often sparser than more recent methods by several orders of magnitude. In addition, we provide various theoretical justifications by relating 1-norm SVM to well-established learning algorithms such as neural networks, SVM, and nearest neighbor classifiers. Finally, we conduct a thorough experimental evaluation, which reveals that the evidence in favor of 1-norm SVM is statistically significant.
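
    The 1-norm SVM the paper advocates can be sketched as L1-regularized hinge-loss minimization over the similarity matrix itself, treated as a feature map, so no positive semidefiniteness is required. The toy data, subgradient training loop, and all parameter values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data with an indefinite "similarity": negative squared distance,
# which is symmetric but not positive semidefinite.
X = np.concatenate([rng.normal(-2, 0.5, 20), rng.normal(2, 0.5, 20)])
y = np.concatenate([-np.ones(20), np.ones(20)])

S = -(X[:, None] - X[None, :]) ** 2        # indefinite similarity matrix

alpha = np.zeros(len(X))                   # one coefficient per training example
b = 0.0
lam = 0.01                                 # strength of the 1-norm penalty
lr = 0.01

# Subgradient descent on hinge loss + lam * ||alpha||_1.
for _ in range(2000):
    margins = y * (S @ alpha + b)
    active = margins < 1                   # examples violating the margin
    grad_a = -(S.T @ (y * active)) + lam * np.sign(alpha)
    grad_b = -np.sum(y * active)
    alpha -= lr * grad_a / len(X)
    b -= lr * grad_b / len(X)

pred = np.sign(S @ alpha + b)
accuracy = np.mean(pred == y)
sparsity = np.mean(np.abs(alpha) < 1e-3)   # fraction of near-zero coefficients
```

    The 1-norm penalty is what drives many coefficients toward zero, which is the sparsity advantage the abstract highlights over Krein-space methods.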

  6. Morphological similarities between DBM and a microeconomic model of sprawl

    Science.gov (United States)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that it can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  7. Similarity joins in relational database systems

    CERN Document Server

    Augsten, Nikolaus

    2013-01-01

    State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computations.
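
    The edit distance identified above as the de facto standard, together with a token-based (q-gram) measure of the kind introduced to speed it up, can be sketched as follows; the padding character and q = 2 are illustrative choices, not the book's notation:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def qgrams(s: str, q: int = 2) -> set:
    """Token set of overlapping q-grams, padded so boundaries count."""
    padded = "#" * (q - 1) + s + "#" * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def qgram_overlap(a: str, b: str, q: int = 2) -> float:
    """Cheap Jaccard overlap of q-gram sets, usable as a pre-filter
    before computing the expensive edit distance."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb)
```

    In a similarity join, a filter like `qgram_overlap` prunes most candidate pairs cheaply; `edit_distance` is only evaluated on the survivors.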

  8. Outsourced Similarity Search on Metric Data Assets

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Assent, Ira; Jensen, Christian S.

    2012-01-01

    This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying...

  9. Measure of Node Similarity in Multilayer Networks

    DEFF Research Database (Denmark)

    Møllgaard, Anders; Zettler, Ingo; Dammeyer, Jesper

    2016-01-01

    ...university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data... might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dis-similarity for the nodes connected by the strongest...

  10. Cultural similarity and adjustment of expatriate academics

    DEFF Research Database (Denmark)

    Selmer, Jan; Lauring, Jakob

    2009-01-01

    The findings of a number of recent empirical studies of business expatriates, using different samples and methodologies, seem to support the counter-intuitive proposition that cultural similarity may be as difficult to adjust to as cultural dissimilarity. However, it is not obvious... and non-EU countries. Results showed that although the perceived cultural similarity between host and home country for the two groups of investigated respondents was different, there was neither any difference in their adjustment nor in the time it took for them to become proficient. Implications...

  11. How to quantify conduits in wood?

    Science.gov (United States)

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environmental parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology.

  12. Message passing for quantified Boolean formulas

    International Nuclear Information System (INIS)

    Zhang, Pan; Ramezanpour, Abolfazl; Zecchina, Riccardo; Zdeborová, Lenka

    2012-01-01

    We introduce two types of message passing algorithms for quantified Boolean formulas (QBF). The first type is a message-passing-based heuristic that can prove unsatisfiability of the QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second type, we use message passing to guide branching heuristics of a Davis–Putnam–Logemann–Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristics give robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers.
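
    The semantics of a prenex-CNF QBF that both the heuristics and the DPLL solver operate on can be sketched as a naive recursive evaluator (exponential in the number of variables, for intuition only; the encoding below is an assumption, not the authors' solver):

```python
def solve_qbf(prefix, clauses, assign=None):
    """Naive recursive evaluation of a prenex-CNF QBF.

    prefix:  list of ('e'|'a', var) pairs, outermost quantifier first.
    clauses: CNF matrix as lists of signed ints (e.g. [1, -2] means x1 or not-x2).
    Returns True iff the quantified formula is true.
    """
    assign = assign or {}
    if not prefix:
        # All variables bound: check every clause has a satisfied literal.
        return all(any(assign.get(abs(l)) == (l > 0) for l in c) for c in clauses)
    q, v = prefix[0]
    results = []
    for val in (False, True):        # branch on both values of the variable
        assign[v] = val
        results.append(solve_qbf(prefix[1:], clauses, assign))
        del assign[v]
    # Existential: one branch must succeed.  Universal: both must.
    return any(results) if q == 'e' else all(results)
```

    For example, "for all x there exists y with x iff y" is true, while swapping the quantifiers makes it false; message passing and branching heuristics aim to prune this doubly exponential search.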

  13. Quantifying decoherence in continuous variable systems

    Energy Technology Data Exchange (ETDEWEB)

    Serafini, A [Dipartimento di Fisica 'ER Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; Paris, M G A [Dipartimento di Fisica and INFM, Universita di Milano, Milan (Italy)]; Illuminati, F [Dipartimento di Fisica 'ER Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; De Siena, S [Dipartimento di Fisica 'ER Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]

    2005-04-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)

  14. Quantifying decoherence in continuous variable systems

    International Nuclear Information System (INIS)

    Serafini, A; Paris, M G A; Illuminati, F; De Siena, S

    2005-01-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)

  15. Crowdsourcing for quantifying transcripts: An exploratory study.

    Science.gov (United States)

    Azzam, Tarek; Harman, Elena

    2016-02-01

    This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or a set of tasks. In this study multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts, and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed method tool to offer a supplemental perspective on transcribed interviews. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Animal biometrics: quantifying and detecting phenotypic appearance.

    Science.gov (United States)

    Kühl, Hjalmar S; Burghardt, Tilo

    2013-07-01

    Animal biometrics is an emerging field that develops quantified approaches for representing and detecting the phenotypic appearance of species, individuals, behaviors, and morphological traits. It operates at the intersection between pattern recognition, ecology, and information sciences, producing computerized systems for phenotypic measurement and interpretation. Animal biometrics can benefit a wide range of disciplines, including biogeography, population ecology, and behavioral research. Currently, real-world applications are gaining momentum, augmenting the quantity and quality of ecological data collection and processing. However, to advance animal biometrics will require integration of methodologies among the scientific disciplines involved. Such efforts will be worthwhile because the great potential of this approach rests with the formal abstraction of phenomics, to create tractable interfaces between different organizational levels of life. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Quantifying capital goods for waste incineration

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Riber, C.; Christensen, Thomas Højlund

    2013-01-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  18. Pendulum Underwater - An Approach for Quantifying Viscosity

    Science.gov (United States)

    Leme, José Costa; Oliveira, Agostinho

    2017-12-01

    The purpose of the experiment presented in this paper is to quantify the viscosity of a liquid. Viscous effects are important in the flow of fluids in pipes, in the bloodstream, in the lubrication of engine parts, and in many other situations. In the present paper, the authors explore the oscillations of a physical pendulum in the form of a long and lightweight wire that carries a ball at its lower end, which is totally immersed in water, so as to determine the water viscosity. The system used represents a viscous damped pendulum and we tried different theoretical models to describe it. The experimental part of the present paper is based on a very simple and low-cost image capturing apparatus that can easily be replicated in a physics classroom. Data on the pendulum's amplitude as a function of time were acquired using digital video analysis with the open source software Tracker.
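
    A minimal sketch of the analysis pipeline described above: fit the exponentially decaying amplitude of an underdamped pendulum to recover the damping constant, then convert it to a viscosity under a Stokes-drag assumption. The synthetic data, the parameter values, and the Stokes-regime formula eta = 2*m*gamma/(3*pi*d) are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Synthetic amplitude data for an underdamped pendulum: A(t) = A0 * exp(-gamma * t).
t = np.linspace(0, 60, 30)             # time, s
gamma_true = 0.05                      # damping constant, 1/s
A = 0.1 * np.exp(-gamma_true * t)      # amplitude, rad

# A linear least-squares fit on log(A) recovers the damping constant as the slope.
slope, intercept = np.polyfit(t, np.log(A), 1)
gamma = -slope

# Under a Stokes-drag assumption (small sphere, low Reynolds number),
# b = 3*pi*eta*d and gamma = b/(2*m), so eta = 2*m*gamma/(3*pi*d).
m, d = 0.02, 0.015                     # ball mass (kg) and diameter (m), illustrative
eta = 2 * m * gamma / (3 * np.pi * d)  # dynamic viscosity, Pa*s
```

    In the experiment, the amplitude series would come from Tracker's video analysis rather than a synthetic exponential; the fit and the viscosity conversion are unchanged.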

  19. Quantifying gait patterns in Parkinson's disease

    Science.gov (United States)

    Romero, Mónica; Atehortúa, Angélica; Romero, Eduardo

    2017-11-01

    Parkinson's disease (PD) is constituted by a set of motor symptoms, namely tremor, rigidity, and bradykinesia, which are usually described but not quantified. This work proposes an objective characterization of PD gait patterns by approximating the single stance phase as a single grounded pendulum. This model estimates the force generated by the gait during the single support from gait data. This force describes the motion pattern for different stages of the disease. The model was validated using recorded videos of 8 young control subjects, 10 old control subjects and 10 subjects with Parkinson's disease in different stages. The estimated force showed differences among stages of Parkinson's disease, with a decrease of the estimated force for the advanced stages of this illness.

  20. Quantifying brain microstructure with diffusion MRI

    DEFF Research Database (Denmark)

    Novikov, Dmitry S.; Jespersen, Sune N.; Kiselev, Valerij G.

    2016-01-01

    ...the potential to quantify the relevant length scales for neuronal tissue, such as the packing correlation length for neuronal fibers, the degree of neuronal beading, and compartment sizes. The second avenue corresponds to the long-time limit, when the observed signal can be approximated as a sum of multiple non-exchanging anisotropic Gaussian components. Here the challenge lies in parameter estimation and in resolving its hidden degeneracies. The third avenue employs multiple diffusion encoding techniques, able to access information not contained in the conventional diffusion propagator. We conclude with our outlook on the future research directions which can open exciting possibilities for developing markers of pathology and development based on methods of studying mesoscopic transport in disordered systems.

  1. Quantifying Temporal Genomic Erosion in Endangered Species.

    Science.gov (United States)

    Díez-Del-Molino, David; Sánchez-Barreiro, Fatima; Barnes, Ian; Gilbert, M Thomas P; Dalén, Love

    2018-03-01

    Many species have undergone dramatic population size declines over the past centuries. Although stochastic genetic processes during and after such declines are thought to elevate the risk of extinction, comparative analyses of genomic data from several endangered species suggest little concordance between genome-wide diversity and current population sizes. This is likely because species-specific life-history traits and ancient bottlenecks overshadow the genetic effect of recent demographic declines. Therefore, we advocate that temporal sampling of genomic data provides a more accurate approach to quantify genetic threats in endangered species. Specifically, genomic data from predecline museum specimens will provide valuable baseline data that enable accurate estimation of recent decreases in genome-wide diversity, increases in inbreeding levels, and accumulation of deleterious genetic variation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 "Mathematical Methods for Extracting Quantifiable Information from Complex Systems." This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  3. Quantifying the evolution of individual scientific impact.

    Science.gov (United States)

    Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabási, Albert-László

    2016-11-04

    Despite the frequent use of numerous quantitative indicators to gauge the professional impact of a scientist, little is known about how scientific impact emerges and evolves in time. Here, we quantify the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist's sequence of publications. This random-impact rule allows us to formulate a stochastic model that uncouples the effects of productivity, individual ability, and luck and unveils the existence of universal patterns governing the emergence of scientific success. The model assigns a unique individual parameter Q to each scientist, which is stable during a career, and it accurately predicts the evolution of a scientist's impact, from the h-index to cumulative citations, and independent recognitions, such as prizes. Copyright © 2016, American Association for the Advancement of Science.
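
    The random-impact rule described above can be sketched with a toy simulation in which a paper's impact is the product of a stable per-scientist parameter Q and per-paper luck; the lognormal luck distribution and all parameter values are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random-impact rule: the impact of scientist i's paper a is c_ia = Q_i * p_a,
# where p_a ("luck") is drawn independently for every paper.
def career(Q, n_papers=100):
    luck = rng.lognormal(mean=0.0, sigma=1.0, size=n_papers)
    return Q * luck

high_Q = career(Q=5.0)
low_Q = career(Q=1.0)

# Q is recoverable (up to the mean of log-luck, here 0) from the average log impact,
# so it acts as a stable individual parameter across a career.
Q_hat_high = np.exp(np.mean(np.log(high_Q)))
Q_hat_low = np.exp(np.mean(np.log(low_Q)))

# The most influential paper falls at a random position in the publication sequence.
best_index = int(np.argmax(high_Q))
```

    Because luck multiplies rather than adds, the highest-impact paper is equally likely to appear anywhere in the sequence, which is the "random-impact rule" the abstract reports.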

  4. Quantifying creativity: can measures span the spectrum?

    Science.gov (United States)

    Simonton, Dean Keith

    2012-03-01

    Because cognitive neuroscientists have become increasingly interested in the phenomenon of creativity, the issue arises of how creativity is to be optimally measured. Unlike intelligence, which can be assessed across the full range of intellectual ability, creativity measures tend to concentrate on different sections of the overall spectrum. After first defining creativity in terms of the three criteria of novelty, usefulness, and surprise, this article provides an overview of the available measures. Not only do these instruments vary according to whether they focus on the creative process, person, or product, but they differ regarding whether they tap into "little-c" versus "Big-C" creativity; only productivity and eminence measures reach into genius-level manifestations of the phenomenon. The article closes by discussing whether various alternative assessment techniques can be integrated into a single measure that quantifies creativity across the full spectrum.

  5. Quantifying capital goods for waste incineration

    International Nuclear Information System (INIS)

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-01-01

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  6. Quantifying structural states of soft mudrocks

    Science.gov (United States)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to such transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach and the critical intervals with low strength shale formations are identified.

  7. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    Science.gov (United States)

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help
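
    The climatology-and-anomaly construction described above can be sketched for a single synthetic monthly time series: compute the climatological mean and a range limit for each calendar month, then express each observation as a departure from its monthly climatology. The per-calendar-month mean, the 95th-percentile range limit, and all values below are illustrative assumptions, not the study's metrics:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ten years of synthetic monthly sea-surface temperature (deg C): a seasonal
# cycle plus noise, with one anomalously warm January in the final year.
months = np.arange(120) % 12
sst = 26 + 2 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.2, 120)
sst[108] += 1.5                                    # warm event (index 108 is a January)

# Climatological mean and an upper range limit for each calendar month.
clim_mean = np.array([sst[months == m].mean() for m in range(12)])
clim_hi = np.array([np.percentile(sst[months == m], 95) for m in range(12)])

# Anomaly: departure of each observation from its monthly climatology,
# so the seasonal cycle is removed and the warm event stands out.
anomaly = sst - clim_mean[months]
```

    Scaling this to reef ecosystems amounts to repeating the computation per island or atoll footprint and per forcing variable (SST, chlorophyll-a, irradiance, wave energy).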

  8. Quantifying polypeptide conformational space: sensitivity to conformation and ensemble definition.

    Science.gov (United States)

    Sullivan, David C; Lim, Carmay

    2006-08-24

    Quantifying the density of conformations over phase space (the conformational distribution) is needed to model important macromolecular processes such as protein folding. In this work, we quantify the conformational distribution for a simple polypeptide (N-mer polyalanine) using the cumulative distribution function (CDF), which gives the probability that two randomly selected conformations are separated by less than a "conformational" distance and whose inverse gives conformation counts as a function of conformational radius. An important finding is that the conformation counts obtained by the CDF inverse depend critically on the assignment of a conformation's distance span and the ensemble (e.g., unfolded state model): varying ensemble and conformation definition (1 → 2 Å) varies the CDF-based conformation counts for Ala50 from 10^11 to 10^69. In particular, relatively short molecular dynamics (MD) relaxation of Ala50's random-walk ensemble reduces the number of conformers from 10^55 to 10^14 (using a 1 Å root-mean-square-deviation radius conformation definition), pointing to potential disconnections in comparing the results from simplified models of unfolded proteins with those from all-atom MD simulations. Explicit waters are found to roughen the landscape considerably. Under some common conformation definitions, the results herein provide (i) an upper limit to the number of accessible conformations that compose unfolded states of proteins, (ii) the optimal clustering radius/conformation radius for counting conformations for a given energy and solvent model, (iii) a means of comparing various studies, and (iv) an assessment of the applicability of random search in protein folding.
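
    The CDF construction can be sketched on a toy ensemble: estimate the probability that two randomly chosen conformations lie within a radius r, then invert it into a conformation count. The uniform 10-D point cloud and Euclidean distance below stand in for real conformations and pairwise RMSD, and the 1/CDF(r) count is an illustrative reading of the inverse, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "ensemble": 500 conformations as points in a 10-D configuration space.
ens = rng.uniform(0, 1, size=(500, 10))

# All pairwise distances (stand-in for pairwise RMSD between conformations).
diff = ens[:, None, :] - ens[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
pairs = dist[np.triu_indices(len(ens), k=1)]

def cdf(r):
    """Probability that two randomly chosen conformations are within r."""
    return np.mean(pairs < r)

def conformation_count(r):
    """Inverse-CDF estimate: distinct conformations of radius r needed to
    tile the sampled ensemble (shrinks as the radius grows)."""
    return 1.0 / cdf(r)
```

    The strong dependence of the count on r mirrors the abstract's point that shifting the conformation definition by even 1 Å can change the count by tens of orders of magnitude for a real polypeptide.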

  9. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    Science.gov (United States)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  10. Quantifying Climatological Ranges and Anomalies for Pacific Coral Reef Ecosystems

    Science.gov (United States)

    Gove, Jamison M.; Williams, Gareth J.; McManus, Margaret A.; Heron, Scott F.; Sandin, Stuart A.; Vetter, Oliver J.; Foley, David G.

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic–biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra, and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will

  11. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    Directory of Open Access Journals (Sweden)

    Jamison M Gove

    Full Text Available Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra, and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations.

  12. Clustering biomolecular complexes by residue contacts similarity

    NARCIS (Netherlands)

    Garcia Lopes Maia Rodrigues, João; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions.

  13. Similarity principles for equipment qualification by experience

    International Nuclear Information System (INIS)

    Kana, D.D.; Pomerening, D.J.

    1988-07-01

    A methodology is developed for seismic qualification of nuclear plant equipment by applying similarity principles to existing experience data. Experience data are available from previous qualifications by analysis or testing, or from actual earthquake events. Similarity principles are defined in terms of excitation, equipment physical characteristics, and equipment response. Physical similarity is further defined in terms of a critical transfer function for response at a location on a primary structure, whose response can be assumed directly related to ultimate fragility of the item under elevated levels of excitation. Procedures are developed for combining experience data into composite specifications for qualification of equipment that can be shown to be physically similar to the reference equipment. Other procedures are developed for extending qualifications beyond the original specifications under certain conditions. Some examples for application of the procedures and verification of them are given for certain cases that can be approximated by a two degree of freedom simple primary/secondary system. Other examples are based on use of actual test data available from previous qualifications. Relationships of the developments with other previously-published methods are discussed. The developments are intended to elaborate on the rather broad revised guidelines developed by the IEEE 344 Standards Committee for equipment qualification in new nuclear plants. However, the results also contribute to filling a gap that exists between the IEEE 344 methodology and that previously developed by the Seismic Qualification Utilities Group. The relationship of the results to safety margin methodology is also discussed. (author)

  14. 7 CFR 51.1997 - Similar type.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Similar type. 51.1997 Section 51.1997 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946...

  15. Efficient Similarity Retrieval in Music Databases

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Jensen, Christian Søndergaard

    2006-01-01

    Audio music is increasingly becoming available in digital form, and the digital music collections of individuals continue to grow. Addressing the need for effective means of retrieving music from such collections, this paper proposes new techniques for content-based similarity search. Each music...

  16. Similarity search of business process models

    NARCIS (Netherlands)

    Dumas, M.; García-Bañuelos, L.; Dijkman, R.M.

    2009-01-01

    Similarity search is a general class of problems in which a given object, called a query object, is compared against a collection of objects in order to retrieve those that most closely resemble the query object. This paper reviews recent work on an instance of this class of problems, where the

  17. Evaluating gender similarities and differences using metasynthesis.

    Science.gov (United States)

    Zell, Ethan; Krizan, Zlatan; Teeter, Sabrina R

    2015-01-01

    Despite the common lay assumption that males and females are profoundly different, Hyde (2005) used data from 46 meta-analyses to demonstrate that males and females are highly similar. Nonetheless, the gender similarities hypothesis has remained controversial. Since Hyde's provocative report, there has been an explosion of meta-analytic interest in psychological gender differences. We utilized this enormous collection of 106 meta-analyses and 386 individual meta-analytic effects to reevaluate the gender similarities hypothesis. Furthermore, we employed a novel data-analytic approach called metasynthesis (Zell & Krizan, 2014) to estimate the average difference between males and females and to explore moderators of gender differences. The average, absolute difference between males and females across domains was relatively small (d = 0.21, SD = 0.14), with the majority of effects being either small (46%) or very small (39%). Magnitude of differences fluctuated somewhat as a function of the psychological domain (e.g., cognitive variables, social and personality variables, well-being), but remained largely constant across age, culture, and generations. These findings provide compelling support for the gender similarities hypothesis, but also underscore conditions under which gender differences are most pronounced. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  18. Cross-kingdom similarities in microbiome functions

    NARCIS (Netherlands)

    Mendes, R.; Raaijmakers, J.M.

    2015-01-01

    Recent advances in medical research have revealed how humans rely on their microbiome for diverse traits and functions. Similarly, microbiomes of other higher organisms play key roles in disease, health, growth and development of their host. Exploring microbiome functions across kingdoms holds

  19. Measuring structural similarity in large online networks.

    Science.gov (United States)

    Shi, Yongren; Macy, Michael

    2016-09-01

    Structural similarity based on bipartite graphs can be used to detect meaningful communities, but the networks have been tiny compared to massive online networks. Scalability is important in applications involving tens of millions of individuals with highly skewed degree distributions. Simulation analysis holding underlying similarity constant shows that two widely used measures - Jaccard index and cosine similarity - are biased by the distribution of out-degree in web-scale networks. However, an alternative measure, the Standardized Co-incident Ratio (SCR), is unbiased. We apply SCR to members of Congress, musical artists, and professional sports teams to show how massive co-following on Twitter can be used to map meaningful affiliations among cultural entities, even in the absence of direct connections to one another. Our results show how structural similarity can be used to map cultural alignments and demonstrate the potential usefulness of social media data in the study of culture, politics, and organizations across the social and behavioral sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
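The two measures this record identifies as degree-biased are simple set-overlap statistics on co-follower sets. A minimal sketch of both, computed on binary follower vectors (the artist names and user IDs below are invented; the SCR itself is not reproduced, since the abstract does not give its formula):

```python
def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B| for two sets of follower IDs."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    """Cosine similarity of binary follower vectors: |A ∩ B| / sqrt(|A| * |B|)."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / ((len(a) * len(b)) ** 0.5)

# hypothetical co-follower sets for two musical artists
artist_x = {"u1", "u2", "u3", "u4"}
artist_y = {"u3", "u4", "u5"}
print(jaccard(artist_x, artist_y))        # → 0.4
print(round(cosine(artist_x, artist_y), 3))  # → 0.577
```

Both formulas depend directly on the set sizes |A| and |B|, which is exactly why skewed out-degree distributions bias them at web scale.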

  20. Phonological Similarity in American Sign Language.

    Science.gov (United States)

    Hildebrandt, Ursula; Corina, David

    2002-01-01

    Investigates deaf and hearing subjects' ratings of American Sign Language (ASL) signs to assess whether linguistic experience shapes judgments of sign similarity. Findings are consistent with linguistic theories that posit movement and location as core structural elements of syllable structure in ASL. (Author/VWL)

  1. Structural similarity and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2004-01-01

    It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationshi...

  2. Music Retrieval based on Melodic Similarity

    NARCIS (Netherlands)

    Typke, R.

    2007-01-01

    This thesis introduces a method for measuring melodic similarity for notated music such as MIDI files. This music search algorithm views music as sets of notes that are represented as weighted points in the two-dimensional space of time and pitch. Two point sets can be compared by calculating how

  3. Measurement of Similarity in Academic Contexts

    Directory of Open Access Journals (Sweden)

    Omid Mahian

    2017-06-01

    Full Text Available We propose some reflections, comments and suggestions about the measurement of similar and matched content in scientific papers and documents, and the need to develop appropriate tools and standards for an ethically fair and equitable treatment of authors.

  4. Appropriate Similarity Measures for Author Cocitation Analysis

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo)

    2007-01-01

    textabstractWe provide a number of new insights into the methodological discussion about author cocitation analysis. We first argue that the use of the Pearson correlation for measuring the similarity between authors’ cocitation profiles is not very satisfactory. We then discuss what kind of

  5. Similarity of Experience and Empathy in Preschoolers.

    Science.gov (United States)

    Barnett, Mark A.

    The present study examined the role of similarity of experience in young children's affective reactions to others. Some preschoolers played one of two games (Puzzle Board or Buckets) and were informed that they had either failed or succeeded; others merely observed the games being played and were given no evaluative feedback. Subsequently, each…

  6. Cultural Similarities and Differences on Idiom Translation

    Institute of Scientific and Technical Information of China (English)

    黄频频; 陈于全

    2010-01-01

    Both English and Chinese abound with idioms. Idioms are an important part of the language and culture of a society. English and Chinese idioms carved with cultural characteristics account for a great part in translation. This paper studies the translation of idioms concerning their cultural similarities, cultural differences and translation principles.

  7. Learning by similarity in coordination problems

    Czech Academy of Sciences Publication Activity Database

    Steiner, Jakub; Stewart, C.

    -, č. 324 (2007), s. 1-40 ISSN 1211-3298 R&D Projects: GA MŠk LC542 Institutional research plan: CEZ:AV0Z70850503 Keywords : similarity * learning * case-based reasoning Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp324.pdf

  8. Outsourced similarity search on metric data assets

    KAUST Repository

    Yiu, Man Lung; Assent, Ira; Jensen, Christian Sø ndergaard; Kalnis, Panos

    2012-01-01

    for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise

  9. Understanding similarity of groundwater systems with empirical copulas

    Science.gov (United States)

    Haaf, Ezra; Kumar, Rohini; Samaniego, Luis; Barthel, Roland

    2016-04-01

    Within the classification framework for groundwater systems that aims for identifying similarity of hydrogeological systems and transferring information from a well-observed to an ungauged system (Haaf and Barthel, 2015; Haaf and Barthel, 2016), we propose a copula-based method for describing groundwater-systems similarity. Copulas are an emerging method in hydrological sciences that make it possible to model the dependence structure of two groundwater level time series, independently of the effects of their marginal distributions. This study is based on Samaniego et al. (2010), which described an approach calculating dissimilarity measures from bivariate empirical copula densities of streamflow time series. Subsequently, streamflow is predicted in ungauged basins by transferring properties from similar catchments. The proposed approach is innovative because copula-based similarity has not yet been applied to groundwater systems. Here we estimate the pairwise dependence structure of 600 wells in Southern Germany using 10 years of weekly groundwater level observations. Based on these empirical copulas, dissimilarity measures are estimated, such as the copula's lower- and upper-corner cumulated probability and the copula-based Spearman's rank correlation, as proposed by Samaniego et al. (2010). For the characterization of groundwater systems, copula-based metrics are compared with dissimilarities obtained from precipitation signals corresponding to the presumed area of influence of each groundwater well. This promising approach provides a new tool for advancing similarity-based classification of groundwater system dynamics. Haaf, E., Barthel, R., 2015. Methods for assessing hydrogeological similarity and for classification of groundwater systems on the regional scale, EGU General Assembly 2015, Vienna, Austria. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs, EGU General Assembly.
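The rank (pseudo-observation) transform that underlies empirical copulas, together with two of the dissimilarity ingredients named above (copula-based Spearman's rank correlation and the lower-corner cumulated probability), can be sketched as follows. The two well hydrographs are hypothetical toy data, not drawn from the 600-well study:

```python
def ranks(xs):
    """Pseudo-observations u_i = rank_i / n in (0, 1]; ties broken by input order
    (adequate for continuous groundwater levels)."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    r = [0.0] * n
    for pos, i in enumerate(order, start=1):
        r[i] = pos / n
    return r

def spearman(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the rank transforms."""
    u, v = ranks(xs), ranks(ys)
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def lower_corner(xs, ys, q=0.25):
    """Empirical copula mass in the lower corner: P(U <= q, V <= q),
    i.e., how often both wells are jointly in their low-level regime."""
    u, v = ranks(xs), ranks(ys)
    return sum(a <= q and b <= q for a, b in zip(u, v)) / len(u)

# hypothetical weekly groundwater levels at two wells
well_a = [3.1, 3.0, 2.8, 2.9, 3.3, 3.6, 3.5, 3.2]
well_b = [7.2, 7.1, 6.9, 7.0, 7.4, 7.9, 7.6, 7.3]
print(spearman(well_a, well_b))      # → 1.0 (identical rank orderings)
print(lower_corner(well_a, well_b))  # → 0.25
```

Because the comparison happens entirely on ranks, the two wells can have very different absolute levels and still be judged similar, which is the stated appeal of the copula view.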

  10. Extending the Similarity-Attraction Effect : The effects of When-Similarity in mediated communication

    NARCIS (Netherlands)

    Kaptein, M.C.; Castaneda, D.; Fernandez, N.; Nass, C.

    2014-01-01

    The feeling of connectedness experienced in computer-mediated relationships can be explained by the similarity-attraction effect (SAE). Though SAE is well established in psychology, the effects of some types of similarity have not yet been explored. In 2 studies, we demonstrate similarity-attraction

  11. Investigating Correlation between Protein Sequence Similarity and Semantic Similarity Using Gene Ontology Annotations.

    Science.gov (United States)

    Ikram, Najmul; Qadir, Muhammad Abdul; Afzal, Muhammad Tanvir

    2018-01-01

    Sequence similarity is a commonly used measure to compare proteins. With the increasing use of ontologies, semantic (function) similarity is gaining importance. The correlation between these measures has been applied in the evaluation of new semantic similarity methods and in protein function prediction. In this research, we investigate the relationship between the two similarity methods. The results suggest the absence of a strong correlation between sequence and semantic similarities. There is a large number of proteins with low sequence similarity and high semantic similarity. We observe that Pearson's correlation coefficient is not sufficient to explain the nature of this relationship. Interestingly, term semantic similarity values above 0 and below 1 do not seem to play a role in improving the correlation. That is, the correlation coefficient depends only on the number of common GO terms in the proteins under comparison, and the semantic similarity measurement method does not influence it. Semantic similarity and sequence similarity thus behave distinctly. These findings have significant implications for future work on protein comparison and will help in understanding the semantic similarity between proteins in a better way.
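The correlation at issue is the ordinary Pearson coefficient between paired sequence- and semantic-similarity scores. A minimal sketch on hypothetical protein-pair scores (the values below are invented to mimic the reported pattern of low sequence similarity coexisting with high semantic similarity):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between paired similarity scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical protein pairs: low sequence similarity often coincides with
# high GO-based semantic similarity, pulling r toward zero
seq_sim = [0.15, 0.20, 0.90, 0.10, 0.35]
sem_sim = [0.80, 0.30, 0.95, 0.85, 0.40]
print(round(pearson(seq_sim, sem_sim), 3))  # → 0.364, a weak correlation
```

A single number like this hides the cloud of low-sequence/high-semantic pairs, which is why the abstract argues that Pearson's r alone cannot characterize the relationship.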

  12. Quantifying selectivity in spectrophotometric multicomponent analysis

    NARCIS (Netherlands)

    Faber, N.M.; Ferre, J.; Boque, R.; Kalivas, J.H.

    2003-01-01

    According to the latest recommendation of the International Union of Pure and Applied Chemistry, "selectivity refers to the extent to which the method can be used to determine particular analytes in mixtures or matrices without interferences from other components of similar behavior". Because of the

  13. Query-dependent banding (QDB) for faster RNA similarity searches.

    Directory of Open Access Journals (Sweden)

    Eric P Nawrocki

    2007-03-01

    Full Text Available When searching sequence databases for RNAs, it is desirable to score both primary sequence and RNA secondary structure similarity. Covariance models (CMs) are probabilistic models well-suited for RNA similarity search applications. However, the computational complexity of CM dynamic programming alignment algorithms has limited their practical application. Here we describe an acceleration method called query-dependent banding (QDB), which uses the probabilistic query CM to precalculate regions of the dynamic programming lattice that have negligible probability, independently of the target database. We have implemented QDB in the freely available Infernal software package. QDB reduces the average-case time complexity of CM alignment from LN^2.4 to LN^1.3 for a query RNA of N residues and a target database of L residues, resulting in a 4-fold speedup for typical RNA queries. Combined with other improvements to Infernal, including informative mixture Dirichlet priors on model parameters, benchmarks also show increased sensitivity and specificity resulting from improved parameterization.

  14. Quantifying uncertainty in nuclear analytical measurements

    International Nuclear Information System (INIS)

    2004-07-01

    The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general publication, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from the actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration

  15. Similarity, trust in institutions, affect, and populism

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Finucane, Melissa L.

    -based evaluations are fundamental to human information processing, they can contribute significantly to other judgments (such as the risk, cost-effectiveness, trustworthiness) of the same stimulus object. Although deliberation and analysis are certainly important in some decision-making circumstances, reliance...... on affect is a quicker, easier, and a more efficient way of navigating in a complex and uncertain world. Hence, many theorists give affect a direct and primary role in motivating behavior. Taken together, the results provide uncannily strong support for the value-similarity hypothesis, strengthening...... types of information about gene technology. The materials were attributed to different institutions. The results indicated that participants' trust in an institution was a function of the similarity between the position advocated in the materials and participants' own attitudes towards gene technology...

  16. Contingency and similarity in response selection.

    Science.gov (United States)

    Prinz, Wolfgang

    2018-05-09

    This paper explores issues of task representation in choice reaction time tasks. How is it possible, and what does it take, to represent such a task in a way that enables a performer to do the task in line with the prescriptions entailed in the instructions? First, a framework for task representation is outlined which combines the implementation of task sets and their use for performance with different kinds of representational operations (pertaining to feature compounds for event codes and code assemblies for task sets, respectively). Then, in a second step, the framework is itself embedded in the bigger picture of the classical debate on the roles of contingency and similarity for the formation of associations. The final conclusion is that both principles are needed and that the operation of similarity at the level of task sets requires and presupposes the operation of contingency at the level of event codes. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.

  17. Similarity and Modeling in Science and Engineering

    CERN Document Server

    Kuneš, Josef

    2012-01-01

    The present text sets itself in relief to other titles on the subject in that it addresses the means and methodologies versus a narrow specific-task oriented approach. Concepts and their developments which evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original expressed by the similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter: mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...

  18. Quantifying Urban Groundwater in Environmental Field Observatories

    Science.gov (United States)

    Welty, C.; Miller, A. J.; Belt, K.; Smith, J. A.; Band, L. E.; Groffman, P.; Scanlon, T.; Warner, J.; Ryan, R. J.; Yeskis, D.; McGuire, M. P.

    2006-12-01

    Despite the growing footprint of urban landscapes and their impacts on hydrologic and biogeochemical cycles, comprehensive field studies of urban water budgets are few. The cumulative effects of urban infrastructure (buildings, roads, culverts, storm drains, detention ponds, leaking water supply and wastewater pipe networks) on temporal and spatial patterns of groundwater stores, fluxes, and flowpaths are poorly understood. The goal of this project is to develop expertise and analytical tools for urban groundwater systems that will inform future environmental observatory planning and that can be shared with research teams working in urban environments elsewhere. The work plan for this project draws on a robust set of information resources in Maryland provided by ongoing monitoring efforts of the Baltimore Ecosystem Study (BES), USGS, and the U.S. Forest Service working together with university scientists and engineers from multiple institutions. A key concern is to bridge the gap between small-scale intensive field studies and larger-scale and longer-term hydrologic patterns using synoptic field surveys, remote sensing, numerical modeling, data mining and visualization tools. Using the urban water budget as a unifying theme, we are working toward estimating the various elements of the budget in order to quantify the influence of urban infrastructure on groundwater. 
Efforts include: (1) comparison of base flow behavior from stream gauges in a nested set of watersheds at four different spatial scales from 0.8 to 171 km2, with diverse patterns of impervious cover and urban infrastructure; (2) synoptic survey of well water levels to characterize the regional water table; (3) use of airborne thermal infrared imagery to identify locations of groundwater seepage into streams across a range of urban development patterns; (4) use of seepage transects and tracer tests to quantify the spatial pattern of groundwater fluxes to the drainage network in selected subwatersheds; (5

  19. Similarity solutions for phase-change problems

    Science.gov (United States)

    Canright, D.; Davis, S. H.

    1989-01-01

    A modification of Ivantsov's (1947) similarity solutions is proposed which can describe phase-change processes which are limited by diffusion. The method has application to systems that have n-components and possess cross-diffusion and Soret and Dufour effects, along with convection driven by density discontinuities at the two-phase interface. Local thermal equilibrium is assumed at the interface. It is shown that analytic solutions are possible when the material properties are constant.

  20. Stochastic self-similar and fractal universe

    International Nuclear Information System (INIS)

    Iovane, G.; Laserra, E.; Tortoriello, F.S.

    2004-01-01

    The structures formation of the Universe appears as if it were a classically self-similar random process at all astrophysical scales. An agreement is demonstrated for the present hypotheses of segregation with a size of astrophysical structures by using a comparison between quantum quantities and astrophysical ones. We present the observed segregated Universe as the result of a fundamental self-similar law, which generalizes the Compton wavelength relation. It appears that the Universe has a memory of its quantum origin as suggested by R. Penrose with respect to quasi-crystal. A more accurate analysis shows that the present theory can be extended from the astrophysical to the nuclear scale by using generalized (stochastically) self-similar random process. This transition is connected to the relevant presence of the electromagnetic and nuclear interactions inside the matter. In this sense, the presented rule is correct from a subatomic scale to an astrophysical one. We discuss the near full agreement at organic cell scale and human scale too. Consequently the Universe, with its structures at all scales (atomic nucleus, organic cell, human, planet, solar system, galaxy, clusters of galaxy, super clusters of galaxy), could have a fundamental quantum reason. In conclusion, we analyze the spatial dimensions of the objects in the Universe as well as space-time dimensions. The result is that it seems we live in an El Naschie's E-infinity Cantorian space-time; so we must seriously start considering fractal geometry as the geometry of nature, a type of arena where the laws of physics appear at each scale in a self-similar way as advocated long ago by the Swedish school of astrophysics

  1. Similarity-based Polymorphic Shellcode Detection

    Directory of Open Access Journals (Sweden)

    Denis Yurievich Gamayunov

    2013-02-01

    Full Text Available In this work, a method for polymorphic shellcode detection based on a set of known shellcodes is proposed. The method's main idea is to sequentially apply deobfuscating transformations to the analyzed data and then recognize similarity with malware samples. The method has been tested on sets of shellcodes generated using Metasploit Framework v.4.1.0 and PELock Obfuscator, and it shows 87% precision with a zero false-positive rate.

  2. The fluid similarity of the boiling crisis

    International Nuclear Information System (INIS)

    Katsaounis, A.

    1986-01-01

    Most measurements related to the boiling crisis have until now been undertaken over a wide parameter variation in water, mainly in connection with water-cooled reactors. This article investigates whether, and how, the measured results can be transferred to other fluids. Dimensionless similarity numbers, both newly derived and taken from the literature, are verified against measurements from complex geometries in water and Freon 12. (orig.) [de

  3. The fluid similarity of the boiling crisis

    International Nuclear Information System (INIS)

    Katsaounis, A.

    1987-01-01

    Most measurements related to the boiling crisis have until now been undertaken over a wide parameter variation in water, mainly in connection with water-cooled reactors. This article investigates whether, and how, the measured results can be transferred to other fluids. Dimensionless similarity numbers, both newly derived and taken from the literature, are verified against measurements from complex geometries in water and Freon 12. (orig./GL) [de

  4. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

    The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and relationships between concepts, but the available results still fall short of user requirements. This paper proposes a novel technique for computing the semantic similarity between documents that takes into account not only the concepts present in the documents but also the relationships between those concepts. In our approach, documents are processed by constructing an ontology for each document from a base ontology and a dictionary of concept records, each record consisting of the probable words that represent a given concept. Finally, the document ontologies are compared to determine their semantic similarity, taking the relationships among concepts into account. Relevant concepts and relations between the concepts are discovered by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.

  5. Semantic Similarity between Web Documents Using Ontology

    Science.gov (United States)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-03-01

    The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and relationships between concepts, but the available results still fall short of user requirements. This paper proposes a novel technique for computing the semantic similarity between documents that takes into account not only the concepts present in the documents but also the relationships between those concepts. In our approach, documents are processed by constructing an ontology for each document from a base ontology and a dictionary of concept records, each record consisting of the probable words that represent a given concept. Finally, the document ontologies are compared to determine their semantic similarity, taking the relationships among concepts into account. Relevant concepts and relations between the concepts are discovered by capturing author and user intention. The proposed semantic analysis technique provides improved results compared to existing techniques.

  6. THE SEGUE K GIANT SURVEY. III. QUANTIFYING GALACTIC HALO SUBSTRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Janesh, William; Morrison, Heather L.; Ma, Zhibo; Harding, Paul [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); Rockosi, Constance [UCO/Lick Observatory, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Starkenburg, Else [Department of Physics and Astronomy, University of Victoria, P.O. Box 1700, STN CSC, Victoria BC V8W 3P6 (Canada); Xue, Xiang Xiang; Rix, Hans-Walter [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Beers, Timothy C. [Department of Physics and JINA Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN 46556 (United States); Johnson, Jennifer [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Lee, Young Sun [Department of Astronomy and Space Science, Chungnam National University, Daejeon 34134 (Korea, Republic of); Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2016-01-10

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5–125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey’s Sloan Extension for Galactic Understanding and Exploration project. Using a position–velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (∼33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity.

  7. Quantified social and aesthetic values in environmental decision making

    International Nuclear Information System (INIS)

    Burnham, J.B.; Maynard, W.S.; Jones, G.R.

    1975-01-01

    A method has been devised for quantifying the social criteria to be considered when selecting a nuclear design and/or site option. Community judgement of social values is measured directly and indirectly on eight siting factors. These same criteria are independently analysed by experts using techno-economic methods. The combination of societal and technical indices yields a weighted score for each alternative. The aesthetic impact was selected as the first to be quantified. A visual quality index was developed to measure the change in the visual quality of a viewscape caused by construction of a facility. Visual quality was measured by reducing it to its component parts (intactness, vividness and unity) and rating each part with and without the facility. Urban planners and landscape architects used the technique to analyse three viewscapes, testing three different methods on each. The three methods used the same aesthetic elements but varied in detail and depth. As expected, the technique with the greatest analytical detail (and the least subjective judgement) was the most reliable. Social value judgements were measured by social psychologists using a questionnaire technique, with a number of design and site options illustrating the range of criteria. Three groups of predictably different respondents (environmentalists, high-school students and businessmen) were selected. The three groups' response patterns were remarkably similar, though businessmen were consistently more biased towards nuclear power than were environmentalists. Correlational and multiple regression analyses provided indirect estimates of the relative importance of each impact category. Only the environmentalists showed a high correlation between the two methods, which is partially explained by their interest in and knowledge of the subject. The regression analysis also encounters problems when small samples are used, and the environmental sample was considerably larger than the other two.

  8. Quantifying and Mapping Global Data Poverty.

    Science.gov (United States)

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.
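    The abstract does not give the DPI's actual indicator weighting, so as a hedged sketch only: a composite index of this kind is commonly built by min-max normalizing each indicator across countries and averaging the normalized values (the indicator names, values, and equal weights below are illustrative assumptions, not the study's data).

```python
# Hypothetical sketch of a composite data-poverty-style index.
# Indicators, values, and equal weighting are assumptions for illustration;
# the DPI's real construction is described in the cited study.
def normalize(values):
    """Min-max normalize raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators):
    """Average the normalized indicators for each country.

    indicators: dict mapping indicator name -> list of values, one per country.
    Returns one score per country in [0, 1]; higher = better data access.
    """
    normalized = [normalize(vals) for vals in indicators.values()]
    n = len(normalized)
    return [sum(col) / n for col in zip(*normalized)]

# Three countries, three made-up indicators:
scores = composite_index({
    "internet_speed_mbps": [2.0, 20.0, 50.0],
    "mobile_coverage_pct": [40.0, 80.0, 95.0],
    "tertiary_enrolment_pct": [10.0, 35.0, 60.0],
})
# The worst-off country scores 0.0 and the best-off scores 1.0 by construction.
```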

  9. Stimfit: quantifying electrophysiological data with Python

    Directory of Open Access Journals (Sweden)

    Segundo Jose Guzman

    2014-02-01

    Full Text Available Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals.

  10. Quantifying capital goods for waste incineration.

    Science.gov (United States)

    Brogaard, L K; Riber, C; Christensen, T H

    2013-06-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data were collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted over the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% in terms of kg CO2 per tonne of waste combusted. Copyright © 2013 Elsevier Ltd. All rights reserved.
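    The per-tonne figure follows from simple amortization: total embodied construction emissions divided by lifetime throughput. A minimal sketch with assumed inputs (the capacity, lifetime, and embodied-CO2 figures below are illustrative, not the plants' actual values):

```python
# Amortizing construction (capital goods) emissions over lifetime throughput.
# All numeric inputs below are assumptions for illustration.
def co2_per_tonne(construction_co2_kg, annual_capacity_t, lifetime_years):
    """kg CO2 attributable to capital goods per tonne of waste combusted."""
    lifetime_throughput_t = annual_capacity_t * lifetime_years
    return construction_co2_kg / lifetime_throughput_t

# A hypothetical 150,000 t/yr plant over 20 years with 3.0e7 kg embodied CO2:
result = co2_per_tonne(3.0e7, 150_000, 20)  # -> 10.0 kg CO2 per tonne
```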

  11. Fluorescence imaging to quantify crop residue cover

    Science.gov (United States)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long-wave ultraviolet (UV) radiation, and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band-pass filter. A light colored soil and a dark colored soil were used as backgrounds for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
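    The cover estimate described above reduces to counting the fraction of pixels whose fluorescence exceeds a threshold. A minimal sketch (the threshold value and the toy image are assumptions, not the study's calibration):

```python
import numpy as np

def residue_cover_fraction(image, threshold):
    """Fraction of pixels classified as residue (fluorescence > threshold).

    image: 2-D array of 8-bit fluorescence gray levels (0-255).
    """
    return float(np.count_nonzero(image > threshold)) / image.size

# Toy 2x2 "image": two bright residue pixels, two dark soil pixels.
img = np.array([[200, 15], [180, 10]], dtype=np.uint8)
cover = residue_cover_fraction(img, threshold=50)  # -> 0.5
```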

  12. Quantifying Anthropogenic Stress on Groundwater Resources.

    Science.gov (United States)

    Ashraf, Batool; AghaKouchak, Amir; Alizadeh, Amin; Mousavi Baygi, Mohammad; R Moftakhari, Hamed; Mirchi, Ali; Anjileli, Hassan; Madani, Kaveh

    2017-10-10

    This study explores a general framework for quantifying anthropogenic influences on the groundwater budget based on normalized human outflow (h_out) and inflow (h_in). The framework is useful for sustainability assessment of groundwater systems and allows investigating the effects of different human water abstraction scenarios on the overall aquifer regime (e.g., depleted, natural flow-dominated, and human flow-dominated). We apply this approach to selected regions in the USA, Germany and Iran to evaluate the current aquifer regime. We subsequently present two scenarios of changes in human water withdrawals and return flow to the system (individually and combined). Results show that approximately one-third of the selected aquifers in the USA, and half of the selected aquifers in Iran, are dominated by human activities, while the selected aquifers in Germany are natural flow-dominated. The scenario analysis results also show that reduced human withdrawals could help change the regime in some aquifers. For instance, in two of the selected USA aquifers, a decrease in anthropogenic influences by ~20% may change the condition from a depleted regime to a natural flow-dominated regime. We specifically highlight a trending threat to the sustainability of groundwater in northwest Iran and California, and the need for more careful assessment and monitoring practices as well as strict regulations to mitigate the negative impacts of groundwater overexploitation.

  13. Quantifying Supply Risk at a Cellulosic Biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jason K [Idaho National Laboratory; Jacobson, Jacob Jordan [Idaho National Laboratory; Cafferty, Kara Grace [Idaho National Laboratory; Lamers, Patrick [Idaho National Laboratory; Roni, MD S [Idaho National Laboratory

    2015-03-01

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., and that limit has already been reached. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry, where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk faced by a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships among the underlying, uncertain, supply-driving variables. Using the model, the paper quantifies the supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing the desired growth.

  14. Quantifying Supply Risk at a Cellulosic Biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jason K.; Jacobson, Jacob J.; Cafferty, Kara G.; Lamers, Patrick; Roni, Mohammad S.

    2015-07-01

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., and that limit has already been reached. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry, where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk faced by a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships among the underlying, uncertain, supply-driving variables. Using the model, the paper quantifies the supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing the desired growth.

  15. Quantifying and Mapping Global Data Poverty.

    Directory of Open Access Journals (Sweden)

    Mathias Leidig

    Full Text Available Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.

  16. Stacking interactions between carbohydrate and protein quantified by combination of theoretical and experimental methods.

    Directory of Open Access Journals (Sweden)

    Michaela Wimmerová

    Full Text Available Carbohydrate-receptor interactions are an integral part of biological events. They play an important role in many cellular processes, such as cell-cell adhesion, cell differentiation and in-cell signaling. Carbohydrates can interact with a receptor through several types of intermolecular interactions. One of the most important is the interaction of a carbohydrate's apolar part with aromatic amino acid residues, known as the dispersion or CH/π interaction. In the study presented here, we attempted for the first time to quantify how the CH/π interaction contributes to the more general carbohydrate-protein interaction. We used a combined approach, creating single and double point mutants experimentally and applying high-level computational methods, to Ralstonia solanacearum (RSL) lectin complexes with α-L-Me-fucoside. Experimentally measured binding affinities were compared with computed carbohydrate-aromatic amino acid residue interaction energies. Experimental binding affinities for the RSL wild type, phenylalanine and alanine mutants were -8.5, -7.1 and -4.1 kcal·mol⁻¹, respectively. These affinities agree with the computed dispersion interaction energies between the carbohydrate and the aromatic amino acid residues for the RSL wild type and the phenylalanine mutant, with values of -8.8 and -7.9 kcal·mol⁻¹, the exception being the alanine mutant, where the interaction energy was -0.9 kcal·mol⁻¹. Molecular dynamics simulations show that this discrepancy can be caused by the creation of a new hydrogen bond between the α-L-Me-fucoside and RSL. The observed results suggest that in this and similar cases the carbohydrate-receptor interaction can be driven mainly by the dispersion interaction.

  17. Quantifying the Relationship Between Curvature and Electric Potential in Lipid Bilayers

    DEFF Research Database (Denmark)

    Bruhn, Dennis Skjøth; Lomholt, Michael Andersen; Khandelia, Himanshu

    2016-01-01

    Cellular membranes mediate vital cellular processes by being subject to curvature and transmembrane electrical potentials. Here we build upon the existing theory for flexoelectricity in liquid crystals to quantify the coupling between lipid bilayer curvature and membrane potentials. Using molecular dynamics simulations, we show that head group dipole moments, the lateral pressure profile across the bilayer and spontaneous curvature all systematically change with increasing membrane potentials. In particular, there is a linear dependence between the bending moment (the product of bending rigidity...

  18. A stochastic approach for quantifying immigrant integration: the Spanish test case

    Science.gov (United States)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.

  19. A stochastic approach for quantifying immigrant integration: the Spanish test case

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-01-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999–2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build. (paper)

  20. Quantifying Transmission Investment in Malaria Parasites.

    Directory of Open Access Journals (Sweden)

    Megan A Greischar

    2016-02-01

    Full Text Available Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment.

  1. Emergent self-similarity of cluster coagulation

    Science.gov (United States)

    Pushkin, Dmitri O.

    A wide variety of nonequilibrium processes, such as coagulation of colloidal particles, aggregation of bacteria into colonies, coalescence of rain drops, bond formation between polymerization sites, and formation of planetesimals, fall under the rubric of cluster coagulation. We predict the emergence of self-similar behavior in such systems when they are 'forced' by an external source of the smallest particles. The corresponding self-similar coagulation spectra prove to be power laws. Starting from the classical Smoluchowski coagulation equation, we identify the conditions required for the emergence of self-similarity and show that the power-law exponent for a particular coagulation mechanism depends only on the homogeneity index of the corresponding coagulation kernel. Next, we consider the current wave of mergers of large American banks as an 'unorthodox' application of coagulation theory. We predict that the bank size distribution has a propensity to become a power law, and verify our prediction in a statistical study of the available economic data. We conclude this chapter by discussing the economically significant phenomenon of capital condensation and predicting the emergence of power-law distributions in other economic and social data. Finally, we turn to the apparent resemblance between cluster coagulation and turbulence and conclude that it is not accidental: both of these processes are instances of nonlinear cascades. This class of processes also includes river network formation models, certain force-chain models in granular mechanics, fragmentation due to collisional cascades, percolation, and growing random networks. We characterize a particular cascade by three indices and show that the resulting power-law spectrum exponent depends only on the values of these indices. The ensuing algebraic formula is remarkable for its simplicity.

  2. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents and genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained considerable interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
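
    The core idea of referential compression can be sketched in a few lines. The toy encoder below is not FRESCO's actual algorithm: it greedily replaces stretches of the input with (position, length, literal) references into the reference sequence, using a naive quadratic match search where real tools use indexed structures such as suffix arrays.

```python
def ref_compress(reference, target):
    """Encode `target` as (ref_pos, match_len, literal) triples."""
    out, i = [], 0
    while i < len(target):
        best_pos, best_len = 0, 0
        # Naive longest-match search against the reference.
        for j in range(len(reference)):
            l = 0
            while (j + l < len(reference) and i + l < len(target)
                   and reference[j + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        literal = target[i + best_len] if i + best_len < len(target) else ""
        out.append((best_pos, best_len, literal))
        i += best_len + 1
    return out

def ref_decompress(reference, triples):
    return "".join(reference[p:p + l] + c for p, l, c in triples)

ref = "ACGTACGTGGAACGT"
tgt = "ACGTACGAGGAACGT"   # one substitution relative to the reference
code = ref_compress(ref, tgt)
```

    For the highly similar pair above, the whole target collapses to two triples, which is the source of the large compression ratios reported for genome collections.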

  3. Spherically symmetric self-similar universe

    Energy Technology Data Exchange (ETDEWEB)

    Dyer, C C [Toronto Univ., Ontario (Canada)

    1979-10-01

    A spherically symmetric self-similar dust-filled universe is considered as a simple model of a hierarchical universe. Observable differences between the model in parabolic expansion and the corresponding homogeneous Einstein-de Sitter model are considered in detail. It is found that an observer at the centre of the distribution has a maximum observable redshift and can in principle see arbitrarily large blueshifts. The model is found to yield an observed density-distance law different from that suggested by the observations of de Vaucouleurs. The use of these solutions as central objects for Swiss-cheese vacuoles is discussed.

  4. Image magnification based on similarity analogy

    International Nuclear Information System (INIS)

    Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua

    2009-01-01

    Aiming at the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper, which has the advantage of iteration-free decoding, by using the similarity analogy between an image and its zoom-out and zoom-in. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining some existing fractal zooming techniques, an efficient image magnification algorithm is obtained, which provides image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.

  5. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre as a multifaceted auditory attribute appears to be a plausible candidate for the type of features that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method for exploring to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables with partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets, mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability, best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R²) between observations and model predictions.
Overall, the results of this study empirically demonstrate that both acoustic features related
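
    The modeling pipeline described above can be sketched with synthetic data. Everything below is invented for illustration: the feature values are random stand-ins for timbre descriptors, the "ground truth" similarities are synthetic, and ordinary least squares stands in for the paper's partial least-squares regression.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(16, 5))   # 16 clips x 5 mock timbre descriptors

# Predictors: per-descriptor absolute differences for each clip pair.
pairs = [(i, j) for i in range(16) for j in range(i + 1, 16)]
X = np.array([np.abs(features[i] - features[j]) for i, j in pairs])

# Synthetic "judged similarity": decays with overall feature distance.
y = np.exp(-np.linalg.norm(X, axis=1))

# Least-squares fit and variance explained (R^2) on the training pairs.
Xd = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
pred = Xd @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

    In the study itself, the predictors were distances in MFCC and timbre-descriptor space and the responses were aggregated listener judgments; the scaffolding above only shows the shape of the regression problem.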

  6. Similar on the Inside (pre-grinding)

    Science.gov (United States)

    2004-01-01

    This approximate true-color image taken by the panoramic camera on the Mars Exploration Rover Opportunity shows the rock called 'Pilbara' located in the small crater dubbed 'Fram.' The rock appears to be dotted with the same 'blueberries,' or spherules, found at 'Eagle Crater.' Opportunity drilled into this rock with its rock abrasion tool. After analyzing the hole with the rover's scientific instruments, scientists concluded that Pilbara has a similar chemical make-up, and thus watery past, to rocks studied at Eagle Crater. This image was taken with the panoramic camera's 480-, 530- and 600-nanometer filters.

  7. Similar on the Inside (post-grinding)

    Science.gov (United States)

    2004-01-01

    This approximate true-color image taken by the panoramic camera on the Mars Exploration Rover Opportunity shows the hole drilled into the rock called 'Pilbara,' which is located in the small crater dubbed 'Fram.' Opportunity drilled into this rock with its rock abrasion tool. The rock appears to be dotted with the same 'blueberries,' or spherules, found at 'Eagle Crater.' After analyzing the hole with the rover's scientific instruments, scientists concluded that Pilbara has a similar chemical make-up, and thus watery past, to rocks studied at Eagle Crater. This image was taken with the panoramic camera's 480-, 530- and 600-nanometer filters.

  8. Self-similar magnetohydrodynamic boundary layers

    Energy Technology Data Exchange (ETDEWEB)

    Nunez, Manuel; Lastra, Alberto, E-mail: mnjmhd@am.uva.e [Departamento de Analisis Matematico, Universidad de Valladolid, 47005 Valladolid (Spain)

    2010-10-15

    The boundary layer created by parallel flow in a magnetized fluid of high conductivity is considered in this paper. Under appropriate boundary conditions, self-similar solutions analogous to the ones studied by Blasius for the hydrodynamic problem may be found. It is proved that for these to be stable, the size of the Alfven velocity at the outer flow must be smaller than the flow velocity, a fact that has a ready physical explanation. The process by which the transverse velocity and the thickness of the layer grow with the size of the Alfven velocity is detailed.

  9. Self-similar magnetohydrodynamic boundary layers

    International Nuclear Information System (INIS)

    Nunez, Manuel; Lastra, Alberto

    2010-01-01

    The boundary layer created by parallel flow in a magnetized fluid of high conductivity is considered in this paper. Under appropriate boundary conditions, self-similar solutions analogous to the ones studied by Blasius for the hydrodynamic problem may be found. It is proved that for these to be stable, the size of the Alfven velocity at the outer flow must be smaller than the flow velocity, a fact that has a ready physical explanation. The process by which the transverse velocity and the thickness of the layer grow with the size of the Alfven velocity is detailed.

  10. [Similarity system theory to evaluate similarity of chromatographic fingerprints of traditional Chinese medicine].

    Science.gov (United States)

    Liu, Yongsuo; Meng, Qinghua; Jiang, Shumin; Hu, Yuzhu

    2005-03-01

    The similarity evaluation of fingerprints is one of the most important problems in the quality control of traditional Chinese medicine (TCM). Similarity measures used to evaluate the similarity of the common peaks in the chromatogram of TCM are discussed. Comparative studies were carried out among the correlation coefficient, the cosine of the angle, and an improved extent similarity method, using simulated and experimental data. The correlation coefficient and the cosine of the angle are not sensitive to differences between data sets, even after normalization. Based on similarity system theory, an improved extent similarity method was proposed. The improved extent similarity is more sensitive to differences between data sets than the correlation coefficient and the cosine of the angle, and, unlike log-transformation, it does not require altering the character of the data sets. The improved extent similarity can be used to evaluate the similarity of chromatographic fingerprints of TCM.
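
    The scale-invariance problem is easy to demonstrate. In the sketch below, which is not taken from the paper, a proportional change in all peak areas leaves both the cosine of the angle and the correlation coefficient at exactly 1, while a magnitude-aware measure (an illustrative stand-in for the improved extent similarity, whose formula the abstract does not give) registers the difference.

```python
import numpy as np

a = np.array([10.0, 20.0, 30.0, 40.0])  # peak areas of a reference fingerprint
b = 1.5 * a                             # same pattern, all areas 50% larger

# Both classical measures are scale-invariant, so they report identity.
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
corr = np.corrcoef(a, b)[0, 1]

# A magnitude-aware alternative: penalize absolute peak-area differences.
extent = 1 - np.abs(a - b).sum() / (a + b).sum()
```

    Here `extent` drops to 0.8 for the scaled fingerprint while `cosine` and `corr` stay at 1, which is the insensitivity the authors criticize.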

  11. Spatially quantifying the leadership effectiveness in collective behavior

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Haitao [State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); Wang Ning [Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Michael Z Q [Department of Mechanical Engineering, University of Hong Kong, Pok Fu Lam Road, Hong Kong (Hong Kong); Su Riqi; Zhou Tao [Department of Modern Physics, University of Science and Technology of China, Hefei 230026 (China); Zhou Changsong, E-mail: zht@mail.hust.edu.cn, E-mail: cszhou@hkbu.edu.hk, E-mail: zhutou@ustc.edu [Department of Physics, Centre for Nonlinear Studies, and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Hong Kong Baptist University, Kowloon Tong (Hong Kong)

    2010-12-15

    Among natural biological flocks/swarms or mass social activities, when the collective behavior of the followers is dominated by the direction or opinion of one leader group, it seems difficult for later-coming leaders to reverse the orientation of the mass followers, especially when they are in a quantitative minority. This paper, however, reports a counter-intuitive phenomenon, Following the Later-coming Minority, provided that the later-comers obey a favorable distribution pattern that enables them to spread their influence to as many followers as possible within a given time and to be dense enough from the beginning to govern the local followers they can influence directly. We introduce a discriminant index to quantify the whole group's orientation under competing leaderships, with which the eventual orientation of the mass followers can be predicted before the actual dynamical process is run. From an application point of view, this leadership effectiveness index also helps us to design an economical way for the minority of later-coming leaders to defeat the dominating majority leaders solely by optimizing their spatial distribution pattern, provided that the premeditated goal is available. Our investigation provides insights into effective leadership in biological systems, with meaningful implications for social and industrial applications.
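
    A toy version of such a discriminant index might project each follower's heading onto the two competing leader directions and compare the means. The paper's actual index is not specified in the abstract, so the formula and data below are assumptions for illustration only.

```python
import numpy as np

def discriminant(headings, dir_a, dir_b):
    """Positive -> the group follows leader direction A; negative -> B."""
    headings = headings / np.linalg.norm(headings, axis=1, keepdims=True)
    return float(np.mean(headings @ dir_a) - np.mean(headings @ dir_b))

dir_a = np.array([1.0, 0.0])   # direction of the first leader group
dir_b = np.array([0.0, 1.0])   # direction of the later-coming leaders
rng = np.random.default_rng(2)
flock = dir_a + 0.1 * rng.normal(size=(100, 2))  # followers near dir_a
score = discriminant(flock, dir_a, dir_b)
```

    The sign of `score` plays the role of the predicted eventual orientation; in the paper the index is evaluated before running the full dynamics.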

  12. Quantifying mechanical force in axonal growth and guidance

    Directory of Open Access Journals (Sweden)

    Ahmad Ibrahim Mahmoud Athamneh

    2015-09-01

    Mechanical force plays a fundamental role in neuronal development, physiology, and regeneration. In particular, research has shown that force is involved in growth cone-mediated axonal growth and guidance as well as stretch-induced elongation when an organism increases in size after forming initial synaptic connections. However, many of the details about the exact role of force in these fundamental processes remain unknown. In this review, we highlight (1) open questions concerning the role of mechanical force in axonal growth and guidance and (2) different experimental techniques used to quantify forces in axons and growth cones. We believe that satisfying answers to these questions will require quantitative information about the relationship between elongation, forces, cytoskeletal dynamics, axonal transport, signaling, substrate adhesion, and stiffness contributing to directional growth advance. Furthermore, we address why a wide range of force values have been reported in the literature, and what these values mean in the context of neuronal mechanics. We hope that this review will provide a guide for those interested in studying the role of force in the development and regeneration of neuronal networks.

  13. Spatially quantifying the leadership effectiveness in collective behavior

    International Nuclear Information System (INIS)

    Zhang Haitao; Wang Ning; Chen, Michael Z Q; Su Riqi; Zhou Tao; Zhou Changsong

    2010-01-01

    Among natural biological flocks/swarms or mass social activities, when the collective behavior of the followers is dominated by the direction or opinion of one leader group, it seems difficult for later-coming leaders to reverse the orientation of the mass followers, especially when they are in a quantitative minority. This paper, however, reports a counter-intuitive phenomenon, Following the Later-coming Minority, provided that the later-comers obey a favorable distribution pattern that enables them to spread their influence to as many followers as possible within a given time and to be dense enough from the beginning to govern the local followers they can influence directly. We introduce a discriminant index to quantify the whole group's orientation under competing leaderships, with which the eventual orientation of the mass followers can be predicted before the actual dynamical process is run. From an application point of view, this leadership effectiveness index also helps us to design an economical way for the minority of later-coming leaders to defeat the dominating majority leaders solely by optimizing their spatial distribution pattern, provided that the premeditated goal is available. Our investigation provides insights into effective leadership in biological systems, with meaningful implications for social and industrial applications.

  14. Quantifying the abnormal hemodynamics of sickle cell anemia

    Science.gov (United States)

    Lei, Huan; Karniadakis, George

    2012-02-01

    Sickle red blood cells (SS-RBC) exhibit heterogeneous morphologies and abnormal hemodynamics in deoxygenated states. A multi-scale model for SS-RBC is developed based on the Dissipative Particle Dynamics (DPD) method. Different cell morphologies (sickle, granular, elongated shapes) typically observed in deoxygenated states are constructed and quantified by the asphericity and elliptical shape factors. The hemodynamics of SS-RBC suspensions is studied in both shear and pipe flow systems. The flow resistance obtained from both systems is larger than that of healthy blood flow due to the abnormal cell properties. Moreover, SS-RBCs exhibit abnormal adhesive interactions with both the vessel endothelium cells and the leukocytes. The effect of these abnormal adhesive interactions on the hemodynamics of sickle blood is investigated using the current model. It is found that both the SS-RBC-endothelium and the SS-RBC-leukocyte interactions can potentially trigger the vicious ``sickling and entrapment'' cycles, resulting in the vaso-occlusion phenomena widely observed in micro-circulation experiments.
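
    One of the shape descriptors mentioned, the asphericity, can be sketched from the eigenvalues of a particle cloud's gyration tensor. The normalization below is a common convention in polymer and cell-shape analysis; whether it matches the paper's exact definition is an assumption, and the point clouds are synthetic.

```python
import numpy as np

def asphericity(points):
    """Normalized asphericity in [0, 1] from the gyration tensor."""
    centered = points - points.mean(axis=0)
    gyration = centered.T @ centered / len(points)
    l1, l2, l3 = np.linalg.eigvalsh(gyration)
    return (((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
            / (2 * (l1 + l2 + l3) ** 2))

rng = np.random.default_rng(1)
round_cell = rng.normal(size=(500, 3))              # roughly spherical cloud
elongated = round_cell * np.array([4.0, 1.0, 1.0])  # sickle-like stretching
```

    A near-spherical cloud scores close to 0 and a strongly elongated one close to 1, which is how such a factor separates granular from sickle and elongated morphologies.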

  15. Quantifying the Benefits of Combining Offshore Wind and Wave Energy

    Science.gov (United States)

    Stoutenburg, E.; Jacobson, M. Z.

    2009-12-01

    For many locations the offshore wind resource and the wave energy resource are collocated, which suggests a natural synergy if both technologies are combined into one offshore marine renewable energy plant. Initial meteorological assessments of the western coast of the United States suggest only a weak correlation between hourly wind and wave power levels, associated with the large ocean-basin wave dynamics and storm systems of the North Pacific. This finding indicates that combining the two power sources could reduce the variability in electric power output from a combined wind and wave offshore plant. A combined plant is modeled with offshore wind turbines and Pelamis wave energy converters, using wind and wave data from meteorological buoys operated by the US National Data Buoy Center off the coasts of California, Oregon, and Washington. This study presents results quantifying the benefits of combining wind and wave energy for the electrical power system, to facilitate increased renewable energy penetration and support reductions in greenhouse gas emissions and the air and water pollution associated with conventional fossil fuel power plants.
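
    The variability argument can be illustrated with synthetic series: when two power sources share only a weak common component, their sum has a lower coefficient of variation than either source alone. The series below are invented, not buoy records, and the correlation strength is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
common = rng.normal(size=2000)            # weak shared weather driver
wind = 5.0 + 0.3 * common + rng.normal(size=2000)
wave = 5.0 + 0.3 * common + rng.normal(size=2000)

def cv(x):
    """Coefficient of variation: relative variability of a power series."""
    return x.std() / x.mean()

combined = wind + wave
```

    Because the two series are only weakly correlated, fluctuations partially cancel in `combined`, so `cv(combined)` falls below the coefficient of variation of either source on its own.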

  16. Parkinson's Law quantified: three investigations on bureaucratic inefficiency

    Science.gov (United States)

    Klimek, Peter; Hanel, Rudolf; Thurner, Stefan

    2009-03-01

    We formulate three of Parkinson's famous descriptive essays on bureaucratic inefficiency in a quantifiable and dynamical socio-physical framework. In the first model we show how recent opinion formation models for small groups can be used to understand Parkinson's observation that decision-making bodies such as cabinets or boards become highly inefficient once their size exceeds a critical 'Coefficient of Inefficiency', typically around 20. A second observation of Parkinson, sometimes referred to as Parkinson's Law, is that the growth of bureaucratic or administrative bodies usually goes hand in hand with a drastic decrease in overall efficiency. In our second model we view a bureaucratic body as a system with a flow of workers, who enter, are promoted to various internal levels within the system over time, and leave the system after having served for a certain time. Promotion is usually associated with an increase in subordinates. Within the proposed model it becomes possible to work out the phase diagram for the conditions under which bureaucratic growth can be confined. In our last model we assign individual efficiency curves to workers throughout their life in administration, and compute the optimum time to grant them the old-age pension in order to ensure maximum efficiency within the body; in Parkinson's words, we compute the 'Pension Point'.

  17. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

    Background: Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results: Three sets of real data have been used as input for the example applications. Set I includes the structures from 5S rRNAs. Set II includes the secondary structures from RNase P and RNase MRP. Set III includes the structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data by using our method. Moreover, our program runs faster than some existing ones. Conclusion: The famous Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures, which makes our method an alternative for analyzing the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
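
    The Lempel-Ziv idea can be sketched on dot-bracket strings, the usual flat encoding of RNA secondary structure. The phrase-counting parse and the distance below follow the general LZ-based sequence-distance literature (an Otu-Sayood style formula); whether this matches the paper's exact construction is an assumption.

```python
def lz_phrases(s):
    """Number of distinct phrases in a simple LZ78-style parse of s."""
    seen, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def lz_distance(s, q):
    """LZ-based distance: how much new structure each string adds to the other."""
    cs, cq = lz_phrases(s), lz_phrases(q)
    return max(lz_phrases(s + q) - cs, lz_phrases(q + s) - cq) / max(cs, cq)

hairpin = "((..))"      # dot-bracket string of a tiny hairpin
unpaired = "......"     # fully unpaired structure of the same length
```

    Concatenating a structure with a similar one adds few new phrases, so similar structures get small distances; a matrix of such distances can then feed standard tree-building methods.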

  18. Quantifying Riverscape Connectivity with Graph Theory

    Science.gov (United States)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels, whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp, but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have been only a few studies reporting quantitative indices of connectivity in the river sciences, notably in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in the fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed, and in many countries dams are actually being removed. However, this is not the case in the developing world, where hydropower is seen as a key element of low-emissions power security. For example, several dam projects are envisaged in Himalayan catchments in the next two decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the
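
    One simple graph-theoretic index of the kind borrowed from landscape ecology is the fraction of node pairs that remain mutually reachable. The sketch below is an undirected simplification of a directed channel network, invented for illustration: removing one edge plays the role of a dam fragmenting the catchment.

```python
from itertools import combinations

def connected_pairs_fraction(nodes, edges):
    """Fraction of node pairs in the same connected component (union-find)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    n = len(nodes)
    linked = sum(find(a) == find(b) for a, b in combinations(nodes, 2))
    return linked / (n * (n - 1) / 2)

# A small dendritic network; removing edge (2, 4) mimics a dam.
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (3, 2), (2, 4), (5, 4), (4, 6)]
before = connected_pairs_fraction(nodes, edges)
after = connected_pairs_fraction(nodes, [e for e in edges if e != (2, 4)])
```

    The drop from `before` to `after` is a crude connectivity-loss signal of the sort the landscape-ecology indices formalize.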

  19. Dynamical Heterogeneity in Granular Fluids and Structural Glasses

    Science.gov (United States)

    Avila, Karina E.

    Our current understanding of the dynamics of supercooled liquids and other similar slowly evolving (glassy) systems is rather limited. One aspect that is particularly poorly understood is the origin and behavior of the strong, nontrivial fluctuations that appear in the relaxation process toward equilibrium. Glassy systems and granular systems both present regions of particles moving cooperatively and at different rates from other regions. This phenomenon is known as spatially heterogeneous dynamics. A detailed explanation of this phenomenon may lead to a better understanding of the slow relaxation process, and perhaps it could even help to explain the presence of the glass transition. This dissertation concentrates on studying dynamical heterogeneity by analyzing simulation data for models of granular materials and structural glasses. For dissipative granular fluids, the growing behavior of dynamical heterogeneities is studied for different densities and different degrees of inelasticity in the particle collisions. The correlated regions are found to grow rapidly as the system approaches dynamical arrest. Their geometry is conserved even when probing at different cutoff lengths in the correlation function or when the energy dissipation in the system is increased. For structural glasses, I test a theoretical framework that models dynamical heterogeneity as originating in the presence of Goldstone modes, which emerge from a broken continuous time-reparametrization symmetry. This analysis is based on quantifying the size and the spatial correlations of fluctuations in the time variable and of other kinds of fluctuations. The results obtained here agree with the predictions of the hypothesis. In particular, the fluctuations associated with time-reparametrization invariance become stronger at low temperatures, long timescales, and large coarse-graining lengths.
Overall, this research suggests that dynamical heterogeneity in granular systems can be described similarly to

  20. Quantifying Sentiment and Influence in Blogspaces

    Energy Technology Data Exchange (ETDEWEB)

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

    The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as an informal collection of mass sentiment and opinion. An obvious topic of interest is to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics, poses serious challenges for any meaningful analysis. Namely, the fact that virtually anyone with access to the Internet can author a blog raises the serious issue of credibility: should some blogs be considered more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on an almost constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific end goal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.
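
    An influence-weighted sentiment measure for a single topic can be sketched as a weighted average. The paper's formal model is richer (topic hierarchies, incremental update bounds), and the weights and values below are invented for illustration.

```python
def weighted_sentiment(posts):
    """posts: list of (sentiment in [-1, 1], influence weight > 0)."""
    total = sum(w for _, w in posts)
    return sum(s * w for s, w in posts) / total

posts = [(+1.0, 5.0),   # a highly influential positive post
         (-1.0, 1.0),   # a low-influence negative post
         (+0.5, 2.0)]   # a moderately influential, mildly positive post
score = weighted_sentiment(posts)
```

    Keeping the running numerator and denominator as two counters makes each new post an O(1) update, which is the kind of efficiency bound the paper formalizes for the full topic hierarchy.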