WorldWideScience

Sample records for quantify dynamic similarity

  1. Quantifying Similarity in Seismic Polarizations

    Science.gov (United States)

    Eaton, D. W. S.; Jones, J. P.; Caffagni, E.

    2015-12-01

    Measuring similarity in seismic attributes can help identify tremor, low S/N signals, and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in signal-to-noise (S/N) ratio. Using records of the Mw=8.3 Sea of Okhotsk earthquake from CNSN broadband sensors in British Columbia and Yukon Territory, Canada, and vertical borehole array data from a monitoring experiment at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Because histogram distance metrics are bounded by [0, 1], clustering allows empirical time-frequency separation of seismic phase arrivals on single-station three-component records. Array processing for automatic seismic phase classification may be possible using subspace clustering of polarization similarity, but efficient algorithms are required to reduce the dimensionality.
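
    The record does not give the specific weighted-histogram metric the authors use, but the bounded [0, 1] distances it mentions can be illustrated with a common stand-in, the histogram-intersection distance, applied here to hypothetical azimuth histograms from two overlapping time windows (a minimal Python sketch; all data and bin choices are made up):

      import numpy as np

      def histogram_intersection_distance(h1, h2):
          # Distance in [0, 1] between two non-negative histograms: normalize to
          # unit mass, then 1 minus the sum of bin-wise minima (the intersection).
          h1 = np.asarray(h1, dtype=float) / np.sum(h1)
          h2 = np.asarray(h2, dtype=float) / np.sum(h2)
          return 1.0 - np.minimum(h1, h2).sum()

      # Hypothetical azimuth samples (degrees) from two overlapping time windows;
      # polarization azimuth is treated as defined modulo 180 degrees.
      rng = np.random.default_rng(0)
      az_win1 = rng.normal(40.0, 5.0, size=500) % 180.0
      az_win2 = rng.normal(45.0, 8.0, size=500) % 180.0
      bins = np.linspace(0.0, 180.0, 37)              # 5-degree bins
      h1, _ = np.histogram(az_win1, bins=bins)
      h2, _ = np.histogram(az_win2, bins=bins)
      print(histogram_intersection_distance(h1, h2))  # closer to 0 means more similar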

  2. Similarity transformed semiclassical dynamics

    Science.gov (United States)

    Van Voorhis, Troy; Heller, Eric J.

    2003-12-01

    In this article, we employ a recently discovered criterion for selecting important contributions to the semiclassical coherent state propagator [T. Van Voorhis and E. J. Heller, Phys. Rev. A 66, 050501 (2002)] to study the dynamics of many-dimensional problems. We show that the dynamics are governed by a similarity transformed version of the standard classical Hamiltonian. In this light, our selection criterion amounts to using trajectories generated with the untransformed Hamiltonian as approximate initial conditions for the transformed boundary value problem. We apply the new selection scheme to some multidimensional Henon-Heiles problems and compare our results to those obtained with the more sophisticated Herman-Kluk approach. We find that the present technique gives near-quantitative agreement with the standard results, but that the amount of computational effort is less than Herman-Kluk requires even when sophisticated integral smoothing techniques are employed in the latter.

  3. Dynamic similarity in erosional processes

    Science.gov (United States)

    Scheidegger, A.E.

    1963-01-01

    A study is made of the dynamic similarity conditions obtaining in a variety of erosional processes. The pertinent equations for each type of process are written in dimensionless form; the similarity conditions can then easily be deduced. The processes treated are: raindrop action, slope evolution and river erosion. © 1963 Istituto Geofisico Italiano.

  4. Quantifying similarity of pore-geometry in nanoporous materials

    Science.gov (United States)

    Lee, Yongjin; Barthel, Senja D.; Dłotko, Paweł; Moosavi, S. Mohamad; Hess, Kathryn; Smit, Berend

    2017-05-01

    In most applications of nanoporous materials the pore structure is as important as the chemical composition as a determinant of performance. For example, one can alter performance in applications like carbon capture or methane storage by orders of magnitude by only modifying the pore structure. For these applications it is therefore important to identify the optimal pore geometry and use this information to find similar materials. However, the mathematical language and tools to identify materials with similar pore structures, but different composition, have been lacking. We develop a pore recognition approach to quantify similarity of pore structures and classify them using topological data analysis. This allows us to identify materials with similar pore geometries, and to screen for materials that are similar to given top-performing structures. Using methane storage as a case study, we also show that materials can be divided into topologically distinct classes requiring different optimization strategies.

  5. Quantifying the evolutionary dynamics of language.

    Science.gov (United States)

    Lieberman, Erez; Michel, Jean-Baptiste; Jackson, Joe; Tang, Tina; Nowak, Martin A

    2007-10-11

    Human language is based on grammatical rules. Cultural evolution allows these rules to change over time. Rules compete with each other: as new rules rise to prominence, old ones die away. To quantify the dynamics of language evolution, we studied the regularization of English verbs over the past 1,200 years. Although an elaborate system of productive conjugations existed in English's proto-Germanic ancestor, Modern English uses the dental suffix, '-ed', to signify past tense. Here we describe the emergence of this linguistic rule amidst the evolutionary decay of its exceptions, known to us as irregular verbs. We have generated a data set of verbs whose conjugations have been evolving for more than a millennium, tracking inflectional changes to 177 Old-English irregular verbs. Of these irregular verbs, 145 remained irregular in Middle English and 98 are still irregular today. We study how the rate of regularization depends on the frequency of word usage. The half-life of an irregular verb scales as the square root of its usage frequency: a verb that is 100 times less frequent regularizes 10 times as fast. Our study provides a quantitative analysis of the regularization process by which ancestral forms gradually yield to an emerging linguistic rule.
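
    The square-root scaling reported above implies a simple relation worth spelling out: a verb used 100 times less frequently has a half-life shorter by a factor of sqrt(100) = 10, i.e., it regularizes about 10 times as fast. A one-line sketch of that arithmetic:

      import math

      def relative_half_life(frequency_ratio):
          # Half-life ratio implied by t_half proportional to sqrt(usage frequency).
          return math.sqrt(frequency_ratio)

      # A verb used 1/100th as often has a half-life sqrt(1/100) = 1/10 as long,
      # i.e. it regularizes roughly 10 times as fast.
      print(relative_half_life(1 / 100))   # -> 0.1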

  6. Similarity of vegetation dynamics during interglacial periods

    Science.gov (United States)

    Cheddadi, Rachid; de Beaulieu, Jacques-Louis; Jouzel, Jean; Andrieu-Ponel, Valérie; Laurent, Jeanne-Marine; Reille, Maurice; Raynaud, Dominique; Bar-Hen, Avner

    2005-01-01

    The Velay sequence (France) provides a unique, continuous, palynological record spanning the last four climatic cycles. A pollen-based reconstruction of temperature and precipitation displays marked climatic cycles. An analysis of the climate and vegetation changes during the interglacial periods reveals comparable features and identical major vegetation successions. Although Marine Isotope Stage (MIS) 11.3 and the Holocene had similar earth precessional variations, their correspondence in terms of vegetation dynamics is low. MIS 9.5, 7.5, and especially 5.5 display closer correlation to the Holocene than MIS 11.3. Ecological factors, such as the distribution and composition of glacial refugia or postglacial migration patterns, may explain these discrepancies. Comparison of ecosystem dynamics during the past five interglacials suggests that vegetation development in the current interglacial has no analogue from the past 500,000 years. PMID:16162676

  7. Wind Turbine Experiments at Full Dynamic Similarity

    Science.gov (United States)

    Miller, Mark; Kiefer, Janik; Westergaard, Carsten; Hultmark, Marcus

    2015-11-01

    Performing experiments with scaled-down wind turbines has traditionally been difficult due to the matching requirements of the two driving non-dimensional parameters, the Tip Speed Ratio (TSR) and the Reynolds number. Typically, full-size turbines must be used to provide the baseline cases for engineering models and computer simulations where flow similarity is required. We present a new approach to investigating wind turbine aerodynamics at full dynamic similarity by employing a high-pressure wind tunnel at Princeton University known as the High Reynolds number Test Facility (or HRTF). This facility allows for Reynolds numbers of up to 3 million (based on chord and velocity at the tip) while still matching the TSR on a geometrically similar, small-scale model. The background development of this project is briefly presented, including the design and manufacture of a model turbine. Following this, the power, thrust and wake data are discussed, in particular the scaling dependence on the Reynolds number. Supported under NSF grant CBET-1435254 (program manager Gregory Rorrer).
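
    The reason a pressurized tunnel can reach full-scale Reynolds numbers on a small rotor is that Re = rho*V*c/mu and the viscosity mu of air is nearly pressure-independent, so raising the density rho compensates for the reduced chord c at the same tip speed. The numbers below are purely illustrative, not the HRTF operating point:

      def reynolds(rho, velocity, chord, mu):
          # Chord-based Reynolds number Re = rho * V * c / mu.
          return rho * velocity * chord / mu

      mu_air = 1.8e-5        # Pa*s; air viscosity is nearly pressure-independent
      # Full-scale blade section at ambient density (illustrative numbers only).
      re_full = reynolds(rho=1.2, velocity=80.0, chord=1.0, mu=mu_air)
      # 1:100 model at the same tip speed, with density raised ~100x by pressurization.
      re_model = reynolds(rho=120.0, velocity=80.0, chord=0.01, mu=mu_air)
      print(f"full scale Re = {re_full:.2e}, pressurized model Re = {re_model:.2e}")
      # The two match, which is why the tip-speed ratio can be held fixed as well.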

  8. Quantifying Similarity and Distance Measures for Vector-Based Datasets: Histograms, Signals, and Probability Distribution Functions

    Science.gov (United States)

    2017-02-01

    Technical note, February 2017. There are a large number of different possible similarity and distance measures that can be applied to different datasets. In this technical

  9. Quantifying the Diversity and Similarity of Surgical Procedures Among Hospitals and Anesthesia Providers.

    Science.gov (United States)

    Dexter, Franklin; Ledolter, Johannes; Hindman, Bradley J

    2016-01-01

    In this Statistical Grand Rounds, we review methods for the analysis of the diversity of procedures among hospitals, the activities among anesthesia providers, etc. We apply multiple methods and consider their relative reliability and usefulness for perioperative applications, including calculations of SEs. We also review methods for comparing the similarity of procedures among hospitals, activities among anesthesia providers, etc. We again apply multiple methods and consider their relative reliability and usefulness for perioperative applications. The applications include strategic analyses (e.g., hospital marketing) and human resource analytics (e.g., comparisons among providers). Measures of diversity of procedures and activities (e.g., the Herfindahl and Gini-Simpson indices) are used for quantification of each facility (hospital) or anesthesia provider, one at a time. Diversity can be thought of as a summary measure. Thus, if the diversity of procedures for 48 hospitals is studied, the diversity (and its SE) is being calculated for each hospital. Likewise, the effective numbers of common procedures at each hospital can be calculated (e.g., by using the exponential of the Shannon index). Measures of similarity are pairwise assessments. Thus, if quantifying the similarity of procedures among cases with a break or handoff versus cases without a break or handoff, a similarity index represents a correlation coefficient. There are several different measures of similarity, and we compare their features and applicability for perioperative data. We rely extensively on sensitivity analyses to interpret observed values of the similarity index.
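
    The diversity measures named here (the Herfindahl index, the Gini-Simpson index, and the exponential of the Shannon index as an effective number of common procedures) are standard and straightforward to compute from one facility's procedure counts. A minimal sketch with made-up counts:

      import numpy as np

      def diversity_indices(counts):
          # Herfindahl, Gini-Simpson, and effective number of common procedures
          # (exponential of the Shannon index) for one facility's procedure counts.
          p = np.asarray(counts, dtype=float)
          p = p[p > 0] / p.sum()
          herfindahl = np.sum(p ** 2)
          gini_simpson = 1.0 - herfindahl
          shannon = -np.sum(p * np.log(p))
          return herfindahl, gini_simpson, np.exp(shannon)

      # Hypothetical case counts for five procedure categories at one hospital.
      h, gs, eff = diversity_indices([120, 60, 30, 20, 10])
      print(f"Herfindahl={h:.3f}  Gini-Simpson={gs:.3f}  effective procedures={eff:.2f}")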

  10. Interlinguistic similarity and language death dynamics

    CERN Document Server

    Mira, J

    2005-01-01

    We analyze the time evolution of a system of two coexisting languages (Castilian Spanish and Galician, both spoken in northwest Spain) in the framework of a model given by Abrams and Strogatz [Nature 424, 900 (2003)]. It is shown that, contrary to the model's initial prediction, a stable bilingual situation is possible if the languages in competition are similar enough. Similarity is described with a simple parameter, whose value can be estimated from fits of the data.
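
    The abstract builds on the Abrams-Strogatz two-language competition model; the similarity parameter and bilingual extension studied by the authors are not reproduced below. As context, a minimal sketch of the baseline Abrams-Strogatz dynamics only, with illustrative parameter values:

      def abrams_strogatz_step(x, s=0.55, a=1.31, c=1.0, dt=0.01):
          # One Euler step of the baseline two-language Abrams-Strogatz model:
          # x is the fraction speaking language X, s its prestige, a the volatility.
          # P(Y -> X) = c * s * x**a and P(X -> Y) = c * (1 - s) * (1 - x)**a.
          p_yx = c * s * x ** a
          p_xy = c * (1.0 - s) * (1.0 - x) ** a
          return x + dt * ((1.0 - x) * p_yx - x * p_xy)

      x = 0.3                      # illustrative initial fraction of X speakers
      for _ in range(200_000):
          x = abrams_strogatz_step(x)
      # In the baseline model one language always takes over; the similarity-based
      # modification discussed in the abstract is what allows stable coexistence.
      print(f"long-run fraction of X speakers: {x:.3f}")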

  11. Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurements

    Science.gov (United States)

    2015-03-31

    Final report (dated 31-03-2015) covering March 2013 to February 2015: Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurements, contract N00014-13-1-0352; Yue, Dick K.P.

  12. Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurement

    Science.gov (United States)

    2013-09-30

    Report dated 30 September 2013: Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurement. Lian Shen, Department of Mechanical Engineering & St. Anthony Falls Laboratory, University of Minnesota, Minneapolis, MN.

  13. Quantifying dynamic characteristics of human walking for comprehensive gait cycle.

    Science.gov (United States)

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H

    2013-09-01

    Normal human walking typically consists of phases during which the body is statically unbalanced while maintaining dynamic stability. Quantifying the dynamic characteristics of human walking can provide better understanding of gait principles. We introduce a novel quantitative index, the dynamic gait measure (DGM), for comprehensive gait cycle. The DGM quantifies the effects of inertia and the static balance instability in terms of zero-moment point and ground projection of center of mass and incorporates the time-varying foot support region (FSR) and the threshold between static and dynamic walking. Also, a framework of determining the DGM from experimental data is introduced, in which the gait cycle segmentation is further refined. A multisegmental foot model is integrated into a biped system to reconstruct the walking motion from experiments, which demonstrates the time-varying FSR for different subphases. The proof-of-concept results of the DGM from a gait experiment are demonstrated. The DGM results are analyzed along with other established features and indices of normal human walking. The DGM provides a measure of static balance instability of biped walking during each (sub)phase as well as the entire gait cycle. The DGM of normal human walking has the potential to provide some scientific insights in understanding biped walking principles, which can also be useful for engineering and clinical applications.
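
    The DGM itself combines several quantities and its exact formula is not given in this record; as a much-simplified illustration of one ingredient, the sketch below checks whether a zero-moment point lies inside a convex foot support region, the basic static-versus-dynamic balance test. The FSR polygon and test points are hypothetical:

      def inside_convex_polygon(point, vertices):
          # True if the 2-D point lies inside the convex polygon whose vertices
          # are listed counter-clockwise (a simplified foot support region).
          px, py = point
          n = len(vertices)
          for i in range(n):
              x1, y1 = vertices[i]
              x2, y2 = vertices[(i + 1) % n]
              cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
              if cross < 0:        # point is to the right of a CCW edge -> outside
                  return False
          return True

      # Hypothetical single-support FSR (metres) and two candidate ZMP locations.
      fsr = [(0.00, 0.00), (0.08, 0.00), (0.08, 0.25), (0.00, 0.25)]
      print(inside_convex_polygon((0.04, 0.10), fsr))   # True  -> statically balanced
      print(inside_convex_polygon((0.15, 0.10), fsr))   # False -> statically unbalanced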

  14. Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias.

    Directory of Open Access Journals (Sweden)

    Thomas Pfeiffer

    Meta-analyses play an important role in synthesizing evidence from diverse studies and datasets that address similar questions. A major obstacle for meta-analyses arises from biases in reporting. In particular, it is speculated that findings which do not achieve formal statistical significance are less likely reported than statistically significant findings. Moreover, the patterns of bias can be complex and may also depend on the timing of the research results and their relationship with previously published work. In this paper, we present an approach that is specifically designed to analyze large-scale datasets on published results. Such datasets are currently emerging in diverse research fields, particularly in molecular medicine. We use our approach to investigate a dataset on Alzheimer's disease (AD) that covers 1167 results from case-control studies on 102 genetic markers. We observe that initial studies on a genetic marker tend to be substantially more biased than subsequent replications. The chances for initial, statistically non-significant results to be published are estimated to be about 44% (95% CI, 32% to 63%) relative to statistically significant results, while statistically non-significant replications have almost the same chance to be published as statistically significant replications (84%; 95% CI, 66% to 107%). Early replications tend to be biased against initial findings, an observation previously termed the Proteus phenomenon: the chances for non-significant studies going in the same direction as the initial result are estimated to be lower than the chances for non-significant studies opposing the initial result (73%; 95% CI, 55% to 96%). Such dynamic patterns in bias are difficult to capture by conventional methods, where typically simple publication bias is assumed to operate. Our approach captures and corrects for complex dynamic patterns of bias, and thereby helps generate conclusions from published results that are more robust.

  15. Quantifying chaotic dynamics from integrate-and-fire processes

    Energy Technology Data Exchange (ETDEWEB)

    Pavlov, A. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Saratov State Technical University, Politehnicheskaya Str. 77, 410054 Saratov (Russian Federation); Pavlova, O. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Mohammad, Y. K. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Tikrit University Salahudin, Tikrit Qadisiyah, University Str. P.O. Box 42, Tikrit (Iraq); Kurths, J. [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  16. Simulating food web dynamics along a gradient: quantifying human influence.

    Directory of Open Access Journals (Sweden)

    Ferenc Jordán

    Realistically parameterized and dynamically simulated food-webs are useful tools to explore the importance of the functional diversity of ecosystems, and in particular relations between the dynamics of species and the whole community. We present a stochastic dynamical food web simulation for the Kelian River (Borneo). The food web was constructed for six different locations, arrayed along a gradient of increasing human perturbation (mostly resulting from gold mining activities) along the river. Along the river, the relative importance of grazers, filterers and shredders decreases with increasing disturbance downstream, while predators become more dominant in governing eco-dynamics. Human activity led to increased turbidity and sedimentation which adversely impacts primary productivity. Since the main difference between the study sites was not the composition of the food webs (structure is quite similar) but the strengths of interactions and the abundance of the trophic groups, a dynamical simulation approach seemed to be useful to better explain human influence. In the pristine river (study site 1), when comparing a structural version of our model with the dynamical model we found that structurally central groups such as omnivores and carnivores were not the most important ones dynamically. Instead, primary consumers such as invertebrate grazers and shredders generated a greater dynamical response. Based on the dynamically most important groups, bottom-up control is replaced by the predominant top-down control regime as distance downstream and human disturbance increased. An important finding, potentially explaining the poor structure to dynamics relationship, is that indirect effects are at least as important as direct ones during the simulations. We suggest that our approach and this simulation framework could serve systems-based conservation efforts. Quantitative indicators on the relative importance of trophic groups and the mechanistic modeling

  17. Simulating food web dynamics along a gradient: quantifying human influence.

    Science.gov (United States)

    Jordán, Ferenc; Gjata, Nerta; Mei, Shu; Yule, Catherine M

    2012-01-01

    Realistically parameterized and dynamically simulated food-webs are useful tools to explore the importance of the functional diversity of ecosystems, and in particular relations between the dynamics of species and the whole community. We present a stochastic dynamical food web simulation for the Kelian River (Borneo). The food web was constructed for six different locations, arrayed along a gradient of increasing human perturbation (mostly resulting from gold mining activities) along the river. Along the river, the relative importance of grazers, filterers and shredders decreases with increasing disturbance downstream, while predators become more dominant in governing eco-dynamics. Human activity led to increased turbidity and sedimentation which adversely impacts primary productivity. Since the main difference between the study sites was not the composition of the food webs (structure is quite similar) but the strengths of interactions and the abundance of the trophic groups, a dynamical simulation approach seemed to be useful to better explain human influence. In the pristine river (study site 1), when comparing a structural version of our model with the dynamical model we found that structurally central groups such as omnivores and carnivores were not the most important ones dynamically. Instead, primary consumers such as invertebrate grazers and shredders generated a greater dynamical response. Based on the dynamically most important groups, bottom-up control is replaced by the predominant top-down control regime as distance downstream and human disturbance increased. An important finding, potentially explaining the poor structure to dynamics relationship, is that indirect effects are at least as important as direct ones during the simulations. We suggest that our approach and this simulation framework could serve systems-based conservation efforts. Quantitative indicators on the relative importance of trophic groups and the mechanistic modeling of eco-dynamics

  18. A novel method for quantifying similarities between oscillatory neural responses in wavelet time-frequency power profiles.

    Science.gov (United States)

    Sato, Takaaki; Kajiwara, Riichi; Takashima, Ichiro; Iijima, Toshio

    2016-04-01

    Quantifying similarities and differences between neural response patterns is an important step in understanding neural coding in sensory systems. It is difficult, however, to compare the degree of similarity among transient oscillatory responses. We developed a novel method of wavelet correlation analysis for quantifying similarity between transient oscillatory responses, and tested the method with olfactory cortical responses. In the anterior piriform cortex (aPC), the largest area of the primary olfactory cortex, odors induce inhibitory activities followed by transient oscillatory local field potentials (osci-LFPs). Qualitatively, the resulting time courses of osci-LFPs for identical odors were modestly different. We then compared several methods for quantifying the similarity between osci-LFPs for identical or different odors. Using fast Fourier transform band-pass filters, a conventional method demonstrated high correlations of the 0-2 Hz components for both identical and different odors. None of the conventional methods tested demonstrated a clear correlation between osci-LFPs. However, wavelet correlation analysis resolved a stimulus dependency of 2-45 Hz osci-LFPs in the aPC output layer, and produced experience-dependent high correlations in the input layer between some of the identical or different odors. These results suggest that redundancy in the neural representation of sensory information may change in the aPC. This wavelet correlation analysis may be useful for quantifying the similarities of transient oscillatory neural responses.
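
    The exact wavelet and correlation definitions used by the authors are not given in this record. As a generic illustration of correlating time-frequency power profiles, the sketch below computes a simple Morlet-based power map for two hypothetical responses and takes the Pearson correlation of the two maps (all signals and parameters are made up):

      import numpy as np

      def morlet_power(x, fs, freqs, w0=6.0):
          # Time-frequency power from a simple Morlet continuous wavelet transform.
          x = np.asarray(x, dtype=float)
          power = np.empty((len(freqs), len(x)))
          for i, f in enumerate(freqs):
              s = w0 * fs / (2.0 * np.pi * f)            # scale for centre frequency f
              half = int(min(4 * s, (len(x) - 1) // 2))  # truncate support to the signal
              t = np.arange(-half, half + 1)
              wavelet = np.exp(1j * w0 * t / s) * np.exp(-t ** 2 / (2 * s ** 2)) / np.sqrt(s)
              power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
          return power

      def wavelet_similarity(x1, x2, fs, freqs):
          # Pearson correlation between two time-frequency power profiles.
          p1 = morlet_power(x1, fs, freqs).ravel()
          p2 = morlet_power(x2, fs, freqs).ravel()
          return np.corrcoef(p1, p2)[0, 1]

      # Two hypothetical oscillatory responses (1 s at 1 kHz) with nearby frequencies.
      fs = 1000.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      rng = np.random.default_rng(1)
      lfp_a = np.sin(2 * np.pi * 20 * t) * np.exp(-t / 0.3) + 0.1 * rng.standard_normal(t.size)
      lfp_b = np.sin(2 * np.pi * 22 * t) * np.exp(-t / 0.3) + 0.1 * rng.standard_normal(t.size)
      print(wavelet_similarity(lfp_a, lfp_b, fs, freqs=np.arange(2.0, 46.0, 2.0)))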

  19. Predicting the evolution of complex networks via similarity dynamics

    Science.gov (United States)

    Wu, Tao; Chen, Leiting; Zhong, Linfeng; Xian, Xingping

    2017-01-01

    Almost all real-world networks are subject to constant evolution, and plenty of them have been investigated empirically to uncover the underlying evolution mechanism. However, the evolution prediction of dynamic networks still remains a challenging problem. The crux of this matter is to estimate the future network links of dynamic networks. This paper studies the evolution prediction of dynamic networks within the link prediction paradigm. To estimate the likelihood of the existence of links more accurately, an effective and robust similarity index is presented by exploiting network structure adaptively. Moreover, most of the existing link prediction methods do not make a clear distinction between future links and missing links. In order to predict the future links, the networks are regarded as dynamic systems in this paper, and a similarity updating method, the spatial-temporal position drift model, is developed to simulate the evolutionary dynamics of node similarity. Then the updated similarities are used as input information for the future links' likelihood estimation. Extensive experiments on real-world networks suggest that the proposed similarity index performs better than baseline methods and the position drift model performs well for evolution prediction in real-world evolving networks.
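
    The paper's adaptive similarity index and spatial-temporal position drift model are not specified in this record. As a baseline for the general idea of structural similarity in link prediction, the sketch below scores candidate future links with the standard resource-allocation index on a tiny hypothetical network:

      def resource_allocation_index(adj, u, v):
          # Baseline structural similarity between nodes u and v: the sum over
          # common neighbours z of 1 / degree(z) (the resource-allocation index).
          return sum(1.0 / len(adj[z]) for z in adj[u] & adj[v])

      # Tiny hypothetical network as an adjacency map of neighbour sets.
      adj = {
          "a": {"b", "c", "d"},
          "b": {"a", "c"},
          "c": {"a", "b", "e"},
          "d": {"a"},
          "e": {"c"},
      }
      # Rank candidate future links; higher scores are predicted to appear first.
      candidates = [("b", "d"), ("d", "e"), ("a", "e")]
      scores = {pair: resource_allocation_index(adj, *pair) for pair in candidates}
      print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))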

  20. A Self-Similar Dynamics in Viscous Spheres

    Science.gov (United States)

    Barreto, W.; Ovalle, J.; Rodríguez, B.

    1998-01-01

    We study the evolution of radiating and viscous fluid spheres assuming an additional homothetic symmetry on the spherically symmetric space-time. We match a very simple solution to the symmetry equations with the exterior one (Vaidya). We then obtain a system of two ordinary differential equations which rule the dynamics, and find a self-similar collapse which is shear-free and with a barotropic equation of state. Considering a huge set of initial self-similar dynamics states, we work out a model with an acceptable physical behavior.

  1. A self-similar dynamics in viscous spheres

    CERN Document Server

    Barreto, W; Rodríguez, B

    1998-01-01

    We study the evolution of radiating and viscous fluid spheres assuming an additional homothetic symmetry on the spherically symmetric space-time. We match a very simple solution to the symmetry equations with the exterior one (Vaidya). We then obtain a system of two ordinary differential equations which rule the dynamics, and find a self-similar collapse which is shear-free and with a barotropic equation of state. Considering a huge set of initial self-similar dynamics states, we work out a model with an acceptable physical behavior.

  2. Similarity theory based method for MEMS dynamics analysis

    Institute of Scientific and Technical Information of China (English)

    LI Gui-xian; PENG Yun-feng; ZHANG Xin

    2008-01-01

    A new method for MEMS dynamics analysis is presented, based on similarity theory. With this method, two systems' similarities can be captured in terms of physical quantities and governing equations across different energy fields, and the unknown dynamic characteristics of one of the systems can then be analyzed according to the similar ones of the other system. The possibility of establishing a pair of similar systems between MEMS and other energy systems is also discussed based on the equivalence between mechanics and electrics, and the feasibility of applying this method is then proven by an example, in which the squeeze-film damping force in MEMS and the current of its equivalent circuit established by this method are compared.

  3. Generalized quantum similarity in atomic systems: A quantifier of relativistic effects

    Science.gov (United States)

    Martín, A. L.; Angulo, J. C.; Antolín, J.; López-Rosa, S.

    2017-02-01

    Quantum similarity between Hartree-Fock and Dirac-Fock electron densities reveals the depth of relativistic effects on the core and valence regions in atomic systems. The results emphasize the relevance of differences in the outermost subshells, as pointed out in recent studies by means of Shannon-like functionals. In this work, a generalized similarity functional allows us to go far beyond the Shannon-based analyses. The numerical results for systems throughout the Periodic Table show that discrepancies between the relativistic and non-relativistic descriptions are patently governed by shell-filling patterns.
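
    The generalized similarity functional used in this work goes beyond the record's description; a common baseline is the Carbo-type quantum similarity index, the normalized overlap of two electron densities, which equals 1 when the densities are proportional. A minimal radial sketch with illustrative hydrogen-like densities standing in for Hartree-Fock versus Dirac-Fock results:

      import numpy as np

      def carbo_similarity(rho_a, rho_b, r):
          # Carbo-type similarity index between two spherically averaged densities:
          # integral(a*b) / sqrt(integral(a*a) * integral(b*b)), radial weight 4*pi*r^2.
          w = 4.0 * np.pi * r ** 2
          zab = np.trapz(rho_a * rho_b * w, r)
          zaa = np.trapz(rho_a ** 2 * w, r)
          zbb = np.trapz(rho_b ** 2 * w, r)
          return zab / np.sqrt(zaa * zbb)

      # Illustrative hydrogen-like 1s densities (atomic units) with two effective
      # charges, standing in for an uncontracted vs. a slightly contracted core.
      r = np.linspace(1e-4, 20.0, 5000)
      rho_a = (1.00 ** 3 / np.pi) * np.exp(-2.0 * 1.00 * r)
      rho_b = (1.05 ** 3 / np.pi) * np.exp(-2.0 * 1.05 * r)
      print(carbo_similarity(rho_a, rho_b, r))    # close to, but below, 1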

  4. Investigation of Dynamics of Self-Similarly Evolving Magnetic Clouds

    CERN Document Server

    Dalakishvili, Giorgi; Lapenta, Giovanni; Poedts, Stefaan

    2010-01-01

    Magnetic clouds (MCs) are "magnetized plasma clouds" moving in the solar wind. MCs transport magnetic flux and helicity away from the Sun. These structures are not stationary but feature temporal evolution. Commonly, simplified MC models are considered. The goal of the present study is to investigate the dynamics of more general, radially expanding MCs. They are considered as cylindrically symmetric magnetic structures with low plasma β. In order to study MC evolution, a self-similar approach and a numerical approach are used. It is shown that the forces are balanced in the considered self-similarly evolving, cylindrically symmetric magnetic structures. Explicit analytical expressions for magnetic field, plasma velocity, density and pressure within MCs are derived. These solutions are characterized by conserved values of magnetic flux and helicity. We also investigate the dynamics of self-similarly evolving MCs by means of the numerical code "Graale". In addition, their expansion in a medium wit...

  5. Quantum process tomography quantifies coherence transfer dynamics in vibrational exciton.

    Science.gov (United States)

    Chuntonov, Lev; Ma, Jianqiang

    2013-10-31

    Quantum coherence has been a subject of great interest in many scientific disciplines. However, detailed characterization of the quantum coherence in molecular systems, especially its transfer and relaxation mechanisms, still remains a major challenge. The difficulties arise in part because the spectroscopic signatures of the coherence transfer are typically overwhelmed by other excitation-relaxation processes. We use quantum process tomography (QPT) via two-dimensional infrared spectroscopy to quantify the rate of the elusive coherence transfer between two vibrational exciton states. QPT retrieves the dynamics of the dissipative quantum system directly from the experimental observables. It thus serves as an experimental alternative to theoretical models of the system-bath interaction and can be used to validate these theories. Our results for coupled carbonyl groups of a diketone molecule in chloroform, used as a benchmark system, reveal the nonsecular nature of the interaction between the exciton and the Markovian bath and open the door for the systematic studies of the dissipative quantum systems dynamics in detail.

  6. Similarity

    Science.gov (United States)

    Apostol, Tom M. (Editor)

    1990-01-01

    In this 'Project Mathematics!' series, sponsored by the California Institute of Technology (Caltech), the mathematical concept of similarity is presented. The history of similarity and its real-life applications are discussed using actual film footage and computer animation. Terms used and various concepts of size, shape, ratio, area, and volume are demonstrated. The similarity of polygons, solids, congruent triangles, internal ratios, perimeters, and line segments is shown using the previously mentioned concepts.

  7. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials.

    Science.gov (United States)

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials.
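
    Waveform Similarity Analysis is described as a cross-correlation template comparison; the authors' exact scoring and thresholds are not given in this record. The sketch below shows the generic approach, sliding a template over a noisy trace with normalized cross-correlation and flagging windows above an arbitrary threshold (all waveforms, amplitudes, and thresholds are invented):

      import numpy as np

      def normalized_xcorr(signal, template):
          # Pearson correlation of the template against every same-length window
          # of the signal; values lie in [-1, 1], with 1 meaning a shape match.
          n = len(template)
          t = (template - template.mean()) / (template.std() * n)
          scores = np.empty(len(signal) - n + 1)
          for i in range(len(scores)):
              w = signal[i:i + n]
              scores[i] = np.sum(t * (w - w.mean()) / (w.std() + 1e-12))
          return scores

      # Hypothetical biphasic template and a noisy trace containing two scaled copies.
      rng = np.random.default_rng(2)
      tt = np.linspace(0.0, 1.0, 60)
      template = np.sin(2 * np.pi * tt) * np.exp(-3 * tt)
      trace = 0.3 * rng.standard_normal(1000)
      trace[200:260] += 1.0 * template           # event near sample 200
      trace[640:700] += 1.5 * template           # larger event near sample 640

      scores = normalized_xcorr(trace, template)
      detections = np.flatnonzero(scores > 0.6)  # candidate event onsets (sample indices)
      print(detections, round(scores.max(), 2))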

  8. Emergence of self-similarity in football dynamics

    CERN Document Server

    Kijima, Akifumi; Shima, Hiroyuki; Yamamoto, Yuji

    2014-01-01

    The multiplayer dynamics of a football game is analyzed to unveil self-similarities in the time evolution of player and ball positioning. Temporal fluctuations in both the team-turf boundary and the ball location are uncovered to follow the rules of fractional Brownian motion with a Hurst exponent of H=0.7. The persistence time below which self-similarity holds is found to be several tens of seconds, implying a characteristic time scale that governs far-from-equilibrium motion on a playing field.

  9. A novel similarity comparison approach for dynamic ECG series.

    Science.gov (United States)

    Yin, Hong; Zhu, Xiaoqian; Ma, Shaodong; Yang, Shuqiang; Chen, Liqian

    2015-01-01

    The heart sound signal is a reflection of heart and vascular system motion. A long-term continuous electrocardiogram (ECG) contains important information which can be helpful in preventing heart failure. A single piece of a long-term ECG recording usually consists of more than one hundred thousand data points, making it difficult to derive hidden features that may be reflected through dynamic ECG monitoring, and very time-consuming to analyze. In this paper, a Dynamic Time Warping method based on MapReduce (MRDTW) is proposed to make prognoses of possible lesions in patients. Through comparison of a real-time ECG of a patient with reference sets of normal and problematic cardiac waveforms, the experimental results reveal that our approach not only retains high accuracy, but also greatly improves the efficiency of the similarity measure in dynamic ECG series.
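
    The MapReduce parallelization is the paper's contribution and is not reproduced here; the core comparison each task would perform is a dynamic time warping distance. A minimal, unoptimized DTW sketch applied to two hypothetical beat-like pulses, one slightly time-stretched:

      import numpy as np

      def dtw_distance(a, b):
          # Classic O(len(a) * len(b)) dynamic time warping distance for 1-D series.
          a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
          return cost[n, m]

      # Two hypothetical beat-like pulses: the same shape, one slightly time-stretched.
      beat_ref = np.exp(-((np.linspace(0, 1, 100) - 0.50) ** 2) / 0.002)
      beat_test = np.exp(-((np.linspace(0, 1, 120) - 0.55) ** 2) / 0.002)
      print(dtw_distance(beat_ref, beat_test))    # stays small despite the warping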

  10. A Dynamic Method for Quantifying Natural Warming in Urban Areas

    Institute of Scientific and Technical Information of China (English)

    HE Yu-Ting; JIA Gen-Suo

    2012-01-01

    In the study of global warming, one of the main issues is the quantification of the urbanization effect in climate records. Previous studies have contributed much to removing the impact of urbanization from surface air temperature by carefully selecting reference stations. However, due to the insufficient number of stations free from the influence of urbanization and the different criteria used to select reference stations, there are still significant controversies about the intensity of the impact of urbanization on temperature records. This study proposes a dynamic method for quantifying natural warming using information on urbanization from every station acquired from remote sensing (RS) data instead of selecting reference stations. Two different spatial scales were applied to examine the impact of urbanization, but little difference was found, indicating the stability of this method. The results showed a significant difference between the original temperature data and the homogenized data: urban warming accounted for approximately 64% of the warming in the original temperature records but only approximately 20% in the homogenized temperature records.

  11. Dynamic Similarities in Pathological Forms of α-Synuclein

    Science.gov (United States)

    Bradley, Ryan; Maranas, Janna

    2010-03-01

    The natively unstructured, membrane-bound protein α-synuclein is thought to play a role in vesicle trafficking. Its native function is subverted in the pathogenesis of Parkinson's disease, during which it forms fibrillar cytoplasmic aggregates in specific regions in the brain. It is believed that oligomers of α-synuclein are the toxic species, whereas sequestration into fibrils is neuroprotective. Evidence that α-synuclein changes shape as it interacts with membranes suggests that altered dynamics may drive the initial aggregation steps. To test this hypothesis, we conducted separate molecular dynamics simulations of native, mutated, and chemically-damaged forms of α-synuclein, representing the distinct genetic and sporadic causes of the disease. We measured the fractal dimension of individual amino-acid trajectories in order to identify differences in mobility between each simulated protein. Trajectories with higher fractal dimensions are space-filling, and thus correspond to more random, constrained motion; conversely, lower fractal dimensions indicate more directed motions. Although the disease-causing variants of α-synuclein are distinct, they show highly similar dynamical differences from the native form. This suggests that altered dynamics may facilitate oligomerization.

  12. Self-similar solutions of NLS-type dynamical systems

    CERN Document Server

    Boiti, M; Pempinelli, F; Shabat, A B

    1999-01-01

    We study self-similar solutions of NLS-type dynamical systems. A Lagrangian approach is used to show that they can be reduced to three canonical forms, which are related by Miura transformations. The fourth Painlevé equation (PIV) is central to our consideration: it connects the Heisenberg model, the Volterra model and the Toda model to each other. The connection between the rational solutions of PIV and a Coulomb gas in a parabolic potential is established. We also discuss the possibility of obtaining an exact solution for an optical soliton, i.e., of the NLS equation with time-dependent dispersion.

  13. Dynamics and processing in finite self-similar networks.

    Science.gov (United States)

    DeDeo, Simon; Krakauer, David C

    2012-09-07

    A common feature of biological networks is the geometrical property of self-similarity. Molecular regulatory networks through to circulatory systems, nervous systems, social systems and ecological trophic networks show self-similar connectivity at multiple scales. We analyse the relationship between topology and signalling in contrasting classes of such topologies. We find that networks differ in their ability to contain or propagate signals between arbitrary nodes in a network depending on whether they possess branching or loop-like features. Networks also differ in how they respond to noise, such that one allows for greater integration at high noise, and this performance is reversed at low noise. Surprisingly, small-world topologies, with diameters logarithmic in system size, have slower dynamical time scales, and may be less integrated (more modular) than networks with longer path lengths. All of these phenomena are essentially mesoscopic, vanishing in the infinite limit but producing strong effects at sizes and time scales relevant to biology.

  14. Quantifying Salmonella population dynamics in water and biofilms.

    Science.gov (United States)

    Sha, Qiong; Vattem, Dhiraj A; Forstner, Michael R J; Hahn, Dittmar

    2013-01-01

    Members of the bacterial genus Salmonella are recognized worldwide as major zoonotic pathogens often found to persist in non-enteric environments including heterogeneous aquatic biofilms. In this study, Salmonella isolates that had been detected repeatedly over time in aquatic biofilms at different sites in Spring Lake, San Marcos, Texas, were identified as serovars Give, Thompson, Newport and -:z10:z39. Pathogenicity results from feeding studies with the nematode Caenorhabditis elegans as host confirmed that these strains were pathogenic, with Salmonella-fed C. elegans dying faster (mean survival time between 3 and 4 days) than controls, i.e., Escherichia coli-fed C. elegans (mean survival time of 9.5 days). Cells of these isolates inoculated into water at a density of up to 10^6 cells ml^-1 declined numerically by 3 orders of magnitude within 2 days, reaching the detection limit of our quantitative polymerase chain reaction (qPCR)-based quantification technique (i.e., 10^3 cells ml^-1). Similar patterns were obtained for cells in heterogeneous aquatic biofilms developed on tiles and originally free of Salmonella that were kept in the inoculated water. Cell numbers increased during the first days to more than 10^7 cells cm^-2, and then declined over time. Ten-fold higher cell numbers of Salmonella inoculated into water or into biofilm resulted in similar patterns of population dynamics, though cells in biofilms remained detectable with numbers around 10^4 cells cm^-2 after 4 weeks. Independent of detectability by qPCR, samples of all treatments harbored viable salmonellae that resembled the inoculated isolates after 4 weeks of incubation. These results demonstrate that pathogenic salmonellae were isolated from heterogeneous aquatic biofilms and that they could persist and stay viable in such biofilms in high numbers for some time.

  15. POSTFUNDOPLICATION DYSPHAGIA CAUSES SIMILAR WATER INGESTION DYNAMICS AS ACHALASIA.

    Science.gov (United States)

    Dantas, Roberto Oliveira; Santos, Carla Manfredi; Cassiani, Rachel Aguiar; Alves, Leda Maria Tavares; Nascimento, Weslania Viviane

    2016-01-01

    After surgical treatment of gastroesophageal reflux disease, dysphagia is a symptom in the majority of patients, with a decrease in intensity over time. However, some patients may have persistent dysphagia. The objective of this investigation was to evaluate the dynamics of water ingestion in patients with postfundoplication dysphagia compared with patients with dysphagia caused by achalasia, idiopathic or consequent to Chagas' disease, and controls. Thirty-three patients with postfundoplication dysphagia, assessed more than one year after surgery, together with 50 patients with Chagas' disease, 27 patients with idiopathic achalasia and 88 controls were all evaluated by the water swallow test. They drank, in triplicate, 50 mL of water without breaks while being precisely timed and the number of swallows counted. Also measured were: (a) the inter-swallow interval, the time to complete the task divided by the number of swallows during the task; (b) the swallowing flow, the volume drunk divided by the time taken; and (c) the volume of each swallow, the volume drunk divided by the number of swallows. Patients with postfundoplication dysphagia, Chagas' disease and idiopathic achalasia took longer to ingest all the volume, had an increased number of swallows, an increase in interval between swallows, a decrease in swallowing flow and a decrease in water volume of each swallow compared with the controls. There was no difference between the three groups of patients. There was no correlation between postfundoplication time and the results. It was concluded that patients with postfundoplication dysphagia have similar water ingestion dynamics as patients with achalasia.

  16. Quantifying the dynamics of coupled networks of switches and oscillators.

    Directory of Open Access Journals (Sweden)

    Matthew R Francis

    Complex network dynamics have been analyzed with models of systems of coupled switches or systems of coupled oscillators. However, many complex systems are composed of components with diverse dynamics whose interactions drive the system's evolution. We, therefore, introduce a new modeling framework that describes the dynamics of networks composed of both oscillators and switches. Both oscillator synchronization and switch stability are preserved in these heterogeneous, coupled networks. Furthermore, this model recapitulates the qualitative dynamics for the yeast cell cycle consistent with the hypothesized dynamics resulting from decomposition of the regulatory network into dynamic motifs. Introducing feedback into the cell-cycle network induces qualitative dynamics analogous to limitless replicative potential that is a hallmark of cancer. As a result, the proposed model of switch and oscillator coupling provides the ability to incorporate mechanisms that underlie the synchronized stimulus response ubiquitous in biochemical systems.

  17. Similarity between humans and foams in aging dynamics

    Science.gov (United States)

    Weon, Byung Mook; Stewart, Peter S.

    2014-03-01

    Foams are cellular networks between two immiscible phases. Foams are initially unstable and finally evolve toward a state of lower energy through sequential coalescences of bubbles. In physics, foams are model systems for materials that minimize surface energy. We study coalescence dynamics of clean foams using numerical simulations with a network model. Initial clean foams consist of equally pressurized bubbles and a low fraction of liquid films without stabilizing agents. Aging of clean foams occurs with time as bubbles rapidly coalesce by film rupture and finally evolve toward a new quasi-equilibrium state. Here we find that foam aging is analogous to biological aging: the death rate of bubbles increases exponentially with time, which is similar to the Gompertz mortality law for biological populations. The coalescence evolution of foams is self-similar regardless of initial conditions. The population change of bubbles is well described by a Boltzmann sigmoidal function, indicating that the foam aging is a phase transition phenomenon. This result suggests that foams can be useful model systems for giving insights into biological aging.

  18. Unfolding the resident-invader dynamics of similar strategies.

    Science.gov (United States)

    Dercole, Fabio; Geritz, Stefan A H

    2016-04-01

    We investigate the competition between two groups of similar agents in the restricted, but classical context of unstructured populations varying in continuous time in an isolated, homogeneous, and constant abiotic environment. Individual behavioral and phenotypic traits are quantified by one-dimensional strategies and intra- as well as inter-specific interactions are described in the vicinity of a stationary regime. Some known results are revisited: invasion by a new strategy generically implies the substitution of the former resident; and resident-invader coexistence is possible close to singular strategies (the stationary points of the invasion fitness) and is generically protected, in that each of the two competing groups can invade the other. An (almost known) old conjecture is shown true: competition close to a singular strategy is "essentially Lotka-Volterra": dominance of one strategy, protected coexistence at an intermediate equilibrium, and mutual exclusion are the generic outcomes. And the unfolding of the competition scenarios is completed with the analysis of three degenerate singular strategies (characterized by vanishing second-order fitness derivatives) near which resident-invader coexistence can be unprotected. Our approach is based on the series expansion of a generic demographic model, w.r.t. the small strategy difference between the two competing groups, and on known results on time-scale separation and bifurcation theories. The analysis is carried out up to third order and is extendable to any order. For each order, explicit genericity conditions under which higher orders can be neglected are derived and, interestingly, they are known prior to invasion. An important result is that degeneracies up to third-order are required to have more than one stable way of coexistence. Such degeneracies can be due to particular symmetries in the model formulation, and breaking the genericity conditions provides a direct way to draw biological interpretations. The developed

  19. Simulating Food Web Dynamics along a Gradient: Quantifying Human Influence

    OpenAIRE

    Ferenc Jordán; Nerta Gjata; Shu Mei; Yule, Catherine M.

    2012-01-01

    Realistically parameterized and dynamically simulated food-webs are useful tool to explore the importance of the functional diversity of ecosystems, and in particular relations between the dynamics of species and the whole community. We present a stochastic dynamical food web simulation for the Kelian River (Borneo). The food web was constructed for six different locations, arrayed along a gradient of increasing human perturbation (mostly resulting from gold mining activities) along the river...

  20. Beneath aggregate stability - quantifying thermodynamic properties that drive soil structure dynamics

    Science.gov (United States)

    Hallett, Paul; Ogden, Mike; Karim, Kamal; Schmidt, Sonja; Yoshida, Shuichiro

    2014-05-01

    Soil aggregates are a figment of your energy input and initial boundary conditions, so the basic thermodynamics that drive soil structure formation are needed to understand soil structure dynamics. Using approaches from engineering and materials science, it is possible to quantify basic thermodynamic properties, but at present tests are generally limited to highly simplified, often remoulded, soil structures. Although this presents limitations, the understanding of underlying processes driving soil structure dynamics is poor, which could be argued is due to the enormity of the challenge of such an incredibly complex system. Other areas of soil science, particularly soil water physics, relied on simplified structures to develop theories that can now be applied to more complex pore structures. We argue that a similar approach needs to gain prominence in the study of soil aggregates. An overview will be provided of approaches adapted from other disciplines to quantify particle bonding, fracture resistance, rheology and capillary cohesion of soil that drive its aggregation and structure dynamics. All of the tests are limited as they require simplified soil structures, ranging from repacked soils to flat surfaces coated with mineral particles. A brief summary of the different approaches will demonstrate the benefits of collecting basic physical data relevant to soil structure dynamics, including examples where they are vital components of models. The soil treatments we have tested with these engineering and materials science approaches include field soils from a range of management practices with differing clay and organic matter contents, amendment and incubation of soils with a range of microorganisms and substrates in the laboratory, model clay-sand mixes and planar mineral surfaces with different topologies. In addition to advocating the wider adoption of these approaches, we will discuss limitations and hope to stimulate discussion on how approaches could be improved

  1. Quantifying sudden changes in dynamical systems using symbolic networks

    CERN Document Server

    Masoller, Cristina; Ayad, Sarah; Gustave, Francois; Barland, Stephane; Pons, Antonio J; Gómez, Sergio; Arenas, Alex

    2015-01-01

    We characterise the evolution of a dynamical system by combining two well-known complex systems' tools, namely, symbolic ordinal analysis and networks. From the ordinal representation of a time series we construct a network in which each node's weight represents the probability of an ordinal pattern (OP) appearing in the symbolic sequence and each edge's weight represents the probability of a transition between two consecutive OPs. Several network-based diagnostics are then proposed to characterize the dynamics of different systems: logistic, tent and circle maps. We show that these diagnostics are able to capture changes produced in the dynamics as a control parameter is varied. We also apply our new measures to empirical data from semiconductor lasers and show that they are able to anticipate the polarization switchings, thus providing early warning signals of abrupt transitions.

  2. Quantifying the Dynamical Complexity of Chaotic Time Series

    Science.gov (United States)

    Politi, Antonio

    2017-04-01

    A powerful approach is proposed for the characterization of chaotic signals. It is based on the combined use of two classes of indicators: (i) the probability of suitable symbolic sequences (obtained from the ordinal patterns of the corresponding time series); (ii) the width of the corresponding cylinder sets. This way, much information can be extracted and used to quantify the complexity of a given signal. As an example of the potentiality of the method, I introduce a modified permutation entropy which allows for quantitative estimates of the Kolmogorov-Sinai entropy in hyperchaotic models, where other methods would be impractical. As a by-product, estimates of the fractal dimension of the underlying attractors are possible as well.
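
    The modified permutation entropy introduced in this work is not specified in the record; for orientation, the standard (unmodified) permutation entropy is simply the Shannon entropy of ordinal-pattern probabilities, normalized by log(m!). A minimal sketch comparing a chaotic logistic-map series with white noise:

      import numpy as np
      from collections import Counter
      from math import factorial, log

      def permutation_entropy(x, m=3, tau=1):
          # Standard permutation entropy of order m and delay tau, normalized to [0, 1].
          x = np.asarray(x, dtype=float)
          patterns = Counter()
          for i in range(len(x) - (m - 1) * tau):
              window = x[i:i + m * tau:tau]
              patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
          total = sum(patterns.values())
          probs = np.array([c / total for c in patterns.values()])
          return float(-np.sum(probs * np.log(probs)) / log(factorial(m)))

      # Chaotic logistic-map series vs. white noise (illustrative comparison only).
      x = np.empty(5000)
      x[0] = 0.4
      for i in range(1, len(x)):
          x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
      rng = np.random.default_rng(3)
      print(permutation_entropy(x, m=4), permutation_entropy(rng.random(5000), m=4))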

  3. Quantifying Chiral Magnetic Effect from Anomalous-Viscous Fluid Dynamics

    CERN Document Server

    Jiang, Yin; Yin, Yi; Liao, Jinfeng

    2016-01-01

    The Chiral Magnetic Effect (CME) is the macroscopic manifestation of the fundamental chiral anomaly in a many-body system of chiral fermions, and emerges as an anomalous transport current in the fluid dynamics framework. Experimental observation of CME is of great interest and has been reported in Dirac and Weyl semimetals. Significant efforts have also been made to search for CME in heavy ion collisions. Encouraging evidence of CME-induced charge separation in those collisions has been reported, albeit with ambiguity due to background contamination. Crucial for addressing this issue is the need for quantitative predictions of the CME signal with sophisticated modeling. In this paper we develop such a tool, the Anomalous Viscous Fluid Dynamics (AVFD) framework, which simulates the evolution of fermion currents in QGP on top of the data-validated VISHNU bulk hydrodynamic flow. With realistic initial conditions and magnetic field lifetime, the AVFD-predicted CME signal could be quantitatively consistent with measured ch...

  4. Schumpeterian economic dynamics as a quantifiable model of evolution

    Science.gov (United States)

    Thurner, Stefan; Klimek, Peter; Hanel, Rudolf

    2010-07-01

    We propose a simple quantitative model of Schumpeterian economic dynamics. New goods and services are endogenously produced through combinations of existing goods. As soon as new goods enter the market, they may compete against already existing goods. In other words, new products can have destructive effects on existing goods. As a result of this competition mechanism, existing goods may be driven out from the market—often causing cascades of secondary defects (Schumpeterian gales of destruction). The model leads to generic dynamics characterized by phases of relative economic stability followed by phases of massive restructuring of markets—which could be interpreted as Schumpeterian business 'cycles'. Model time series of product diversity and productivity reproduce several stylized facts of economics time series on long timescales, such as GDP or business failures, including non-Gaussian fat tailed distributions and volatility clustering. The model is phrased in an open, non-equilibrium setup which can be understood as a self-organized critical system. Its diversity dynamics can be understood by the time-varying topology of the active production networks.

  5. Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps

    Science.gov (United States)

    Nicholas, Jennifer; Christensen, Helen

    2016-01-01

    Background For many mental health conditions, mobile health apps offer the ability to deliver information, support, and intervention outside the clinical setting. However, there are difficulties with the use of a commercial app store to distribute health care resources, including turnover of apps, irrelevance of apps, and discordance with evidence-based practice. Objective The primary aim of this study was to quantify the longevity and rate of turnover of mental health apps within the official Android and iOS app stores. The secondary aim was to quantify the proportion of apps that were clinically relevant and assess whether the longevity of these apps differed from clinically nonrelevant apps. The tertiary aim was to establish the proportion of clinically relevant apps that included claims of clinical effectiveness. We performed additional subgroup analyses using additional data from the app stores, including search result ranking, user ratings, and number of downloads. Methods We searched iTunes (iOS) and the Google Play (Android) app stores each day over a 9-month period for apps related to depression, bipolar disorder, and suicide. We performed additional app-specific searches if an app no longer appeared within the main search. Results On the Android platform, 50% of the search results changed after 130 days (depression), 195 days (bipolar disorder), and 115 days (suicide). Search results were more stable on the iOS platform, with 50% of the search results remaining at the end of the study period. Approximately 75% of Android and 90% of iOS apps were still available to download at the end of the study. We identified only 35.3% (347/982) of apps as being clinically relevant for depression, of which 9 (2.6%) claimed clinical effectiveness. Only 3 included a full citation to a published study. Conclusions The mental health app environment is volatile, with a clinically relevant app for depression becoming unavailable to download every 2.9 days. This poses

  6. Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps.

    Science.gov (United States)

    Larsen, Mark Erik; Nicholas, Jennifer; Christensen, Helen

    2016-08-09

    For many mental health conditions, mobile health apps offer the ability to deliver information, support, and intervention outside the clinical setting. However, there are difficulties with the use of a commercial app store to distribute health care resources, including turnover of apps, irrelevance of apps, and discordance with evidence-based practice. The primary aim of this study was to quantify the longevity and rate of turnover of mental health apps within the official Android and iOS app stores. The secondary aim was to quantify the proportion of apps that were clinically relevant and assess whether the longevity of these apps differed from clinically nonrelevant apps. The tertiary aim was to establish the proportion of clinically relevant apps that included claims of clinical effectiveness. We performed additional subgroup analyses using additional data from the app stores, including search result ranking, user ratings, and number of downloads. We searched iTunes (iOS) and the Google Play (Android) app stores each day over a 9-month period for apps related to depression, bipolar disorder, and suicide. We performed additional app-specific searches if an app no longer appeared within the main search. On the Android platform, 50% of the search results changed after 130 days (depression), 195 days (bipolar disorder), and 115 days (suicide). Search results were more stable on the iOS platform, with 50% of the search results remaining at the end of the study period. Approximately 75% of Android and 90% of iOS apps were still available to download at the end of the study. We identified only 35.3% (347/982) of apps as being clinically relevant for depression, of which 9 (2.6%) claimed clinical effectiveness. Only 3 included a full citation to a published study. The mental health app environment is volatile, with a clinically relevant app for depression becoming unavailable to download every 2.9 days. This poses challenges for consumers and clinicians seeking relevant

  7. Different approaches of symbolic dynamics to quantify heart rate complexity.

    Science.gov (United States)

    Cysarz, Dirk; Porta, Alberto; Montano, Nicola; Van Leeuwen, Peter; Kurths, Jürgen; Wessel, Niels

    2013-01-01

    The analysis of symbolic dynamics applied to physiological time series is able to retrieve information about dynamical properties of the underlying system that cannot be gained with standard methods such as spectral analysis. Different approaches for the transformation of the original time series to the symbolic time series have been proposed. Yet the differences between the approaches are unknown. In this study three different transformation methods are investigated: (1) symbolization according to the deviation from the average time series, (2) symbolization according to several equidistant levels between the minimum and maximum of the time series, (3) binary symbolization of the first derivative of the time series. Each method was applied to the cardiac interbeat interval series RR(i) and its difference ΔRR(i) of 17 healthy subjects obtained during head-up tilt testing. The symbolic dynamics of each method is analyzed by means of the occurrence of short sequences ('words') of length 3. The occurrence of words is grouped according to words without variations of the symbols (0V%), words with one variation (1V%), two like variations (2LV%) and two unlike variations (2UV%). Linear regression analysis showed that for method 1, 0V%, 1V%, 2LV% and 2UV% changed with increasing tilt angle. For method 2, 0V%, 2LV% and 2UV% changed with increasing tilt angle, and method 3 showed changes for 0V% and 1V%. In conclusion, all methods are capable of reflecting changes of the cardiac autonomic nervous system during head-up tilt. All methods show that even the analysis of very short symbolic sequences is capable of tracking changes of the cardiac autonomic regulation during head-up tilt testing.
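
    The word-family statistics described above can be sketched as follows (an illustrative implementation, not the authors' code; the choice of 6 quantization levels for method 2 and the synthetic RR series are assumptions):

```python
# Illustrative sketch: equidistant-level symbolization (method 2 in the abstract) and
# classification of 3-symbol words into the 0V, 1V, 2LV and 2UV families.
import numpy as np

def symbolize_levels(rr, n_levels=6):
    """Map each sample to one of n_levels equidistant bins between min and max."""
    edges = np.linspace(rr.min(), rr.max(), n_levels + 1)
    return np.digitize(rr, edges[1:-1])

def word_families(symbols):
    """Percentage of length-3 words with 0, 1, two like, and two unlike variations."""
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2UV": 0}
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1
        elif d1 * d2 > 0:
            counts["2LV"] += 1
        else:
            counts["2UV"] += 1
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

rr = np.random.default_rng(1).normal(800, 50, 300)  # synthetic RR intervals in ms
print(word_families(symbolize_levels(rr)))
```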

  8. Quantified Differential Dynamic Logic for Distributed Hybrid Systems

    Science.gov (United States)

    2010-05-01

    Other process-algebraic approaches, such as χ [23], have been developed for modeling and simulation, but verification still requires manual semantic reasoning. Terms are written f(~s) in vectorial notation, and ~s = ~t denotes element-wise equality. The formulas of QdL are defined as in first-order dynamic logic, and QHPs behave like a Kleene algebra with tests [14]. QHPs are defined by a grammar in which α and β are QHPs, θ is a term, i is a variable of sort C, and f is a function symbol.

  9. Self-Similar Dynamics of a Magnetized Polytropic Gas

    CERN Document Server

    Wang, Wei-Gang

    2007-01-01

    In broad astrophysical contexts of large-scale gravitational collapses and outflows and as a basis for various further astrophysical applications, we formulate and investigate a theoretical problem of self-similar MHD for a non-rotating polytropic gas of quasi-spherical symmetry permeated by a completely random magnetic field. We derive two coupled nonlinear MHD ordinary differential equations (ODEs), examine properties of the magnetosonic critical curve, obtain various asymptotic and global semi-complete similarity MHD solutions, and qualify the applicability of our results. Unique to a magnetized gas cloud, a novel asymptotic MHD solution for a collapsing core is established. Physically, the similarity MHD inflow towards the central dense core proceeds in characteristic manners before the gas material eventually encounters a strong radiating MHD shock upon impact onto the central compact object. Sufficiently far away from the central core region enshrouded by such an MHD shock, we derive regular asymptotic ...

  10. Self-similar dynamics of a magnetized polytropic gas

    Science.gov (United States)

    Wang, Wei-Gang; Lou, Yu-Qing

    2007-10-01

    In broad astrophysical contexts of large-scale gravitational collapses and outflows and as a basis for various further astrophysical applications, we formulate and investigate a theoretical problem of self-similar magnetohydrodynamics (MHD) for a non-rotating polytropic gas of quasi-spherical symmetry permeated by a completely random magnetic field. Within this framework, we derive two coupled nonlinear MHD ordinary differential equations (ODEs), examine properties of the magnetosonic critical curve, obtain various asymptotic and global semi-complete similarity MHD solutions, and qualify the applicability of our results. Unique to a magnetized gas cloud, a novel asymptotic MHD solution for a collapsing core is established. Physically, the similarity MHD inflow towards the central dense core proceeds in characteristic manners before the gas material eventually encounters a strong radiating MHD shock upon impact onto the central compact object. Sufficiently far away from the central core region enshrouded by such an MHD shock, we derive regular asymptotic behaviours. We study asymptotic solution behaviours in the vicinity of the magnetosonic critical curve and determine smooth MHD eigensolutions across this curve. Numerically, we construct global semi-complete similarity MHD solutions that cross the magnetosonic critical curve zero, one, and two times. For comparison, counterpart solutions in the case of isothermal unmagnetized and magnetized gas flows are demonstrated in the present MHD framework under nearly isothermal and weakly magnetized conditions. For a polytropic index γ=1.25 or a strong magnetic field, different solution behaviours emerge. With a strong magnetic field, there exist semi-complete similarity solutions crossing the magnetosonic critical curve only once, and the MHD counterpart of the expansion-wave collapse solution disappears. Also in the polytropic case of γ=1.25, we no longer observe the trend in the speed-density phase diagram of finding

  11. Dynamics and applicability of the similarity renormalization group

    Energy Technology Data Exchange (ETDEWEB)

    Launey, K D; Dytrych, T; Draayer, J P [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 (United States); Popa, G, E-mail: kristina@baton.phys.lsu.edu [Department of Physics and Astronomy, Ohio University, Zanesville, OH 43701 (United States)

    2012-01-13

    The similarity renormalization group (SRG) concept (or flow equations methodology) is studied with a view toward the renormalization of nucleon-nucleon interactions for ab initio shell-model calculations. For a general flow, we give quantitative measures, in the framework of spectral distribution theory, for the strength of the SRG-induced higher order (many-body) terms of an evolved interaction. Specifically, we show that there is a hierarchy among the terms, with those of the lowest particle rank being the most important. This feature is crucial for maintaining the unitarity of SRG transformations and key to the method's applicability. (paper)

  12. Quantifying the dynamic of OSA brain using multifractal formalism: A novel measure for sleep fragmentation.

    Science.gov (United States)

    Raiesdana, Somayeh

    2017-01-01

    It is thought that the critical brain dynamics in sleep is modulated during frequent periods of wakefulness. This paper utilizes the capacity of EEG-based scaling analysis to quantify sleep fragmentation in patients with obstructive sleep apnea. Scale-free (fractal) behavior refers to a state where no characteristic scale dominates the dynamics of the underlying process, which is evident as long-range correlations in a time series. Here, the multiscaling (multifractal) spectrum is utilized to quantify the disturbed dynamics of an OSA brain with fragmented sleep. The whole-night multichannel sleep EEG recordings of 18 subjects were employed to compute and quantify variable power-law long-range correlations and singularity spectra. Based on this characteristic, a new marker for sleep fragmentation named "scaling-based sleep fragmentation" was introduced. This measure takes into account the sleep run length and stage transition quality within a fuzzy inference system to improve decisions made on sleep fragmentation. The proposed index was implemented, validated with sleepiness parameters and compared to some common indexes including the sleep fragmentation index, arousal index, sleep diversity index, and sleep efficiency index. Correlations were almost significant, suggesting that the sleep-characterizing measure, based on the singularity spectrum range, could properly detect fragmentations and quantify their rate. After experimental validation, this method could serve as an alternative for quantifying sleep fragmentation in clinical practice. Control of sleep fragmentation and, subsequently, suppression of excessive daytime sleepiness is a promising outlook of this line of research.

  13. Market dynamics immediately before and after financial shocks: Quantifying the Omori, productivity, and Bath laws.

    Science.gov (United States)

    Petersen, Alexander M; Wang, Fengzhong; Havlin, Shlomo; Stanley, H Eugene

    2010-09-01

    We study the cascading dynamics immediately before and immediately after 219 market shocks. We define the time of a market shock Tc to be the time for which the market volatility V(Tc) has a peak that exceeds a predetermined threshold. The cascade of high volatility "aftershocks" triggered by the "main shock" is quantitatively similar to earthquakes and solar flares, which have been described by three empirical laws: the Omori law, the productivity law, and the Bath law. We analyze the most traded 531 stocks in U.S. markets during the 2 yr period of 2001-2002 at the 1 min time resolution. We find quantitative relations between the main shock magnitude M≡log10 V(Tc) and the parameters quantifying the decay of volatility aftershocks as well as the volatility preshocks. We also find that stocks with larger trading activity react more strongly and more quickly to market shocks than stocks with smaller trading activity. Our findings characterize the typical volatility response conditional on M, both at the market and the individual stock scale. We argue that there is potential utility in these three statistical quantitative relations with applications in option pricing and volatility trading.

  14. Market dynamics immediately before and after financial shocks: Quantifying the Omori, productivity, and Bath laws

    Science.gov (United States)

    Petersen, Alexander M.; Wang, Fengzhong; Havlin, Shlomo; Stanley, H. Eugene

    2010-09-01

    We study the cascading dynamics immediately before and immediately after 219 market shocks. We define the time of a market shock Tc to be the time for which the market volatility V(Tc) has a peak that exceeds a predetermined threshold. The cascade of high volatility “aftershocks” triggered by the “main shock” is quantitatively similar to earthquakes and solar flares, which have been described by three empirical laws—the Omori law, the productivity law, and the Bath law. We analyze the most traded 531 stocks in U.S. markets during the 2 yr period of 2001-2002 at the 1 min time resolution. We find quantitative relations between the main shock magnitude M≡log10V(Tc) and the parameters quantifying the decay of volatility aftershocks as well as the volatility preshocks. We also find that stocks with larger trading activity react more strongly and more quickly to market shocks than stocks with smaller trading activity. Our findings characterize the typical volatility response conditional on M , both at the market and the individual stock scale. We argue that there is potential utility in these three statistical quantitative relations with applications in option pricing and volatility trading.
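
    To make the Omori-law part of the analysis concrete, the sketch below fits a power-law decay, n(t) ~ t^(-Ω), to the rate of high-volatility events following a shock; the threshold-exceedance framing, logarithmic binning and synthetic event times are illustrative assumptions, not the authors' procedure:

```python
# Hedged sketch: log-log regression of the aftershock rate against time elapsed since
# the main shock (binning and synthetic data are illustrative choices).
import numpy as np

def omori_exponent(event_times, t_c, t_max, n_bins=20):
    """Fit n(t) ~ t**(-omega) to the rate of events occurring after time t_c."""
    dt = np.asarray(event_times) - t_c
    dt = dt[(dt > 0) & (dt <= t_max)]
    bins = np.logspace(np.log10(dt.min()), np.log10(t_max), n_bins + 1)
    counts, edges = np.histogram(dt, bins=bins)
    rates = counts / np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = rates > 0
    slope, _ = np.polyfit(np.log(centers[keep]), np.log(rates[keep]), 1)
    return -slope  # Omori exponent omega

# Synthetic example: event times drawn with a rate decaying roughly as t**(-0.7).
rng = np.random.default_rng(2)
t = np.sort(rng.power(0.3, 5000) * 390)  # density ~ t**(-0.7) on (0, 390] minutes
print(omori_exponent(t, t_c=0.0, t_max=390.0))
```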

  15. Dynamic community detection based on network structural perturbation and topological similarity

    Science.gov (United States)

    Wang, Peizhuo; Gao, Lin; Ma, Xiaoke

    2017-01-01

    Community detection in dynamic networks has been extensively studied since it sheds light on the structure-function relation of the overall complex systems. Recently, it has been demonstrated that the structural perturbation in static networks is excellent in characterizing the topology. In order to investigate the structural perturbation theory in dynamic networks, we extend the theory by considering the dynamic variation information between networks at consecutive times. Then a novel similarity is proposed by combining structural perturbation and topological features. Finally, we present an evolutionary clustering algorithm to detect dynamic communities under the temporal smoothness framework. Experimental results on both artificial and real dynamic networks demonstrate that the proposed similarity is promising in dynamic community detection since it improves the clustering accuracy compared with state-of-the-art methods, indicating the superiority of the presented similarity measure.

  16. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    Science.gov (United States)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient compared with other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.
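
    The variance-decomposition idea behind such a framework can be illustrated with a bare-bones two-factor ANOVA over a grid of two algorithm-parameter settings (a hedged sketch with assumed synthetic performance values, not the authors' framework, which handles more parameters and dynamic search stages):

```python
# Minimal sketch: ANOVA-style variance decomposition of a performance metric measured on
# a grid of two parameter settings, separating main effects from their interaction.
import numpy as np

def anova_two_way(perf):
    """perf[i, j] = performance at level i of parameter A and level j of parameter B."""
    grand = perf.mean()
    a_eff = perf.mean(axis=1) - grand            # main effect of parameter A
    b_eff = perf.mean(axis=0) - grand            # main effect of parameter B
    interaction = perf - grand - a_eff[:, None] - b_eff[None, :]
    total = ((perf - grand) ** 2).sum()
    return {
        "A": (a_eff ** 2).sum() * perf.shape[1] / total,
        "B": (b_eff ** 2).sum() * perf.shape[0] / total,
        "AxB": (interaction ** 2).sum() / total,
    }

# Synthetic grid: performance depends on both parameters and on their interaction.
a = np.linspace(0, 1, 5)[:, None]
b = np.linspace(0, 1, 4)[None, :]
perf = 2 * a + b + 1.5 * a * b
print(anova_two_way(perf))  # fractions of total variance, summing to 1
```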

  17. Quantifying the interplay between environmental and social effects on aggregated-fish dynamics

    CERN Document Server

    Capello, Manuela; Cotel, Pascal; Deneubourg, Jean-Louis; Dagorn, Laurent

    2011-01-01

    Demonstrating and quantifying the respective roles of social interactions and external stimuli governing fish dynamics is key to understanding fish spatial distribution. If seminal studies have contributed to our understanding of fish spatial organization in schools, little experimental information is available on fish in their natural environment, where aggregations often occur in the presence of spatial heterogeneities. Here, we applied novel modeling approaches coupled to accurate acoustic tracking for studying the dynamics of a group of gregarious fish in a heterogeneous environment. To this purpose, we acoustically tracked with submeter resolution the positions of twelve small pelagic fish (Selar crumenophthalmus) in the presence of an anchored floating object, constituting a point of attraction for several fish species. We constructed a field-based model for aggregated-fish dynamics, deriving effective interactions for both social and external stimuli from experiments. We tuned the model parameters that...

  18. TMS-evoked changes in brain-state dynamics quantified by using EEG data.

    Science.gov (United States)

    Mutanen, Tuomas; Nieminen, Jaakko O; Ilmoniemi, Risto J

    2013-01-01

    To improve our understanding of the combined transcranial magnetic stimulation (TMS) and electroencephalography (EEG) method in general, it is important to study how the dynamics of the TMS-modulated brain activity differs from the dynamics of spontaneous activity. In this paper, we introduce two quantitative measures based on EEG data, called mean state shift (MSS) and state variance (SV), for evaluating the TMS-evoked changes in the brain-state dynamics. MSS quantifies the immediate TMS-elicited change in the brain state, whereas SV shows whether the rate at which the brain state changes is modulated by TMS. We report a statistically significant increase for a period of 100-200 ms after the TMS pulse in both MSS and SV at the group level. This indicates that the TMS-modulated brain state differs from the spontaneous one. Moreover, the TMS-modulated activity is more vigorous than the natural activity.

  19. Quantifying the dynamics of emotional expressions in family therapy of patients with anorexia nervosa.

    Science.gov (United States)

    Pezard, Laurent; Doba, Karyn; Lesne, Annick; Nandrino, Jean-Louis

    2017-03-23

    Emotional interactions have been considered dynamical processes involved in the affective life of humans and their disturbances may induce mental disorders. Most studies of emotional interactions have focused on dyadic behaviors or self-reports of emotional states but neglected the dynamical processes involved in family therapy. The main objective of this study is to quantify the dynamics of emotional expressions and their changes using the family therapy of patients with anorexia nervosa as an example. Nonlinear methods characterize the variability of the dynamics at the level of the whole therapeutic system and reciprocal influence between the participants during family therapy. Results show that the variability of the dynamics is higher at the end of the therapy than at the beginning. The reciprocal influences between therapist and each member of the family and between mother and patient decrease with the course of family therapy. Our results support the development of new interpersonal strategies of emotion regulation during family therapy. The quantification of emotional dynamics can help understanding the emotional processes underlying psychopathology and evaluating quantitatively the changes achieved by the therapeutic intervention.

  20. Linking Ventilation Heterogeneity Quantified via Hyperpolarized 3He MRI to Dynamic Lung Mechanics and Airway Hyperresponsiveness.

    Science.gov (United States)

    Lui, Justin K; Parameswaran, Harikrishnan; Albert, Mitchell S; Lutchen, Kenneth R

    2015-01-01

    Advancements in hyperpolarized helium-3 MRI (HP 3He-MRI) have introduced the ability to render and quantify ventilation patterns throughout the anatomic regions of the lung. The goal of this study was to establish how ventilation heterogeneity relates to the dynamic changes in mechanical lung function and airway hyperresponsiveness in asthmatic subjects. In four healthy and nine mild-to-moderate asthmatic subjects, we measured dynamic lung resistance and lung elastance from 0.1 to 8 Hz via a broadband ventilation waveform technique. We quantified ventilation heterogeneity using a recently developed coefficient of variation method from HP 3He-MRI imaging. Dynamic lung mechanics and imaging were performed at baseline, post-challenge, and after a series of five deep inspirations. AHR was measured via the concentration of agonist that elicits a 20% decrease in the subject's forced expiratory volume in one second compared to baseline (PC20) dose. The ventilation coefficient of variation was correlated to low-frequency lung resistance (R = 0.647, P ventilation heterogeneity. Also, the degree of AHR appears to be dependent on the degree to which baseline airway constriction creates baseline ventilation heterogeneity. HP 3He-MRI imaging may be a powerful predictor of the degree of AHR and in tracking the efficacy of therapy.

  1. Quantifying humpback whale song sequences to understand the dynamics of song exchange at the ocean basin scale.

    Science.gov (United States)

    Garland, Ellen C; Noad, Michael J; Goldizen, Anne W; Lilley, Matthew S; Rekdahl, Melinda L; Garrigue, Claire; Constantine, Rochelle; Daeschler Hauser, Nan; Poole, M Michael; Robbins, Jooke

    2013-01-01

    Humpback whales have a continually evolving vocal sexual display, or "song," that appears to undergo both evolutionary and "revolutionary" change. All males within a population adhere to the current content and arrangement of the song. Populations within an ocean basin share similarities in their songs; this sharing is complex as multiple variations of the song (song types) may be present within a region at any one time. To quantitatively investigate the similarity of song types, songs were compared at both the individual singer and population level using the Levenshtein distance technique and cluster analysis. The highly stereotyped sequences of themes from the songs of 211 individuals from populations within the western and central South Pacific region from 1998 through 2008 were grouped together based on the percentage of song similarity, and compared to qualitatively assigned song types. The analysis produced clusters of highly similar songs that agreed with previous qualitative assignments. Each cluster contained songs from multiple populations and years, confirming the eastward spread of song types and their progressive evolution through the study region. Quantifying song similarity and exchange will assist in understanding broader song dynamics and contribute to the use of vocal displays as population identifiers.
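
    A minimal sketch of the sequence comparison step is shown below (the theme sequences and the conversion of the Levenshtein distance into a percentage similarity are illustrative assumptions, not the study's exact protocol):

```python
# Sketch: Levenshtein distance between two singers' theme sequences, normalized into a
# percentage similarity that could feed a clustering of songs into song types.
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    return 100.0 * (1.0 - levenshtein(a, b) / max(len(a), len(b)))

song_1 = ["A", "B", "C", "C", "D", "E"]   # hypothetical stereotyped theme sequences
song_2 = ["A", "B", "C", "D", "D", "E"]
print(similarity(song_1, song_2))
```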

  2. Dynamic similarity in titanosaur sauropods: ichnological evidence from the Fumanya dinosaur tracksite (southern Pyrenees).

    Directory of Open Access Journals (Sweden)

    Bernat Vila

    Full Text Available The study of a small sauropod trackway from the Late Cretaceous Fumanya tracksite (southern Pyrenees, Catalonia) and further comparisons with larger trackways from the same locality suggest a causative relationship between gait, gauge, and body proportions of the respective titanosaur trackmakers. This analysis, conducted in the context of scaling predictions and using geometric similarity and dynamic similarity hypotheses, reveals similar Froude numbers and relative stride lengths for both small and large trackmakers from Fumanya. Evidence for geometric similarity in these trackways suggests that titanosaurs of different sizes moved in a dynamically similar way, probably using an amble gait. The wide gauge condition reported in trackways of small and large titanosaurs implies that they possessed similar body (trunk and limbs) proportions despite large differences in body size. These results strengthen the hypothesis that titanosaurs possessed a distinctive suite of anatomical characteristics that are well reflected in their tracks and trackways.

  3. Dynamic Similarity in Titanosaur Sauropods: Ichnological Evidence from the Fumanya Dinosaur Tracksite (Southern Pyrenees)

    Science.gov (United States)

    Vila, Bernat; Oms, Oriol; Galobart, Àngel; Bates, Karl T.; Egerton, Victoria M.; Manning, Phillip L.

    2013-01-01

    The study of a small sauropod trackway from the Late Cretaceous Fumanya tracksite (southern Pyrenees, Catalonia) and further comparisons with larger trackways from the same locality suggest a causative relationship between gait, gauge, and body proportions of the respective titanosaur trackmakers. This analysis, conducted in the context of scaling predictions and using geometric similarity and dynamic similarity hypotheses, reveals similar Froude numbers and relative stride lengths for both small and large trackmakers from Fumanya. Evidence for geometric similarity in these trackways suggests that titanosaurs of different sizes moved in a dynamically similar way, probably using an amble gait. The wide gauge condition reported in trackways of small and large titanosaurs implies that they possessed similar body (trunk and limbs) proportions despite large differences in body size. These results strengthen the hypothesis that titanosaurs possessed a distinctive suite of anatomical characteristics that are well reflected in their tracks and trackways. PMID:23451221
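
    For illustration, the quantities at the heart of the dynamic similarity argument, relative stride length and Froude number, can be estimated from trackway measurements as sketched below; the hip-height factor of 4x footprint length and Alexander's (1976) speed formula are standard ichnological approximations, and the numbers are made up, none of them taken from this study:

```python
# Hedged sketch: relative stride length and Froude number for a small and a large
# trackmaker, assuming hip height ~ 4 x footprint length and Alexander's speed estimate.
import math

G = 9.81  # gravitational acceleration, m/s^2

def trackway_dynamics(stride_m, footprint_m, hip_factor=4.0):
    """Return (relative stride length, Froude number) estimated from a trackway."""
    h = hip_factor * footprint_m                          # hip height estimate
    v = 0.25 * math.sqrt(G) * stride_m**1.67 * h**-1.17   # Alexander (1976) speed
    froude = v**2 / (G * h)
    return stride_m / h, froude

small = trackway_dynamics(stride_m=1.0, footprint_m=0.30)
large = trackway_dynamics(stride_m=2.4, footprint_m=0.72)
# Geometrically similar trackways give equal relative stride lengths and Froude numbers.
print("small:", small, "large:", large)
```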

  4. Quantifying sediment dynamics on alluvial fans, Iglesia basin, south Central Argentine Andes

    Science.gov (United States)

    Harries, Rebekah; Kirstein, Linda; Whittaker, Alex; Attal, Mikael; Peralta, Silvio

    2017-04-01

    considering factors such as climate storminess and degree of glacial cover in having a dominant control on the variance of sediment released. These findings have significant implications for our ability to invert the fluvial stratigraphy for climatically driven changes in discharge and highlight a need to quantify the impact of sediment dynamics on modern systems so that we may better understand the limitations in applying quantitative models to ancient stratigraphy.

  5. Quantifying terrestrial ecosystem carbon dynamics in the Jinsha watershed, Upper Yangtze, China from 1975 to 2000

    Science.gov (United States)

    Zhao, Shuqing

    2010-01-01

    Quantifying the spatial and temporal dynamics of carbon stocks in terrestrial ecosystems and carbon fluxes between the terrestrial biosphere and the atmosphere is critical to our understanding of regional patterns of carbon budgets. Here we use the General Ensemble biogeochemical Modeling System to simulate the terrestrial ecosystem carbon dynamics in the Jinsha watershed of China’s upper Yangtze basin from 1975 to 2000, based on unique combinations of spatial and temporal dynamics of major driving forces, such as climate, soil properties, nitrogen deposition, and land use and land cover changes. Our analysis demonstrates that the Jinsha watershed ecosystems acted as a carbon sink during the period of 1975–2000, with an average rate of 0.36 Mg/ha/yr, primarily resulting from regional climate variation and local land use and land cover change. Vegetation biomass accumulation accounted for 90.6% of the sink, while soil organic carbon loss before 1992 led to a lower net gain of carbon in the watershed, and after that soils became a small sink. Ecosystem carbon sink/source patterns showed a high degree of spatial heterogeneity. Carbon sinks were associated with forest areas without disturbances, whereas carbon sources were primarily caused by stand-replacing disturbances. It is critical to adequately represent the detailed fast-changing dynamics of land use activities in regional biogeochemical models to determine the spatial and temporal evolution of regional carbon sink/source patterns.

  6. Natural entropy fluctuations discriminate similar looking electric signals emitted from systems of different dynamics

    CERN Document Server

    Varotsos, P A; Skordas, E S; Lazaridou, M S

    2005-01-01

    Complexity measures are introduced that quantify the change of the natural entropy fluctuations at different length scales in time series emitted from systems operating far from equilibrium. They identify impending sudden cardiac death (SD) by analyzing fifteen-minute electrocardiograms, and comparing to those of truly healthy humans (H). These measures seem to be complementary to the ones suggested recently [Phys. Rev. E 70, 011106 (2004)] and altogether enable the classification of individuals into three categories: H, heart disease patients and SD. All the SD individuals, who exhibit critical dynamics, result in a common behavior.

  7. Using GPS technology to quantify human mobility, dynamic contacts and infectious disease dynamics in a resource-poor urban environment.

    Directory of Open Access Journals (Sweden)

    Gonzalo M Vazquez-Prokopec

    Full Text Available Empiric quantification of human mobility patterns is paramount for better urban planning, understanding social network structure and responding to infectious disease threats, especially in light of rapid growth in urbanization and globalization. This need is of particular relevance for developing countries, since they host the majority of the global urban population and are disproportionally affected by the burden of disease. We used Global Positioning System (GPS) data-loggers to track the fine-scale (within city) mobility patterns of 582 residents from two neighborhoods from the city of Iquitos, Peru. We used ∼2.3 million GPS data-points to quantify age-specific mobility parameters and dynamic co-location networks among all tracked individuals. Geographic space significantly affected human mobility, giving rise to highly local mobility kernels. Most (∼80%) movements occurred within 1 km of an individual's home. Potential hourly contacts among individuals were highly irregular and temporally unstructured. Only up to 38% of the tracked participants showed a regular and predictable mobility routine, a sharp contrast to the situation in the developed world. As a case study, we quantified the impact of spatially and temporally unstructured routines on the dynamics of transmission of an influenza-like pathogen within an Iquitos neighborhood. Temporally unstructured daily routines (e.g., not dominated by a single location, such as a workplace, where an individual repeatedly spent significant amount of time) increased an epidemic's final size and effective reproduction number by 20% in comparison to scenarios modeling temporally structured contacts. Our findings provide a mechanistic description of the basic rules that shape human mobility within a resource-poor urban center, and contribute to the understanding of the role of fine-scale patterns of individual movement and co-location in infectious disease dynamics. More generally, this study

  8. Using GPS technology to quantify human mobility, dynamic contacts and infectious disease dynamics in a resource-poor urban environment.

    Science.gov (United States)

    Vazquez-Prokopec, Gonzalo M; Bisanzio, Donal; Stoddard, Steven T; Paz-Soldan, Valerie; Morrison, Amy C; Elder, John P; Ramirez-Paredes, Jhon; Halsey, Eric S; Kochel, Tadeusz J; Scott, Thomas W; Kitron, Uriel

    2013-01-01

    Empiric quantification of human mobility patterns is paramount for better urban planning, understanding social network structure and responding to infectious disease threats, especially in light of rapid growth in urbanization and globalization. This need is of particular relevance for developing countries, since they host the majority of the global urban population and are disproportionally affected by the burden of disease. We used Global Positioning System (GPS) data-loggers to track the fine-scale (within city) mobility patterns of 582 residents from two neighborhoods from the city of Iquitos, Peru. We used ∼2.3 million GPS data-points to quantify age-specific mobility parameters and dynamic co-location networks among all tracked individuals. Geographic space significantly affected human mobility, giving rise to highly local mobility kernels. Most (∼80%) movements occurred within 1 km of an individual's home. Potential hourly contacts among individuals were highly irregular and temporally unstructured. Only up to 38% of the tracked participants showed a regular and predictable mobility routine, a sharp contrast to the situation in the developed world. As a case study, we quantified the impact of spatially and temporally unstructured routines on the dynamics of transmission of an influenza-like pathogen within an Iquitos neighborhood. Temporally unstructured daily routines (e.g., not dominated by a single location, such as a workplace, where an individual repeatedly spent significant amount of time) increased an epidemic's final size and effective reproduction number by 20% in comparison to scenarios modeling temporally structured contacts. Our findings provide a mechanistic description of the basic rules that shape human mobility within a resource-poor urban center, and contribute to the understanding of the role of fine-scale patterns of individual movement and co-location in infectious disease dynamics. More generally, this study emphasizes the need for

  9. Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field

    Directory of Open Access Journals (Sweden)

    Yingmin Li

    2010-01-01

    Full Text Available The reasonability of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement in the similarity analysis of simulated multi-point ground motions and actual seismic array records. Analysis results show that the DTW distance method not only can quantitatively reflect the similarity of a simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition of actual multi-point ground motion fields.
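
    A textbook DTW distance (not the authors' implementation) between two ground-motion records can be sketched as follows; the synthetic, slightly time-shifted test signals are illustrative only:

```python
# Sketch: classic O(n*m) dynamic time warping distance with absolute-difference cost,
# applied to two records sampled at the same rate.
import numpy as np

def dtw_distance(x, y):
    """Accumulated cost of the optimal warping path between series x and y."""
    n, m = len(x), len(y)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

t = np.linspace(0, 10, 500)
rec_a = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
rec_b = np.sin(2 * np.pi * 1.0 * (t - 0.3)) * np.exp(-0.2 * t)  # slightly time-shifted
print(dtw_distance(rec_a, rec_b))
```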

  10. ON THE SELF-SIMILAR SOLUTIONS OF THE MAGNETO-HYDRO-DYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    He Cheng; Xin Zhouping

    2009-01-01

    In this paper, we show that, for the three-dimensional incompressible magneto-hydro-dynamic equations, there exists only the trivial backward self-similar solution in L^p(R^3) for p > 3, under some smallness assumption on either the kinetic energy of the self-similar solution related to the velocity field, or the magnetic field. Second, we construct a class of global unique forward self-similar solutions to the three-dimensional MHD equations with small initial data in some sense, being homogeneous of degree -1 and belonging to some Besov space, or the Lorentz space or pseudo-measure space, as motivated by the work in [5].

  11. Identifying Differences and Similarities in Static and Dynamic Contact Angles between Nanoscale and Microscale Textured Surfaces Using Molecular Dynamics Simulations.

    Science.gov (United States)

    Slovin, Mitchell R; Shirts, Michael R

    2015-07-28

    We quantify some of the effects of patterned nanoscale surface texture on static contact angles, dynamic contact angles, and dynamic contact angle hysteresis using molecular dynamics simulations of a moving Lennard-Jones droplet in contact with a solid surface. We observe static contact angles that change with the introduction of surface texture in a manner consistent with theoretical and experimental expectations. However, we find that the introduction of nanoscale surface texture at the length scale of 5-10 times the fluid particle size does not affect dynamic contact angle hysteresis even though it changes both the advancing and receding contact angles significantly. This result differs significantly from microscale experimental results where dynamic contact angle hysteresis decreases with the addition of surface texture due to an increase in the receding contact angle. Instead, we find that molecular-kinetic theory, previously applied only to nonpatterned surfaces, accurately describes dynamic contact angle and dynamic contact angle hysteresis behavior as a function of terminal fluid velocity. Therefore, at length scales of tens of nanometers, the kinetic phenomena such as contact line pinning observed at larger scales become insignificant in comparison to the effects of molecular fluctuations for moving droplets, even though the static properties are essentially scale-invariant. These findings may have implications for the design of highly hierarchical structures with particular wetting properties. We also find that quantitatively determining the trends observed in this article requires the careful selection of system and analysis parameters in order to achieve sufficient accuracy and precision in calculated contact angles. Therefore, we provide a detailed description of our two-surface, circular-fit approach to calculating static and dynamic contact angles on surfaces with nanoscale texturing.
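
    The circular-fit step mentioned above can be illustrated geometrically (a hedged sketch, not the authors' two-surface implementation): fit a circle to liquid-vapor interface points by an algebraic least-squares (Kasa) fit and read the contact angle off where the circle meets the substrate plane; the synthetic interface points below are assumptions:

```python
# Sketch: algebraic circle fit to droplet interface points and the contact angle at the
# substrate plane z = z_surf, measured through the liquid.
import numpy as np

def fit_circle(x, z):
    """Least-squares (Kasa) circle fit: returns center (x0, z0) and radius R."""
    A = np.column_stack([x, z, np.ones_like(x)])
    b = x**2 + z**2
    (c1, c2, c3), *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, z0 = c1 / 2.0, c2 / 2.0
    return x0, z0, np.sqrt(c3 + x0**2 + z0**2)

def contact_angle_deg(z0, R, z_surf=0.0):
    """Angle between the substrate and the interface tangent at the contact line."""
    return np.degrees(np.arccos(-(z0 - z_surf) / R))

# Synthetic interface: circle of radius 20 centered 5 above the surface (true contact
# angle ~104 degrees), with a little positional noise.
rng = np.random.default_rng(5)
theta = np.linspace(0.2, np.pi - 0.2, 60)
x = 20 * np.cos(theta) + rng.normal(0, 0.2, theta.size)
z = 5 + 20 * np.sin(theta) + rng.normal(0, 0.2, theta.size)
x0, z0, R = fit_circle(x, z)
print(contact_angle_deg(z0, R))
```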

  12. Quantifying the Relationship between Dynamical Cores and Physical Parameterizations by Geostatistical Methods

    Science.gov (United States)

    Yorgun, M. S.; Rood, R. B.

    2010-12-01

    The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. Deterministic weather predictions are often validated with feature-by-feature comparison. Probabilistic weather forecasts and climate projections are evaluated with statistical methods. We seek to develop model evaluation strategies that identify like “objects” - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model. These are the Spectral (Eulerian), which relies on global basis functions, and the Finite Volume (FV), which uses only local information. We introduce the concept of "meteorological realism," that is, do local representations of large-scale phenomena, for example, fronts and orographic precipitation, look like the observations? A follow-on question is, does the representation of these phenomena improve with resolution? Our approach to quantify meteorological realism starts with methods of geospatial statistics. Specifically, we employ variography, a geostatistical method used to measure the spatial continuity of a regionalized variable, and principal component
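
    As a minimal illustration of the variography step (not the authors' workflow; the gridded synthetic field and the distance bins are assumptions), an isotropic empirical semivariogram can be computed as:

```python
# Sketch: empirical semivariogram gamma(h) = 0.5 * E[(z(s) - z(s + h))**2], the basic
# geostatistical measure of spatial continuity used in variography.
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """coords: (N, 2) locations; values: (N,) field; returns (bin centers, semivariance)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)
    gamma = np.array([sq[i, j][(d[i, j] >= lo) & (d[i, j] < hi)].mean()
                      for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])
    return 0.5 * (bin_edges[:-1] + bin_edges[1:]), gamma

# Synthetic example: a smooth field sampled on a coarse grid, plus a little noise.
xx, yy = np.meshgrid(np.linspace(0, 10, 15), np.linspace(0, 10, 15))
coords = np.column_stack([xx.ravel(), yy.ravel()])
z = np.sin(coords[:, 0] / 3.0) + 0.1 * np.random.default_rng(4).normal(size=len(coords))
print(empirical_variogram(coords, z, np.linspace(0.5, 8, 9)))
```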

  13. Quasi-similar decameter emission features appearing in the solar and jovian dynamic spectra

    Science.gov (United States)

    Litvinenko, G. V.; Shaposhnikov, V. E.; Konovalenko, A. A.; Zakharenko, V. V.; Panchenko, M.; Dorovsky, V. V.; Brazhenko, A. I.; Rucker, H. O.; Vinogradov, V. V.; Melnik, V. N.

    2016-07-01

    We investigate the dynamic spectra of the Sun and jovian decametric radiation obtained by the authors with the radio telescopes UTR-2 and URAN-2 (Kharkov, Poltava, Ukraine). We focus on the similar structures that appear on the dynamic spectra of those objects: S-bursts, drifting pairs, absorption bursts and zebra patterns. Similarity in structures allows us to assume that the plasma processes in the solar corona and in the jovian magnetosphere might have similar properties. We analyze and compare the main parameters of those structures and briefly describe some mechanisms of their generation that have already been discussed in publications. We selected the mechanisms which, in our opinion, most completely and consistently explain the properties of the structures under consideration.

  14. Self-similar solutions for the dynamical condensation of a radiative gas layer

    Science.gov (United States)

    Iwasaki, Kazunari; Tsuribe, Toru

    2008-07-01

    A new self-similar solution describing the dynamical condensation of a radiative gas is investigated under a plane-parallel geometry. The dynamical condensation is caused by thermal instability. The solution is applicable to generic flow with a net cooling rate per unit volume and time ~ ρ^2 T^α, where ρ, T and α are the density, temperature and a free parameter, respectively. Given α, a family of self-similar solutions with one parameter η is found in which the central density and pressure evolve as follows: ρ(x = 0, t) ~ (tc - t)^(-η/(2-α)) and P(x = 0, t) ~ (tc - t)^((1-η)/(1-α)), where tc is the epoch at which the central density becomes infinite. For η ~ 0 the solution describes the isochoric mode, whereas for η ~ 1 the solution describes the isobaric mode. The self-similar solutions exist in the range between the two limits; that is, for 0 < η < 1. We compare the obtained self-similar solutions with the results of one-dimensional hydrodynamical simulations. In a converging flow, the results of the numerical simulations agree well with the self-similar solutions in the high-density limit. Our self-similar solutions are applicable to the formation of interstellar clouds (HI clouds and molecular clouds) by thermal instability.

  15. Recent advances quantifying the large wood dynamics in river basins: New methods and remaining challenges

    Science.gov (United States)

    Ruiz-Villanueva, Virginia; Piégay, Hervé; Gurnell, Angela A.; Marston, Richard A.; Stoffel, Markus

    2016-09-01

    Large wood is an important physical component of woodland rivers and significantly influences river morphology. It is also a key component of stream ecosystems. However, large wood is also a source of risk for human activities as it may damage infrastructure, block river channels, and induce flooding. Therefore, the analysis and quantification of large wood and its mobility are crucial for understanding and managing wood in rivers. As the number of large-wood-related studies by researchers, river managers, and stakeholders increases, documentation of commonly used and newly available techniques and their effectiveness has become increasingly relevant. Important data and knowledge have been obtained from the application of very different approaches and have generated a significant body of valuable information representative of different environments. This review brings a comprehensive qualitative and quantitative summary of recent advances regarding the different processes involved in large wood dynamics in fluvial systems, including wood budgeting and wood mechanics. First, some key definitions and concepts are introduced. Second, advances in quantifying large wood dynamics are reviewed; in particular, how measurements and modeling can be combined to integrate our understanding of how large wood moves through and is retained within river systems. Throughout, we present a quantitative and integrated meta-analysis compiled from different studies and geographical regions. Finally, we conclude by highlighting areas of particular research importance and their likely future trajectories, and we consider a particularly underresearched area so as to stress the future challenges for large wood research.

  16. Quantifying the size-resolved dynamics of indoor bioaerosol transport and control.

    Science.gov (United States)

    Kunkel, S A; Azimi, P; Zhao, H; Stark, B C; Stephens, B

    2017-09-01

    Understanding the bioaerosol dynamics of droplets and droplet nuclei emitted during respiratory activities is important for understanding how infectious diseases are transmitted and potentially controlled. To this end, we conducted experiments to quantify the size-resolved dynamics of indoor bioaerosol transport and control in an unoccupied apartment unit operating under four different HVAC particle filtration conditions. Two model organisms (Escherichia coli K12 and bacteriophage T4) were aerosolized under alternating low and high flow rates to roughly represent constant breathing and periodic coughing. Size-resolved aerosol sampling and settle plate swabbing were conducted in multiple locations. Samples were analyzed by DNA extraction and quantitative polymerase chain reaction (qPCR). DNA from both organisms was detected during all test conditions in all air samples up to 7 m away from the source, but decreased in magnitude with the distance from the source. A greater fraction of T4 DNA was recovered from the aerosol size fractions smaller than 1 μm than E. coli K12 at all air sampling locations. Higher efficiency HVAC filtration also reduced the amount of DNA recovered in air samples and on settle plates located 3-7 m from the source. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Quantifying effects of abiotic and biotic drivers on community dynamics with multivariate autoregressive (MAR) models.

    Science.gov (United States)

    Hampton, Stephanie E; Holmes, Elizabeth E; Scheef, Lindsay P; Scheuerell, Mark D; Katz, Stephen L; Pendleton, Daniel E; Ward, Eric J

    2013-12-01

    Long-term ecological data sets present opportunities for identifying drivers of community dynamics and quantifying their effects through time series analysis. Multivariate autoregressive (MAR) models are well known in many other disciplines, such as econometrics, but widespread adoption of MAR methods in ecology and natural resource management has been much slower despite some widely cited ecological examples. Here we review previous ecological applications of MAR models and highlight their ability to identify abiotic and biotic drivers of population dynamics, as well as community-level stability metrics, from long-term empirical observations. Thus far, MAR models have been used mainly with data from freshwater plankton communities; we examine the obstacles that may be hindering adoption in other systems and suggest practical modifications that will improve MAR models for broader application. Many of these modifications are already well known in other fields in which MAR models are common, although they are frequently described under different names. In an effort to make MAR models more accessible to ecologists, we include a worked example using recently developed R packages (MAR1 and MARSS), freely available and open-access software.
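
    The abstract's worked example uses the R packages MAR1 and MARSS; purely for illustration, a bare-bones MAR(1) fit by conditional least squares can be sketched in Python as below (the model form x_t = A + B x_{t-1} + C u_t + e_t follows the standard MAR setup, while the two-species synthetic data and the seasonal covariate are assumptions):

```python
# Sketch: least-squares estimation of a MAR(1) model, x_t = A + B x_{t-1} + C u_t + e_t,
# where B is the community interaction matrix and u_t holds abiotic covariates.
import numpy as np

def fit_mar1(x, u=None):
    """x: (T, n_species) log-abundances; u: optional (T, n_covariates) drivers."""
    y = x[1:]                                   # responses
    design = [np.ones((len(y), 1)), x[:-1]]     # intercepts A and lagged states (B)
    if u is not None:
        design.append(u[1:])                    # covariate effects (C)
    X = np.hstack(design)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef.T  # one row per species, columns ordered [A | B | C]

# Two interacting "species" driven by a seasonal covariate (synthetic data).
rng = np.random.default_rng(3)
T, B_true = 200, np.array([[0.6, -0.2], [0.1, 0.7]])
u = np.sin(2 * np.pi * np.arange(T) / 12)[:, None]
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = x[t - 1] @ B_true.T + 0.3 * u[t] + rng.normal(0, 0.1, 2)
print(fit_mar1(x, u))  # recovered coefficients should be close to B_true and 0.3
```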

  18. Stretching of Dynamic Mathematical Symbols Taking Care of Similarity and Geometric Characterization

    Directory of Open Access Journals (Sweden)

    Abdelouahad Bayar

    2017-01-01

    Full Text Available The scientific document industry has made important progress at every step, especially in processing and presentation. However, it is still confronted with major obstacles when manipulating documents containing mathematical formulas that are built from dynamic (variable-sized) mathematical symbols and must take care of optical scaling. Normally, in processing documents, the composition of mathematical formulas is based on static fonts. The support of dynamic mathematical symbols requires, in addition to the choice of an adequate type of font and text formatting tools, the development of a mathematical model that allows parametrizing the symbols to support optical scaling. In this paper, we present a method to stretch Bézier curves representing symbols without losing the similarity and the geometric characteristics of the symbols concerned. The curves of the dynamic symbols to be parametrized are designed on the basis of a new method that allows measuring similarity. The model can be extended later for the development of dynamic fonts respecting the rules of Arabic calligraphy. The method to evaluate similarity can also be easily adapted to other fields such as image processing.

  19. Predicting the similarity between expressive performances of music from measurements of tempo and dynamics

    Science.gov (United States)

    Timmers, Renee

    2005-01-01

    Measurements of tempo and dynamics from audio files or MIDI data are frequently used to get insight into a performer's contribution to music. The measured variations in tempo and dynamics are often represented in different formats by different authors. Few systematic comparisons have been made between these representations. Moreover, it is unknown what data representation comes closest to subjective perception. The reported study tests the perceptual validity of existing data representations by comparing their ability to explain the subjective similarity between pairs of performances. In two experiments, 40 participants rated the similarity between performances of a Chopin prelude and a Mozart sonata. Models based on different representations of the tempo and dynamics of the performances were fitted to these similarity ratings. The results favor other data representations of performances than generally used, and imply that comparisons between performances are made perceptually in a different way than often assumed. For example, the best fit was obtained with models based on absolute tempo and absolute tempo times loudness, while conventional models based on normalized variations, or on correlations between tempo profiles and loudness profiles, did not explain the similarity ratings well.

  20. Quantifying the Relationship between Dynamical Cores and Physical Parameterizations by Object-Based Methods

    Science.gov (United States)

    Yorgun, M. S.; Rood, R. B.

    2011-12-01

    The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. Deterministic weather predictions are often validated with feature-by-feature comparison. Probabilistic weather forecasts and climate projects are evaluated with statistical methods. We seek to develop model evaluation strategies that identify like "objects" - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model. These are the Spectral (Eulerian), which relies on global basis functions and the Finite Volume (FV), which uses only local information. We introduce the concept of "meteorological realism" that is, do local representations of large-scale phenomena, for example, fronts and orographic precipitation, look like the observations? A follow on question is, does the representation of these phenomena improve with resolution? Our approach to quantify meteorological realism starts with identification and isolation of key features of orographic precipitation that are represented differently by Spectral and FV models, using objective pattern recognition methods. Then we aim to quantitatively compare

  1. A new look at the Dynamic Similarity Hypothesis: the importance of swing phase

    Directory of Open Access Journals (Sweden)

    David A. Raichlen

    2013-08-01

    The Dynamic Similarity Hypothesis (DSH) suggests that when animals of different size walk at similar Froude numbers (equal ratios of inertial and gravitational forces) they will use similar size-corrected gaits. This application of similarity theory to animal biomechanics has contributed to fundamental insights in the mechanics and evolution of a diverse set of locomotor systems. However, despite its popularity, many mammals fail to walk with dynamically similar stride lengths, a key element of gait that determines spontaneous speed and energy costs. Here, we show that the applicability of the DSH is dependent on the inertial forces examined. In general, the inertial forces are thought to be the centripetal force of the inverted pendulum model of stance phase, determined by the length of the limb. If instead we model inertial forces as the centripetal force of the limb acting as a suspended pendulum during swing phase (determined by limb center of mass position), the DSH for stride length variation is fully supported. Thus, the DSH shows that inter-specific differences in spatial kinematics are tied to the evolution of limb mass distribution patterns. Selection may act on morphology to produce a given stride length, or alternatively, stride length may be a “spandrel” of selection acting on limb mass distribution.

  2. Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals, Self-Similarity and the Limits of Prediction

    Directory of Open Access Journals (Sweden)

    Geoff Boeing

    2016-11-01

    Full Text Available Nearly all nontrivial real-world systems are nonlinear dynamical systems. Chaos describes certain nonlinear dynamical systems that have a very sensitive dependence on initial conditions. Chaotic systems are always deterministic and may be very simple, yet they produce completely unpredictable and divergent behavior. Systems of nonlinear equations are difficult to solve analytically, and scientists have relied heavily on visual and qualitative approaches to discover and analyze the dynamics of nonlinearity. Indeed, few fields have drawn as heavily from visualization methods for their seminal innovations: from strange attractors, to bifurcation diagrams, to cobweb plots, to phase diagrams and embedding. Although the social sciences are increasingly studying these types of systems, seminal concepts remain murky or loosely adopted. This article has three aims. First, it argues for several visualization methods to critically analyze and understand the behavior of nonlinear dynamical systems. Second, it uses these visualizations to introduce the foundations of nonlinear dynamics, chaos, fractals, self-similarity and the limits of prediction. Finally, it presents Pynamical, an open-source Python package to easily visualize and explore nonlinear dynamical systems’ behavior.
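
    The record above describes bifurcation diagrams and related visualizations. The following minimal sketch generates bifurcation-diagram data for the logistic map with plain NumPy/Matplotlib; it deliberately does not assume the API of the Pynamical package mentioned in the abstract.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def logistic_map(x, r):
        """One iteration of the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
        return r * x * (1.0 - x)

    # Sweep the growth parameter r and record the long-run behaviour of the orbit.
    r_values = np.linspace(2.5, 4.0, 1000)
    x = 0.5 * np.ones_like(r_values)

    for _ in range(500):          # discard transients
        x = logistic_map(x, r_values)

    points_r, points_x = [], []
    for _ in range(100):          # keep samples of the attractor
        x = logistic_map(x, r_values)
        points_r.append(r_values)
        points_x.append(x.copy())

    plt.plot(np.concatenate(points_r), np.concatenate(points_x), ",k", alpha=0.25)
    plt.xlabel("growth parameter r")
    plt.ylabel("long-run x")
    plt.title("Bifurcation diagram of the logistic map")
    plt.show()
    ```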

  3. Fluctuation of similarity to detect transitions between distinct dynamical regimes in short time series.

    Science.gov (United States)

    Malik, Nishant; Marwan, Norbert; Zou, Yong; Mucha, Peter J; Kurths, Jürgen

    2014-06-01

    A method to identify distinct dynamical regimes and transitions between those regimes in a short univariate time series was recently introduced [N. Malik et al., Europhys. Lett. 97, 40009 (2012)], employing the computation of fluctuations in a measure of nonlinear similarity based on local recurrence properties. In this work, we describe the details of the analytical relationships between this newly introduced measure and the well-known concepts of attractor dimensions and Lyapunov exponents. We show that the new measure has linear dependence on the effective dimension of the attractor and it measures the variations in the sum of the Lyapunov spectrum. To illustrate the practical usefulness of the method, we identify various types of dynamical transitions in different nonlinear models. We present testbed examples for the new method's robustness against noise and missing values in the time series. We also use this method to analyze time series of social dynamics, specifically an analysis of the US crime record time series from 1975 to 1993. Using this method, we find that dynamical complexity in robberies was influenced by the unemployment rate until the late 1980s. We have also observed a dynamical transition in homicide and robbery rates in the late 1980s and early 1990s, leading to an increase in the dynamical complexity of these rates.
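
    The published similarity measure is not reproduced here, but the rough sketch below illustrates the general idea of tracking a recurrence-based quantity in sliding windows and flagging large fluctuations as candidate regime transitions. The window length, step and recurrence threshold are arbitrary illustrative choices.

    ```python
    import numpy as np

    def recurrence_rate(window, eps):
        """Fraction of point pairs in a window closer than eps (a local recurrence property)."""
        d = np.abs(window[:, None] - window[None, :])
        n = len(window)
        return (np.sum(d < eps) - n) / (n * (n - 1))  # exclude self-pairs

    def similarity_fluctuation(series, win=50, step=10, eps=None):
        """Sliding-window recurrence rates and their fluctuation: a crude proxy for
        detecting transitions between dynamical regimes in a short series."""
        eps = eps if eps is not None else 0.1 * np.std(series)
        rates = np.array([recurrence_rate(series[i:i + win], eps)
                          for i in range(0, len(series) - win, step)])
        fluct = np.abs(np.diff(rates))        # large jumps hint at regime transitions
        return rates, fluct

    # Toy example: noise whose variance changes abruptly halfway through.
    rng = np.random.default_rng(0)
    series = np.concatenate([rng.normal(0, 1.0, 1000), rng.normal(0, 3.0, 1000)])
    rates, fluct = similarity_fluctuation(series)
    print("largest fluctuation at window index:", int(np.argmax(fluct)))
    ```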

  4. Self-Similar Collapse Solutions for Cylindrical Cloud Geometries and Dynamic Equations of State

    CERN Document Server

    Holden, Lisa; Baxter, Benjamin; Fatuzzo, Marco

    2009-01-01

    A self-similar formalism for the study of the gravitational collapse of molecular gas provides an important theoretical framework from which to explore the dynamics of star formation. Motivated by the presence of elongated and filamentary structures observed in giant molecular clouds, we build upon the existing body of work on cylindrical self-similar collapse flows by including dynamic equations of state that are different from the effective equation of state that produces the initial density distribution. We focus primarily on the collapse of initial states for which the gas is at rest and everywhere overdense from its corresponding hydrostatic equilibrium profile by a factor $\\Lambda$, and apply our results toward the analysis of star formation within dense, elongated molecular cores. An important aspect of this work is the determination of the mass infall rates over a range of the parameters which define the overall state of the gas -- the overdensity parameter $\\Lambda$, the index $\\Gamma$ of the static ...

  5. Parallelized TCSPC for dynamic intravital fluorescence lifetime imaging: quantifying neuronal dysfunction in neuroinflammation.

    Directory of Open Access Journals (Sweden)

    Jan Leo Rinnenthal

    Full Text Available Two-photon laser-scanning microscopy has revolutionized our view on vital processes by revealing motility and interaction patterns of various cell subsets in hardly accessible organs (e.g. brain) in living animals. However, current technology is still insufficient to elucidate the mechanisms of organ dysfunction as a prerequisite for developing new therapeutic strategies, since it renders only sparse information about the molecular basis of cellular response within tissues in health and disease. In the context of imaging, Förster resonant energy transfer (FRET) is one of the most adequate tools to probe molecular mechanisms of cell function. As a calibration-free technique, fluorescence lifetime imaging (FLIM) is superior for quantifying FRET in vivo. Currently, its main limitation is the acquisition speed in the context of deep-tissue 3D and 4D imaging. Here we present a parallelized time-correlated single-photon counting point detector (p-TCSPC) (i) for dynamic single-beam scanning FLIM of large 3D areas on the range of hundreds of milliseconds, relevant in the context of immune-induced pathologies, as well as (ii) for ultrafast 2D FLIM in the range of tens of milliseconds, a scale relevant for cell physiology. We demonstrate its power in dynamic deep-tissue intravital imaging, as compared to multi-beam scanning time-gated FLIM suitable for fast data acquisition and compared to highly sensitive single-channel TCSPC adequate to detect low fluorescence signals. Using p-TCSPC, 256×256 pixel FLIM maps (300×300 µm²) are acquired within 468 ms while 131×131 pixel FLIM maps (75×75 µm²) can be acquired every 82 ms in 115 µm depth in the spinal cord of CerTN L15 mice. The CerTN L15 mice express a FRET-based Ca-biosensor in certain neuronal subsets. Our new technology allows us to perform time-lapse 3D intravital FLIM (4D FLIM) in the brain stem of CerTN L15 mice affected by experimental autoimmune encephalomyelitis and, thereby, to truly quantify

  6. Quantifying the effect of heat stress on daily milk yield and monitoring dynamic changes using an adaptive dynamic model.

    Science.gov (United States)

    André, G; Engel, B; Berentsen, P B M; Vellinga, Th V; Lansink, A G J M Oude

    2011-09-01

    Automation and robots are increasingly being used within dairy farming, resulting in large amounts of real-time data. This information provides a basis for the new management concept of precision livestock farming. From 2003 to 2006, time series of herd mean daily milk yield were collected on 6 experimental research farms in the Netherlands. These time series were analyzed with an adaptive dynamic model following a Bayesian method to quantify the effect of heat stress. The effect of heat stress was quantified in terms of the critical temperature above which heat stress occurred, the duration of heat stress periods, and the resulting loss in milk yield. In addition, dynamic changes in level and trend were monitored, including the estimation of a weekly pattern. Monitoring comprised detection of potential outliers and other deteriorations. The adaptive dynamic model fitted the data well; the root mean squared error of the forecasts ranged from 0.55 to 0.99 kg of milk/d. The percentages of potential outliers and signals for deteriorations ranged from 5.5 to 9.7%. The Bayesian procedure for time series analysis and monitoring provided a useful tool for process control. Online estimates (based on past and present only) and retrospective estimates (determined afterward from all data) of level and trend in daily milk yield showed an almost yearly cycle that was in agreement with the calving pattern: most cows calved in winter and early spring versus summer and autumn. Estimated weekly patterns in terms of weekday effects could be related to specific management actions, such as change of pasture during grazing. For the effect of heat stress, the mean estimated critical temperature above which heat stress was expected was 17.8±0.56°C. The estimated duration of the heat stress periods was 5.5±1.03 d, and the estimated loss was 31.4±12.2 kg of milk/cow per year. Farm-specific estimates are helpful to identify management factors like grazing, housing and feeding, that affect the
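
    The study's Bayesian adaptive dynamic model is not reimplemented here, but a minimal broken-stick sketch conveys how a critical temperature and a loss slope can in principle be estimated from daily data. All variable names and numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def heat_stress_loss(temp, critical, slope):
        """Piecewise-linear milk-yield loss: zero below the critical temperature,
        increasing linearly above it."""
        return slope * np.maximum(0.0, temp - critical)

    def fit_critical_temperature(temp, yield_loss):
        """Least-squares fit of the breakpoint of a broken-stick heat-stress model."""
        def sse(critical):
            x = np.maximum(0.0, temp - critical)
            # closed-form least-squares slope for a given breakpoint
            slope = (x @ yield_loss) / (x @ x) if np.any(x > 0) else 0.0
            return np.sum((yield_loss - slope * x) ** 2)
        res = minimize_scalar(sse, bounds=(5.0, 30.0), method="bounded")
        return res.x

    # Toy daily data: losses appear above ~18 degC (values are illustrative only).
    rng = np.random.default_rng(1)
    temp = rng.uniform(5, 30, 365)
    loss = heat_stress_loss(temp, critical=18.0, slope=0.4) + rng.normal(0, 0.2, 365)
    print(f"estimated critical temperature: {fit_critical_temperature(temp, loss):.1f} degC")
    ```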

  7. Self-similar solutions for the dynamical condensation of a radiative gas layer

    CERN Document Server

    Iwasaki, Kazunari

    2008-01-01

    A new self-similar solution describing the dynamical condensation of a radiative gas is investigated under a plane-parallel geometry. The dynamical condensation is caused by thermal instability. The solution is applicable to generic flow with a net cooling rate per unit volume and time $\propto \rho^2 T^\alpha$, where $\rho$, $T$ and $\alpha$ are density, temperature and a free parameter, respectively. Given $\alpha$, a family of self-similar solutions with one parameter $\eta$ is found in which the central density and pressure evolve as follows: $\rho(x=0,t)\propto (t_\mathrm{c}-t)^{-\eta/(2-\alpha)}$ and $P(x=0,t)\propto (t_\mathrm{c}-t)^{(1-\eta)/(1-\alpha)}$, where $t_\mathrm{c}$ is an epoch when the central density becomes infinite. For $\eta\sim 0$, the solution describes the isochoric mode, whereas for $\eta\sim1$, the solution describes the isobaric mode. The self-similar solutions exist in the range between the two limits; that is, for $0<\eta<1$. We compare the obtained self-similar solutions with the res...

  8. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    Science.gov (United States)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
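
    The following toy sketch illustrates the general workflow described (a Gaussian process emulator trained on a few expensive model runs, then a Metropolis sampler over the parameter), not the UVic ESCM analysis itself; the one-parameter "model", observation and prior range are invented for illustration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Stand-in for an expensive climate model evaluated at a handful of parameter values.
    def expensive_model(sensitivity):
        return 0.6 * sensitivity + 0.1 * sensitivity**2   # toy "warming" response

    design = np.linspace(1.0, 6.0, 8).reshape(-1, 1)       # design points (climate sensitivity)
    runs = np.array([expensive_model(s) for s in design.ravel()])

    # Gaussian process emulator interpolating model output at arbitrary parameter settings.
    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4), normalize_y=True)
    gp.fit(design, runs)

    # Metropolis MCMC over the parameter, with a Gaussian likelihood on one observation.
    obs, obs_sigma = 2.5, 0.3
    def log_post(theta):
        if not 0.5 <= theta <= 8.0:                         # flat prior on a plausible range
            return -np.inf
        pred, pred_sd = gp.predict(np.array([[theta]]), return_std=True)
        var = obs_sigma**2 + pred_sd[0]**2                  # observation + emulator variance
        return -0.5 * (obs - pred[0])**2 / var - 0.5 * np.log(var)

    rng = np.random.default_rng(2)
    theta, chain = 3.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.5)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)
    print("posterior mean sensitivity:", np.mean(chain[1000:]))
    ```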

  9. Utilizing dynamic tensiometry to quantify contact angle hysteresis and wetting state transitions on nonwetting surfaces.

    Science.gov (United States)

    Kleingartner, Justin A; Srinivasan, Siddarth; Mabry, Joseph M; Cohen, Robert E; McKinley, Gareth H

    2013-11-05

    Goniometric techniques traditionally quantify two parameters, the advancing and receding contact angles, that are useful for characterizing the wetting properties of a solid surface; however, dynamic tensiometry, which measures changes in the net force on a surface during the repeated immersion and emersion of a solid into a probe liquid, can provide further insight into the wetting properties of a surface. We detail a framework for analyzing tensiometric results that allows for the determination of wetting hysteresis, wetting state transitions, and characteristic topographical length scales on textured, nonwetting surfaces, in addition to the more traditional measurement of apparent advancing and receding contact angles. Fluorodecyl POSS, a low-surface-energy material, was blended with commercially available poly(methyl methacrylate) (PMMA) and then dip- or spray-coated onto glass substrates. These surfaces were probed with a variety of liquids to illustrate the effects of probe liquid surface tension, solid surface chemistry, and surface texture on the apparent contact angles and wetting hysteresis of nonwetting surfaces. Woven meshes were then used as model structured substrates to add a second, larger length scale for the surface texture. When immersed into a probe liquid, these spray-coated mesh surfaces can form a metastable, solid-liquid-air interface on the largest length scale of surface texture. The increasing hydrostatic pressure associated with progressively greater immersion depths disrupts this metastable, composite interface and forces penetration of the probe liquid into the mesh structure. This transition is marked by a sudden change in the wetting hysteresis, which can be systematically probed using spray-coated, woven meshes of varying wire radius and spacing. We also show that dynamic tensiometry can accurately and quantitatively characterize topographical length scales that are present on microtextured surfaces.
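
    The force trace measured in such an immersion/emersion experiment is commonly interpreted through a Wilhelmy-type balance, F = γ P cos θ − ρ g V_immersed. The sketch below backs out an apparent contact angle from a single force reading under that assumption; the plate geometry and force values are illustrative, and the full hysteresis analysis of the paper is not reproduced.

    ```python
    import numpy as np

    def apparent_contact_angle(force, depth, perimeter, thickness, width,
                               surface_tension, liquid_density, g=9.81):
        """Recover the apparent contact angle from a tensiometer force reading using the
        Wilhelmy-type balance  F = gamma * P * cos(theta) - rho * g * V_immersed."""
        v_immersed = depth * thickness * width                  # flat-plate buoyancy volume
        cos_theta = (force + liquid_density * g * v_immersed) / (surface_tension * perimeter)
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    # Illustrative numbers only (water on a nonwetting coated plate).
    theta = apparent_contact_angle(force=-2.0e-3,            # N, net measured force
                                   depth=5e-3,                # m, immersion depth
                                   perimeter=2 * (0.02 + 0.001),
                                   thickness=0.001, width=0.02,
                                   surface_tension=0.072,     # N/m for water
                                   liquid_density=1000.0)
    print(f"apparent contact angle: {theta:.1f} deg")
    ```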

  10. Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests

    Energy Technology Data Exchange (ETDEWEB)

    Krummel, J.R.; Su, Haiping [Argonne National Lab., IL (United States); Fox, J. [East-West Center, Honolulu, HI (United States); Yarnasan, S.; Ekasingh, M. [Chiang Mai Univ. (Thailand)

    1995-06-01

    Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

  11. Practical technique to quantify small, dense low-density lipoprotein cholesterol using dynamic light scattering

    Science.gov (United States)

    Trirongjitmoah, Suchin; Iinaga, Kazuya; Sakurai, Toshihiro; Chiba, Hitoshi; Sriyudthsak, Mana; Shimizu, Koichi

    2016-04-01

    Quantification of small, dense low-density lipoprotein (sdLDL) cholesterol is clinically significant. We propose a practical technique to estimate the amount of sdLDL cholesterol using dynamic light scattering (DLS). An analytical solution in a closed form has newly been obtained to estimate the weight fraction of one species of scatterers in the DLS measurement of two species of scatterers. Using this solution, we can quantify the sdLDL cholesterol amount from the amounts of the low-density lipoprotein cholesterol and the high-density lipoprotein (HDL) cholesterol, which are commonly obtained through clinical tests. The accuracy of the proposed technique was confirmed experimentally using latex spheres with known size distributions. The applicability of the proposed technique was examined using samples of human blood serum. The possibility of estimating the sdLDL amount using the HDL data was demonstrated. These results suggest that the quantitative estimation of sdLDL amounts using DLS is feasible for point-of-care testing in clinical practice.
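
    The paper's closed-form solution is not reproduced in this record. As a heavily simplified illustration, the sketch below assumes the measured mean diffusion coefficient from a cumulant-type DLS fit is an intensity-weighted average of two known species and solves for the intensity fraction of the smaller species; converting that intensity fraction to a cholesterol (weight) fraction would additionally require the relative scattering power per unit mass, which is what the published solution addresses.

    ```python
    import numpy as np

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def diffusion_coefficient(diameter_nm, temp_k=298.15, viscosity=8.9e-4):
        """Stokes-Einstein relation: D = kB*T / (3*pi*eta*d)."""
        return KB * temp_k / (3.0 * np.pi * viscosity * diameter_nm * 1e-9)

    def intensity_fraction_of_species1(d_measured, d1, d2):
        """If the measured mean diffusion coefficient is an intensity-weighted average
        D = f*D1 + (1-f)*D2, solve for the intensity fraction f of species 1."""
        return (d_measured - d2) / (d1 - d2)

    # Illustrative hydrodynamic diameters (nm): smaller, denser LDL vs. larger LDL particles.
    D_small = diffusion_coefficient(22.0)
    D_large = diffusion_coefficient(27.0)
    D_mix = 0.4 * D_small + 0.6 * D_large        # pretend this came from a DLS measurement
    f = intensity_fraction_of_species1(D_mix, D_small, D_large)
    print(f"estimated intensity fraction of the smaller species: {f:.2f}")
    ```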

  12. Quantifying microstructural dynamics and electrochemical activity of graphite and silicon-graphite lithium ion battery anodes

    Science.gov (United States)

    Pietsch, Patrick; Westhoff, Daniel; Feinauer, Julian; Eller, Jens; Marone, Federica; Stampanoni, Marco; Schmidt, Volker; Wood, Vanessa

    2016-01-01

    Despite numerous studies presenting advances in tomographic imaging and analysis of lithium ion batteries, graphite-based anodes have received little attention. Weak X-ray attenuation of graphite and, as a result, poor contrast between graphite and the other carbon-based components in an electrode pore space renders data analysis challenging. Here we demonstrate operando tomography of weakly attenuating electrodes during electrochemical (de)lithiation. We use propagation-based phase contrast tomography to facilitate the differentiation between weakly attenuating materials and apply digital volume correlation to capture the dynamics of the electrodes during operation. After validating that we can quantify the local electrochemical activity and microstructural changes throughout graphite electrodes, we apply our technique to graphite-silicon composite electrodes. We show that microstructural changes that occur during (de)lithiation of a pure graphite electrode are of the same order of magnitude as spatial inhomogeneities within it, while strain in composite electrodes is locally pronounced and introduces significant microstructural changes. PMID:27671269

  13. Quantifying cardiac sympathetic and parasympathetic nervous activities using principal dynamic modes analysis of heart rate variability.

    Science.gov (United States)

    Zhong, Yuru; Jan, Kung-Ming; Ju, Ki Hwan; Chon, Ki H

    2006-09-01

    The ratio between low-frequency (LF) and high-frequency (HF) spectral power of heart rate has been used as an approximate index for determining the autonomic nervous system (ANS) balance. An accurate assessment of the ANS balance can only be achieved if clear separation of the dynamics of the sympathetic and parasympathetic nervous activities can be obtained, which is a daunting task because they are nonlinear and have overlapping dynamics. In this study, a promising nonlinear method, termed the principal dynamic mode (PDM) method, is used to separate dynamic components of the sympathetic and parasympathetic nervous activities on the basis of the ECG signal, and the results are compared with the power spectral approach to assessing the ANS balance. The PDM analysis based on the 28 subjects consistently resulted in a clear separation of the two nervous systems, with frequency characteristics for parasympathetic and sympathetic activities similar to those reported in the literature. With the application of atropine, in 13 of 15 supine subjects there was an increase in the sympathetic-to-parasympathetic ratio (SPR) due to a greater decrease of parasympathetic than sympathetic activity (P=0.003), and all 13 subjects in the upright position had a decrease in SPR due to a greater decrease of sympathetic than parasympathetic activity. The conventional power spectral approach, in contrast, did not provide a clear separation of the parasympathetic and sympathetic nervous systems. The culprit is equivalent decreases in both the sympathetic and parasympathetic activities irrespective of the pharmacological blockades. These findings suggest that the PDM shows promise as a noninvasive and quantitative marker of ANS imbalance, which has been shown to be a factor in many cardiac and stress-related diseases.
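
    For reference, the conventional spectral index that the PDM method is compared against can be sketched as follows: resample the RR tachogram to an even grid, estimate the power spectral density, and integrate the standard LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands. The PDM decomposition itself is not reimplemented; all numbers below are synthetic.

    ```python
    import numpy as np
    from scipy.signal import welch

    def lf_hf_ratio(rr_ms, fs=4.0):
        """Conventional LF/HF ratio from RR intervals (ms): resample the tachogram to an
        even grid, estimate the PSD with Welch's method, and integrate the standard bands."""
        t = np.cumsum(rr_ms) / 1000.0                     # beat times in seconds
        grid = np.arange(t[0], t[-1], 1.0 / fs)
        rr_even = np.interp(grid, t, rr_ms)
        f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
        df = f[1] - f[0]
        lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df
        hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df
        return lf / hf

    # Toy tachogram: 600 beats around 850 ms with a respiratory (HF) modulation.
    rng = np.random.default_rng(3)
    beats = np.arange(600)
    rr = 850 + 30 * np.sin(2 * np.pi * 0.25 * beats * 0.85) + rng.normal(0, 10, 600)
    print(f"LF/HF ratio: {lf_hf_ratio(rr):.2f}")
    ```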

  14. Cholera and shigellosis in Bangladesh: similarities and differences in population dynamics under climate forcing

    Science.gov (United States)

    Pascual, M.; Cash, B.; Reiner, R.; King, A.; Emch, M.; Yunus, M.; Faruque, A. S.

    2012-12-01

    The influence of climate variability on the population dynamics of infectious diseases is considered a large scale, regional, phenomenon, and as such, has been previously addressed for cholera with temporal models that do not incorporate fine-scale spatial structure. In our previous work, evidence for a role of ENSO (El Niño Southern Oscillation) on cholera in Bangladesh was elucidated, and shown to influence the regional climate through precipitation. With a probabilistic spatial model for cholera dynamics in the megacity of Dhaka, we found that the action of climate variability (ENSO and flooding) is localized: there is a climate-sensitive urban core that acts to propagate risk to the rest of the city. Here, we consider long-term surveillance data for shigellosis, another diarrheal disease that coexists with cholera in Bangladesh. We compare the patterns of association with climate variables for these two diseases in a rural setting, as well as the spatial structure in their spatio-temporal dynamics in an urban one. Evidence for similar patterns is presented, and discussed in the context of the differences in the routes of transmission of the two diseases and the proposed role of an environmental reservoir in cholera. The similarities provide evidence for a more general influence of hydrology and of socio-economic factors underlying human susceptibility and sanitary conditions.

  15. Similar features that appear both on the dynamic spectra of the Sun and Jupiter

    Science.gov (United States)

    Litvinenko, G.; Konovalenko, A.; Zakharenko, V.; Vinogradov, V.; Dorovsky, V.; Melnik, V.; Shaposhnikov, V.; Rucker, H. O.; Zarka, Ph.

    2013-09-01

    At present, the physical nature of the basic components of the solar sporadic radiation has been well studied, and the non-equilibrium particle emission mechanisms responsible for their origin have been reliably identified [1, 2, and references therein]. Jupiter's decameter emission (DAM) represents an extraordinary astrophysical phenomenon characterized by an unusual complexity in the frequency-time structure of its dynamic spectra. Since its discovery, many problems in the theory of the Jovian decameter emission have been successfully investigated and solved [3, and references therein]. Nevertheless, a great number of physical features of this phenomenon still remain unclear. Notably, features of quasi-similar shape appear in the dynamic spectra of both the solar and the Jovian radio emission. We hope that future research on these similar properties in the emission spectra of Jupiter and the Sun, together with the analogy between plasma processes in the solar corona and the magnetosphere of Jupiter, will also allow us to identify similar features in the radiation mechanisms of these cosmic objects. One promising approach to investigating features of the Jovian DAM emission and the decameter solar radiation is the application of novel experimental techniques with a further detailed analysis of the obtained data.

  16. A dynamic procedure based on the scale-similarity hypotheses for large-eddy simulation

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing; CUI Guixiang; CHEN Naixiang

    2007-01-01

    Current dynamic procedures in large-eddy simulation treat the two subgrid-scale stresses in the Germano identity with the same subgrid base model. Thus, to obtain the base model coefficient, the coefficient must be assumed constant under the test filter operation. However, since the coefficient exhibits sharp fluctuations, this assumption introduces an inconsistency. A new dynamic procedure was developed in which these two stresses are modeled by the base model and the scale-similarity hypotheses, respectively. Thus the need for the assumption is removed and consistency is restored. The new procedure is tested in the large-eddy simulation of a lid-driven cavity flow at a Reynolds number of 10,000. The results show that the new procedure both improves the prediction of the flow statistics and effectively relieves the singularity of the subgrid-scale (SGS) model coefficient.
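
    For reference, the standard dynamic procedure referred to above rests on the Germano identity together with a least-squares evaluation of the model coefficient. In one common sign convention (deviatoric parts implied), with the grid-level stress modeled as $\tau_{ij} \approx -2C\bar{\Delta}^2|\bar{S}|\bar{S}_{ij}$ and the test-level stress $T_{ij}$ modeled analogously, the relations read:

    ```latex
    \begin{align}
      L_{ij} &= \widehat{\bar{u}_i\,\bar{u}_j} - \hat{\bar{u}}_i\,\hat{\bar{u}}_j
              \;=\; T_{ij} - \widehat{\tau}_{ij}, \\
      M_{ij} &= 2\,\bar{\Delta}^2\,\widehat{|\bar{S}|\,\bar{S}_{ij}}
              \;-\; 2\,\hat{\bar{\Delta}}^2\,|\hat{\bar{S}}|\,\hat{\bar{S}}_{ij}, \\
      C &= \frac{\langle L_{ij}\,M_{ij}\rangle}{\langle M_{ij}\,M_{ij}\rangle}.
    \end{align}
    ```

    Extracting $C$ from the test-filtered term requires assuming it is constant under the test filter, which is precisely the assumption the record removes by closing the test-level stress with a scale-similarity model instead.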

  17. Stable Self-Similar Blow-Up Dynamics for Slightly $L^2$-Supercritical Generalized KdV Equations

    Science.gov (United States)

    Lan, Yang

    2016-07-01

    In this paper we consider the slightly $L^2$-supercritical gKdV equations $\partial_t u + (u_{xx} + u|u|^{p-1})_x = 0$, with the nonlinearity $5 < p < 5+\varepsilon$ and $0 < \varepsilon \ll 1$. We will prove the existence and stability of a blow-up dynamics with self-similar blow-up rate in the energy space $H^1$ and give a specific description of the formation of the singularity near the blow-up time.

  18. Inferring Characteristics of Sensorimotor Behavior by Quantifying Dynamics of Animal Locomotion

    Science.gov (United States)

    Leung, KaWai

    Locomotion is one of the most well-studied topics in animal behavioral studies. Much fundamental and clinical research makes use of the locomotion of an animal model to explore various aspects of sensorimotor behavior. In the past, most of these studies focused on the population average of a specific trait due to limitations in data collection and processing power. With recent advances in computer vision and statistical modeling techniques, it is now possible to track and analyze large amounts of behavioral data. In this thesis, I present two projects that aim to infer the characteristics of sensorimotor behavior by quantifying the dynamics of locomotion of the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster, shedding light on the statistical dependence between sensing and behavior. In the first project, I investigate the possibility of inferring noxious sensory information from the behavior of Caenorhabditis elegans. I develop a statistical model to infer the heat stimulus level perceived by individual animals from their stereotyped escape responses after stimulation by an IR laser. The model allows quantification of analgesic-like effects of chemical agents or genetic mutations in the worm. At the same time, the method is able to differentiate perturbations of locomotion behavior that go beyond affecting the sensory system. With this model I propose experimental designs that allow statistically significant identification of analgesic-like effects. In the second project, I investigate the relationship of energy budget and stability of locomotion in determining the walking speed distribution of Drosophila melanogaster during aging. The locomotion stability at different age groups is estimated from video recordings using Floquet theory. I calculate the power consumption at different locomotion speeds using a biomechanics model. In conclusion, power consumption, not stability, predicts the locomotion speed distribution at different ages.

  19. Heart rate variability and blood pressure during dynamic and static exercise at similar heart rate levels.

    Science.gov (United States)

    Weippert, Matthias; Behrens, Kristin; Rieger, Annika; Stoll, Regina; Kreuzfeld, Steffi

    2013-01-01

    The aim was to elucidate autonomic responses to dynamic and static (isometric) exercise of the lower limbs eliciting the same moderate heart rate (HR) response. 23 males performed two kinds of voluntary exercise in a supine position at similar heart rates: static exercise (SE) of the lower limbs (static leg press) and dynamic exercise (DE) of the lower limbs (cycling). Subjective effort, systolic (SBP) and diastolic blood pressure (DBP), mean arterial pressure (MAP), rate pressure product (RPP) and the time between consecutive heart beats (RR-intervals) were measured. Time-domain (SDNN, RMSSD), frequency-domain (power in the low and high frequency band (LFP, HFP)) and geometric measures (SD1, SD2) as well as non-linear measures of regularity (approximate entropy (ApEn), sample entropy (SampEn) and correlation dimension D2) were calculated. Although HR was similar during both exercise conditions (88±10 bpm), subjective effort, SBP, DBP, MAP and RPP were significantly enhanced during SE. HRV indicators representing overall variability (SDNN, SD2) and vagally modulated variability (RMSSD, HFP, SD1) were increased. LFP, thought to be modulated by both autonomic branches, tended to be higher during SE. ApEn and SampEn were decreased whereas D2 was enhanced during SE. It can be concluded that autonomic control processes during SE and DE were qualitatively different despite similar heart rate levels. The differences were reflected by blood pressure and HRV indices. HRV measures indicated stronger vagal cardiac activity during SE, while the blood pressure response indicated stronger sympathetic efferent activity to the vessels. The elevated vagal cardiac activity during SE might be a response mechanism compensating a possible co-activation of sympathetic cardiac efferents, as HR and LF/HF were similar and LFP tended to be higher. However, this conclusion must be drawn cautiously as there is no HRV marker reflecting "pure" sympathetic cardiac activity.
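
    The time-domain and Poincaré indices listed above have standard definitions that can be computed directly from the RR series; a minimal sketch follows (frequency-domain and entropy measures are omitted, and the RR values are synthetic).

    ```python
    import numpy as np

    def hrv_time_domain(rr_ms):
        """Time-domain and Poincare HRV indices from RR intervals in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        sdnn = np.std(rr, ddof=1)                    # overall variability
        rmssd = np.sqrt(np.mean(diff**2))            # short-term (vagally mediated) variability
        # Poincare descriptors: SD1 (short-term) and SD2 (long-term) axes of the RR_n vs RR_{n+1} cloud.
        sd1 = np.sqrt(0.5) * np.std(diff, ddof=1)
        sd2 = np.sqrt(max(2 * sdnn**2 - 0.5 * np.std(diff, ddof=1)**2, 0.0))
        return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}

    # Toy RR series at ~88 bpm (mean RR ~ 680 ms), as in the exercise conditions above.
    rng = np.random.default_rng(4)
    rr = 680 + rng.normal(0, 25, 500)
    print({k: round(v, 1) for k, v in hrv_time_domain(rr).items()})
    ```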

  20. Relativistic Self-similar Dynamic Collapses of Black Holes in General Polytropic Spherical Clouds

    CERN Document Server

    Lian, Biao

    2013-01-01

    We study the hydrodynamic self-similar mass collapses of general polytropic (GP) spherical clouds to central Schwarzschild black holes and void evolution with or without shocks. In order to grossly capture characteristic effects of general relativity (GR) outside yet close to the event horizon of a Schwarzschild black hole and to avoid mathematical complexity, we adopt the approximation of the Paczynski-Wiita gravity to replace the simple Newtonian gravity in our model formulation. A new dimensionless parameter s appears with the physical meaning of the square of the ratio of the sound speed to the speed of light $c$. Various self-similar dynamic solutions are constructed for a polytropic index $\\gamma>4/3$. Two (for small enough $s4/3$, representing the collapse of static singular GP spheres towards the central singularity of spacetime. Such GP spherical dynamic mass collapse is shown to be highly efficient for the rapid formation of supermassive black holes (SMBHs; mass range of $10^6-10^{10}M_{\\odot}$) in ...

  1. Accretion disk dynamics. α-viscosity in self-similar self-gravitating models

    Science.gov (United States)

    Kubsch, Marcus; Illenseer, Tobias F.; Duschl, Wolfgang J.

    2016-04-01

    Aims: We investigate the suitability of α-viscosity in self-similar models for self-gravitating disks with a focus on active galactic nuclei (AGN) disks. Methods: We use a self-similar approach to simplify the partial differential equations arising from the evolution equation, which are then solved using numerical standard procedures. Results: We find a self-similar solution for the dynamical evolution of self-gravitating α-disks and derive the significant quantities. In the Keplerian part of the disk our model is consistent with standard stationary α-disk theory, and self-consistent throughout the self-gravitating regime. Positive accretion rates throughout the disk demand a high degree of self-gravitation. Combined with the temporal decline of the accretion rate and its low amount, the model prohibits the growth of large central masses. Conclusions: α-viscosity cannot account for the evolution of the whole mass spectrum of super-massive black holes (SMBH) in AGN. However, considering the involved scales it seems suitable for modelling protoplanetary disks.

  2. Accretion disk dynamics: {\\alpha}-viscosity in self-similar self-gravitating models

    CERN Document Server

    Kubsch, Marcus; Duschl, W J

    2016-01-01

    Aims: We investigate the suitability of {\\alpha}-viscosity in self-similar models for self-gravitating disks with a focus on active galactic nuclei (AGN) disks. Methods: We use a self-similar approach to simplify the partial differential equations arising from the evolution equation, which are then solved using numerical standard procedures. Results: We find a self-similar solution for the dynamical evolution of self-gravitating {\\alpha}-disks and derive the significant quantities. In the Keplerian part of the disk our model is consistent with standard stationary {\\alpha}-disk theory, and self-consistent throughout the self-gravitating regime. Positive accretion rates throughout the disk demand a high degree of self-gravitation. Combined with the temporal decline of the accretion rate and its low amount, the model prohibits the growth of large central masses. Conclusions: {\\alpha}-viscosity cannot account for the evolution of the whole mass spectrum of super-massive black holes (SMBH) in AGN. However, conside...

  3. Quantify Water Extraction by TBP/Dodecane via Molecular Dynamics Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Khomami, Bamin [Univ. of Tennessee, Knoxville, TN (United States); Cui, Shengting [Univ. of Tennessee, Knoxville, TN (United States); de Almeida, Valmor F. [Oak Ridge National Lab., Oak Ridge, TN (United States); Felker, Kevin [Oak Ridge National Lab., Oak Ridge, TN (United States)

    2013-05-16

    The purpose of this project is to quantify the interfacial transport of water into the most prevalent nuclear reprocessing solvent extractant mixture, namely tri-butyl-phosphate (TBP) and dodecane, via massively parallel molecular dynamics simulations on the most powerful machines available for open research. Specifically, we will accomplish this objective by evolving the water/TBP/dodecane system up to 1 ms elapsed time, and validate the simulation results by direct comparison with experimentally measured water solubility in the organic phase. The significance of this effort is to demonstrate for the first time that the combination of emerging simulation tools and state-of-the-art supercomputers can provide quantitative information on par with experimental measurements for solvent extraction systems of relevance to the nuclear fuel cycle. Results: Initially, the isolated single-component, single-phase systems were studied, followed by the two-phase, multicomponent counterpart. Specifically, the systems we studied were: pure TBP; pure n-dodecane; the TBP/n-dodecane mixture; and the complete extraction system, the water-TBP/n-dodecane two-phase system, to gain deep insight into the water extraction process. We have completely achieved our goal of simulating the molecular extraction of water molecules into the TBP/n-dodecane mixture up to the saturation point, and obtained favorable comparison with experimental data. Many insights into fundamental molecular-level processes and physics were obtained in the process. Most importantly, we found that the dipole moment of the extracting agent is crucially important in affecting the interface roughness and the extraction rate of water molecules into the organic phase. In addition, we have identified shortcomings in the existing OPLS-AA force field potential for long-chain alkanes. The significance of this force field is that it is supposed to be optimized for molecular liquid simulations. We found that it failed for dodecane and

  4. Quantifying nitrate dynamics in an oligotrophic lake using Δ17O

    Directory of Open Access Journals (Sweden)

    A. Tanaka

    2011-03-01

    Full Text Available The stable isotopic compositions of nitrate, including the 17O anomalies (Δ17O), were determined twice in 1 yr (June and August 2007) in the oligotrophic water column of Lake Mashu, Japan. These data were then used to quantify the geochemical dynamics of nitrate in the lake, by using the deposition rate of the atmospheric nitrate onto the entire catchment area of the lake. The total amount of nitrate in the lake water decreased from 4.2 to 2.1 Mmol during the period between the observations, while the average Δ17O values remained uniform at +2.5‰. The Δ17O values corresponded to a small and uniform mixing ratio of atmospheric nitrate to total nitrate of 9.7 ± 0.8%. These results indicate that 0.52 ± 0.34 Mmol of the remineralized nitrate was fed into the water column through nitrification, while 2.6 ± 0.4 Mmol of nitrate was simultaneously removed from the water column by assimilation, during the period between the observations. The lake water dissolved nitrate was characterized by rapid removal through assimilation during summer until it was almost completely removed from the euphotic layer, as well as continuous feeding into the lake through nitrification (3.2 ± 0.3 Mmol a−1) and deposition (0.35 ± 0.2 Mmol a−1), regardless of the seasons. The 15N-depleted nitrogen isotopic compositions of nitrate were as low as −6.5‰ in June, which also indicates that in-lake nitrification is the major source of nitrate in the lake and suggests that there is low potential for denitrification in and around the lake. Atmospheric nitrate deposited into the lake will be assimilated quickly, having a mean residence time of 1.2 ± 0.1 yr. In addition, more than 90% of the assimilated nitrate will be remineralized to nitrate and re-assimilated via active nitrogen cycling in the lake.
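
    The quoted mixing ratio follows from a simple two-end-member Δ17O mass balance, since nitrate produced by nitrification carries Δ17O ≈ 0‰. The sketch below uses an assumed atmospheric end-member value (+26‰, a typical literature magnitude, not a number taken from this record) to show that a lake value of +2.5‰ maps onto roughly a 10% atmospheric fraction.

    ```python
    def atmospheric_nitrate_fraction(delta17o_sample, delta17o_atm=26.0, delta17o_bio=0.0):
        """Two-end-member mass balance:
        Delta17O_sample = f_atm * Delta17O_atm + (1 - f_atm) * Delta17O_bio."""
        return (delta17o_sample - delta17o_bio) / (delta17o_atm - delta17o_bio)

    # With the lake-average Delta17O of +2.5 permil reported above and an assumed
    # atmospheric end member of +26 permil, the atmospheric fraction is ~10%,
    # consistent with the 9.7 +/- 0.8% mixing ratio quoted in the record.
    f_atm = atmospheric_nitrate_fraction(2.5)
    print(f"atmospheric nitrate fraction: {100 * f_atm:.1f}%")
    ```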

  5. Similarity solutions of a replicator dynamics equation associated with a continuum of pure strategies

    Directory of Open Access Journals (Sweden)

    Vassilis G. Papanicolaou

    2015-09-01

    Full Text Available We introduce a nonlinear degenerate parabolic equation containing a nonlocal term. The equation serves as a replicator dynamics model where the set of strategies is a continuum. In our model the payoff operator (which is the continuous analog of the payoff matrix) is nonsymmetric and, also, evolves with time. We are interested in solutions u(t, x) of our equation which are positive and whose integral (with respect to x) over the whole space is 1, for any t > 0. These solutions, being probability densities, can serve as time-evolving mixed strategies of a player. We show that for our model there is a one-parameter family of such self-similar solutions $u(t, x)$, all approaching the Dirac delta function $\delta(x)$ as $t \to 0^+$.

  6. Towards an Experimental Investigation of Wind Turbine Aerodynamics at Full Dynamic Similarity

    Science.gov (United States)

    Miller, Mark A.; Hultmark, Marcus

    2014-11-01

    As horizontal axis wind turbines continue to increase in size (with the largest approaching 200 meters in diameter) it becomes progressively more difficult to test new designs without high computational power or extensive experimental effort using conventional tools. Therefore, compromises are often made between the important non-dimensional parameters (Reynolds number and Strouhal number, or tip speed ratio) so that reasonable engineering insight can be gained. Using the unique facilities available at Princeton University, we aim to match both non-dimensional parameters and thus achieve full dynamic similarity at realistic conditions. This is accomplished by using the High Reynolds number Test Facility (or HRTF), which is a high pressure (200 atmospheres) wind tunnel. We present the design, manufacture, and testing of an apparatus suited to the unique environment of a high-pressure facility as well as future plans for investigating the underlying aerodynamics of large-scale wind turbines.

  7. Quantified and applied sea-bed dynamics of the Netherlands Continental Shelf and the Wadden Sea

    NARCIS (Netherlands)

    van Dijk, T.A.G.P.; Kleuskens, M.H.P.; Dorst, L.L.; Van der Tak, C.; Doornenbal, P.J.; Van der Spek, A.J.F.; Hoogendoorn, R.M.; Rodriguez Aguilera, D.; Menninga, P.J.; Noorlandt, R.P.

    2012-01-01

    Sedimentary coasts and shallow-sea beds may be dynamic. The large-scaled spatial variation in these dynamics and the smaller-scaled behaviour of individual marine bedforms are largely unknown. Sea-bed dynamics are relevant for the safety of shipping, and therefore for monitoring strategies, and for

  8. Achieving Full Dynamic Similarity with Small-Scale Wind Turbine Models

    Science.gov (United States)

    Miller, Mark; Kiefer, Janik; Westergaard, Carsten; Hultmark, Marcus

    2016-11-01

    Power and thrust data as a function of Reynolds number and Tip Speed Ratio are presented at conditions matching those of a full scale turbine. Such data has traditionally been very difficult to acquire due to the large length-scales of wind turbines, and the limited size of conventional wind tunnels. Ongoing work at Princeton University employs a novel, high-pressure wind tunnel (up to 220 atmospheres of static pressure) which uses air as the working fluid. This facility allows adjustment of the Reynolds number (via the fluid density) independent of the Tip Speed Ratio, up to a Reynolds number (based on chord and velocity at the tip) of over 3 million. Achieving dynamic similarity using this approach implies very high power and thrust loading, which results in mechanical loads greater than 200 times those experienced by a similarly sized model in a conventional wind tunnel. In order to accurately report the power coefficients, a series of tests were carried out on a specially designed model turbine drive-train using an external testing bench to replicate tunnel loading. An accurate map of the drive-train performance at various operating conditions was determined. Finally, subsequent corrections to the power coefficient are discussed in detail. Supported by: National Science Foundation Grant CBET-1435254 (program director Gregory Rorrer).
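
    The two governing parameters can be sketched directly; the point of the pressurized facility is that gas density scales roughly linearly with pressure while viscosity is nearly pressure-independent, so the tip-chord Reynolds number of a small model can be raised to full-scale values at an unchanged tip speed ratio. All numbers below are illustrative, not facility specifications.

    ```python
    import numpy as np

    def tip_reynolds(density, tip_speed, chord, dyn_viscosity=1.8e-5):
        """Reynolds number based on blade tip speed and chord: Re = rho * V_tip * c / mu."""
        return density * tip_speed * chord / dyn_viscosity

    def tip_speed_ratio(omega, radius, wind_speed):
        """Tip speed ratio: TSR = omega * R / U_inf."""
        return omega * radius / wind_speed

    # Full-scale turbine (illustrative numbers).
    rho_air = 1.2
    re_full = tip_reynolds(rho_air, tip_speed=80.0, chord=3.0)

    # Small model at ~200 atm: density rises ~200x for an ideal gas, viscosity barely changes,
    # so a 0.015 m chord can recover the full-scale tip Reynolds number.
    rho_pressurized = 1.2 * 200
    re_model = tip_reynolds(rho_pressurized, tip_speed=80.0, chord=0.015)
    print(f"full-scale tip Re: {re_full:.2e}, pressurized model tip Re: {re_model:.2e}")
    print(f"model TSR: {tip_speed_ratio(omega=160.0, radius=0.4, wind_speed=9.0):.1f}")
    ```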

  9. Quantifying mercury isotope dynamics in captive Pacific bluefin tuna (Thunnus orientalis)

    Directory of Open Access Journals (Sweden)

    Sae Yun Kwon

    2016-02-01

    Full Text Available Analyses of mercury (Hg) isotope ratios in fish tissues are used increasingly to infer sources and biogeochemical processes of Hg in natural aquatic ecosystems. Controlled experiments that can couple internal Hg isotope behavior with traditional isotope tracers (δ13C, δ15N) can improve the applicability of Hg isotopes as natural ecological tracers. In this study, we investigated changes in Hg isotope ratios (δ202Hg, Δ199Hg) during bioaccumulation of natural diets in the pelagic Pacific bluefin tuna (Thunnus orientalis; PBFT). Juvenile PBFT were fed a mixture of natural prey and a dietary supplement (60% Loligo opalescens, 31% Sardinops sagax, 9% gel supplement) in captivity for 2914 days, and white muscle tissues were analyzed for Hg isotope ratios and compared to time in captivity and internal turnover of δ13C and δ15N. PBFT muscle tissues equilibrated to Hg isotope ratios of the dietary mixture within ∼700 days, after which we observed a cessation in further shifts in Δ199Hg, and small but significant negative δ202Hg shifts from the dietary mixture. The internal behavior of Δ199Hg is consistent with previous fish studies, which showed an absence of Δ199Hg fractionation during Hg bioaccumulation. The negative δ202Hg shifts can be attributed to either preferential excretion of Hg with higher δ202Hg values or individual variability in captive PBFT feeding preferences and/or consumption rates. The overall internal behavior of Hg isotopes is similar to that described for δ13C and δ15N, though observed Hg turnover was slower compared to carbon and nitrogen. This improved understanding of internal dynamics of Hg isotopes in relation to δ13C and δ15N enhances the applicability of Hg isotope ratios in fish tissues for tracing Hg sources in natural ecosystems.

  10. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers.

    Science.gov (United States)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time
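
    A widely used Information Theory Quantifier of this kind is the Bandt-Pompe permutation entropy (the specific quantifier set used in the study may differ); a minimal sketch on synthetic noise-like and seasonal signals follows.

    ```python
    import numpy as np
    from itertools import permutations
    from math import factorial, log

    def permutation_entropy(series, order=4, delay=1):
        """Normalized Bandt-Pompe permutation entropy: the Shannon entropy of ordinal
        pattern frequencies, divided by log(order!). Values near 1 indicate noise-like
        dynamics, lower values indicate more structured (predictable) dynamics."""
        patterns = {p: 0 for p in permutations(range(order))}
        n = len(series) - (order - 1) * delay
        for i in range(n):
            window = series[i:i + order * delay:delay]
            patterns[tuple(np.argsort(window))] += 1
        probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
        return -np.sum(probs * np.log(probs)) / log(factorial(order))

    rng = np.random.default_rng(5)
    noise = rng.normal(size=5000)
    seasonal = np.sin(2 * np.pi * np.arange(5000) / 365.0) + 0.2 * rng.normal(size=5000)
    print(f"white-noise permutation entropy:   {permutation_entropy(noise):.2f}")     # close to 1
    print(f"seasonal signal permutation entropy: {permutation_entropy(seasonal):.2f}")  # noticeably lower
    ```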

  11. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    Science.gov (United States)

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time

  12. Relativistic self-similar dynamic gravitational collapses of a quasi-spherical general polytropic magnetofluid

    Science.gov (United States)

    Lou, Yu-Qing; Xia, Yu-Kai

    2017-05-01

    We study magnetohydrodynamic (MHD) self-similar collapses and void evolution, with or without shocks, of a general polytropic quasi-spherical magnetofluid permeated by random transverse magnetic fields under the Paczynski-Wiita gravity that captures essential general relativistic effects of a Schwarzschild black hole (BH) with a growing mass. Based on the derived set of non-linear MHD ordinary differential equations, we obtain various asymptotic MHD solutions, the geometric and analytical properties of the magnetosonic critical curve (MSCC) and MHD shock jump conditions. Novel asymptotic MHD solution behaviours near the rim of central expanding voids are derived analytically. By exploring numerical global MHD solutions, we identify allowable boundary conditions at large radii that accommodate a smooth solution and show that a reasonable amount of magnetization significantly increases the mass accretion rate in the expansion-wave-collapse solution scenario. We also construct the counterparts of envelope-expansion-core-collapse solutions that cross the MSCC twice, which are found to be closely paired with a sequence of global smooth solutions satisfying a novel type of central MHD behaviours. MHD shocks with static outer and various inner flow profiles are also examined. Astrophysical applications include dynamic core collapses of magnetized massive stars and compact objects as well as formation of supermassive, hypermassive, dark matter and mixed matter BHs in the Universe, including the early Universe. Such gigantic BHs can be detected in X-ray/gamma-ray sources, quasars, ultraluminous infrared galaxies or extremely luminous infrared galaxies and dark matter overwhelmingly dominated elliptical galaxies as well as massive dark matter halos, etc. Gravitational waves and electromagnetic wave emissions in broad band (including e.g., gamma-ray bursts and fast radio bursts) can result from this type of dynamic collapses of forming BHs involving magnetized media.

  13. A Statistical Physics Characterization of the Complex Systems Dynamics: Quantifying Complexity from Spatio-Temporal Interactions

    OpenAIRE

    Hana Koorehdavoudi; Paul Bogdan

    2016-01-01

    Biological systems are frequently categorized as complex systems due to their capabilities of generating spatio-temporal structures from apparent random decisions. In spite of research on analyzing biological systems, we lack a quantifiable framework for measuring their complexity. To fill this gap, in this paper, we develop a new paradigm to study a collective group of N agents moving and interacting in a three-dimensional space. Our paradigm helps to identify the spatio-temporal states of t...

  14. Self-Similarity in Population Dynamics: Surname Distributions and Genealogical Trees

    Directory of Open Access Journals (Sweden)

    Paolo Rossi

    2015-01-01

    Full Text Available The frequency distribution of surnames turns out to be a relevant issue not only in historical demography but also in population biology, and especially in genetics, since surnames tend to behave like neutral genes and propagate like Y chromosomes. The stochastic dynamics leading to the observed scale-invariant distributions has been studied as a Yule process, as a branching phenomenon and also by field-theoretical renormalization group techniques. In the absence of mutations the theoretical models are in good agreement with empirical evidence, but when mutations are present a discrepancy between the theoretical and the experimental exponents is observed. Hints for the possible origin of the mismatch are discussed, with some emphasis on the difference between the asymptotic frequency distribution of a full population and the frequency distributions observed in its samples. A precise connection is established between surname distributions and the statistical properties of genealogical trees. Ancestors tables, being obviously self-similar, may be investigated theoretically by renormalization group techniques, but they can also be studied empirically by exploiting the large online genealogical databases concerning European nobility.
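
    A toy simulation of the kind of stochastic growth process invoked above (new surnames arise by mutation or immigration, existing ones grow in proportion to their current frequency) already yields a heavy-tailed, approximately scale-invariant frequency spectrum. The sketch below is illustrative only and does not reproduce the branching or renormalization group analyses of the article.

    ```python
    import numpy as np
    from collections import Counter

    def simulate_surnames(n_individuals=200_000, mutation_rate=0.001, seed=6):
        """Yule-type preferential growth: each new individual inherits an existing surname
        with probability proportional to its frequency, or founds a new one with a small
        'mutation' probability. Returns the surname frequency spectrum."""
        rng = np.random.default_rng(seed)
        surnames = [0]                       # surname label per individual
        n_names = 1
        for _ in range(n_individuals - 1):
            if rng.random() < mutation_rate:
                surnames.append(n_names)     # brand-new surname
                n_names += 1
            else:
                surnames.append(surnames[rng.integers(len(surnames))])  # copy an existing one
        return Counter(surnames)

    freq = simulate_surnames()
    sizes = np.array(sorted(freq.values(), reverse=True))
    # A rank-size plot of `sizes` is close to a straight line on log-log axes,
    # i.e., an (approximately) scale-invariant surname distribution.
    print("distinct surnames:", len(sizes), "largest family share:", sizes[0] / sizes.sum())
    ```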

  15. Initial virtual flight test for a dynamically similar aircraft model with control augmentation system

    Directory of Open Access Journals (Sweden)

    Linliang Guo

    2017-04-01

    Full Text Available To satisfy the validation requirements of flight control law for advanced aircraft, a wind tunnel based virtual flight test has been implemented in a low speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar model of the aircraft, which was equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator and the real-time control system made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, and it can fly stably in the wind tunnel with the control augmentation provided by the flight control laws. The experimental results indicate that the model responds well to the operator’s instructions. The response of the model in the tests shows reasonable agreement with the simulation results. The difference in the angle-of-attack response is less than 0.5°. The effects of the stability augmentation and attitude control laws were validated in the test, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.

  16. Perspective: Defining and quantifying the role of dynamics in enzyme catalysis

    Science.gov (United States)

    Warshel, Arieh; Bora, Ram Prasad

    2016-05-01

    Enzymes control chemical reactions that are key to life processes, and allow them to take place on the time scale needed for synchronization between the relevant reaction cycles. In addition to general interest in their biological roles, these proteins present a fundamental scientific puzzle, since the origin of their tremendous catalytic power is still unclear. While many different hypotheses have been put forward to rationalize this, one of the proposals that has become particularly popular in recent years is the idea that dynamical effects contribute to catalysis. Here, we present a critical review of the dynamical idea, considering all reasonable definitions of what does and does not qualify as a dynamical effect. We demonstrate that no dynamical effect (according to these definitions) has ever been experimentally shown to contribute to catalysis. Furthermore, the existence of non-negligible dynamical contributions to catalysis is not supported by consistent theoretical studies. Our review is aimed, in part, at readers with a background in chemical physics and biophysics, and illustrates that despite a substantial body of experimental effort, there has not yet been any study that consistently established a connection between an enzyme's conformational dynamics and a significant increase in the catalytic contribution of the chemical step. We also make the point that the dynamical proposal is not a semantic issue but a well-defined scientific hypothesis with well-defined conclusions.

  17. A generalized framework for quantifying the dynamics of EEG event-related desynchronization.

    Directory of Open Access Journals (Sweden)

    Steven Lemm

    2009-08-01

    Full Text Available Brains were built by evolution to react swiftly to environmental challenges. Thus, sensory stimuli must be processed ad hoc, i.e., independent--to a large extent--from the momentary brain state incidentally prevailing during stimulus occurrence. Accordingly, computational neuroscience strives to model the robust processing of stimuli in the presence of dynamical cortical states. A pivotal feature of ongoing brain activity is the regional predominance of EEG eigenrhythms, such as the occipital alpha or the pericentral mu rhythm, both peaking spectrally at 10 Hz. Here, we establish a novel generalized concept to measure event-related desynchronization (ERD, which allows one to model neural oscillatory dynamics also in the presence of dynamical cortical states. Specifically, we demonstrate that a somatosensory stimulus causes a stereotypic sequence of first an ERD and then an ensuing amplitude overshoot (event-related synchronization, which at a dynamical cortical state becomes evident only if the natural relaxation dynamics of unperturbed EEG rhythms is utilized as reference dynamics. Moreover, this computational approach also encompasses the more general notion of a "conditional ERD," through which candidate explanatory variables can be scrutinized with regard to their possible impact on a particular oscillatory dynamics under study. Thus, the generalized ERD represents a powerful novel analysis tool for extending our understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli.
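
    For orientation, the snippet below computes the conventional ERD measure that the paper generalizes, i.e. the percentage change of trial-averaged band power relative to a fixed pre-stimulus reference interval. The generalized ERD of the study instead uses the relaxation dynamics of unperturbed rhythms as the reference, which is not reproduced here; window and sampling-rate values are illustrative.

        import numpy as np

        def classic_erd(trials, fs, ref_window=(0.0, 1.0)):
            """Classic event-related desynchronization (ERD) in percent.

            trials : array (n_trials, n_samples) of band-pass filtered EEG
                     (e.g. 8-12 Hz for the mu/alpha rhythm).
            Returns ERD%(t) = (P(t) - P_ref) / P_ref * 100, where P is the
            instantaneous band power averaged over trials and P_ref its mean
            in the reference window (in seconds)."""
            power = trials ** 2                        # instantaneous band power
            mean_power = power.mean(axis=0)            # average over trials
            i0, i1 = (int(t * fs) for t in ref_window)
            p_ref = mean_power[i0:i1].mean()
            return (mean_power - p_ref) / p_ref * 100.0

        # usage sketch: erd = classic_erd(bandpassed_trials, fs=256)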

  18. Quantifying the Use of Dynamics in Western Keyboard Music: Lessons and Problems

    Directory of Open Access Journals (Sweden)

    Dorottya Fabian

    2011-01-01

    Full Text Available Ladinig and Huron’s (2010) investigation of the relationship between mode (major-minor) and dynamics in Classical and Romantic piano music indicated higher levels of dynamics for compositions from the Classical period but only in major-mode pieces. This was contrary to the expectation that minor mode pieces from the Romantic era would be louder because romantic composers may have intended to convey seriousness, passion or even aggression, rather than sadness. Although the methodology was carefully crafted to enable necessary control for a quantitative study, it also contributed to the questionable relevance of the results. It is arguable whether the chosen repertoire is typical, whether initial markings in the score have a true bearing on the dynamic characteristics of a piece and whether notated dynamics are reliable data due to historical notation conventions and later editorial practices.

  19. Quantifying non-ergodic dynamics of force-free granular gases

    OpenAIRE

    Bodrova, Anna; Chechkin, Aleksei V.; Cherstvy, Andrey G.; Metzler, Ralf

    2015-01-01

    Brownian motion is ergodic in the Boltzmann–Khinchin sense that long time averages of physical observables such as the mean squared displacement provide the same information as the corresponding ensemble average, even at out-of-equilibrium conditions. This property is the fundamental prerequisite for single particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a...

  20. Quantifying the Type of Urban Sprawl and Dynamic Changes in Shenzhen

    OpenAIRE

    Hao, Ruifang; Su, Wei; Yu, Deyong

    2012-01-01

    International audience; Urban sprawl is a topic of global concern: it is increasing dramatically and has a strong impact on the ecological environment, yet there is no mature method to describe the process of urban sprawl and its dynamic changes in detail. In this paper, the dynamic changes of urban land in Shenzhen are studied based on five remote sensing images from 1980, 1988, 1994, 2000 and 2005. Three types of sprawl, including infilling, edge-expansion and outlying, are distinguished using six landscape metrics related to c...

  1. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    2008-01-01

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship quality

  2. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  3. World Climate Classification and Search: Data Mining Approach Utilizing Dynamic Time Warping Similarity Function

    Science.gov (United States)

    Stepinski, T. F.; Netzel, P.; Jasiewicz, J.

    2014-12-01

    We have developed a novel method for classification and search of climate over the global land surface excluding Antarctica. Our method classifies climate on the basis of the outcome of time series segmentation and clustering. We use WorldClim 30 arc sec. (approx. 1 km) resolution grid data which is based on 50 years of climatic observations. Each cell in the grid is assigned a 12-month series consisting of 50-year monthly averages of mean, maximum, and minimum temperatures as well as the total precipitation. The presented method introduces several innovations in comparison with existing data-driven methods of world climate classification. First, it uses only climatic rather than bioclimatic data. Second, it employs an object-oriented methodology - the grid is first segmented before climatic segments are classified. Third, and most importantly, the similarity between climates in two given cells is computed using the dynamic time warping (DTW) measure instead of the Euclidean distance. DTW is known to be superior to the Euclidean distance for time series, but has not been utilized before in classification of global climate. To account for the computational expense of DTW we use the highly efficient GeoPAT software (http://sil.uc.edu/gitlist/) that, in the first step, segments the grid into local regions of uniform climate. In the second step, the segments are classified. We also introduce a climate search - a GeoWeb-based method for interactive presentation of global climate information in the form of query-and-retrieval. A user selects a geographical location and the system returns a global map indicating the level of similarity between local climates and the climate in the selected location. The results of the search for the location "University of Cincinnati, Main Campus" are presented on the attached map. We have compared the results of our method to the Koeppen classification scheme
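
    The core similarity measure named in this record, dynamic time warping, can be sketched in a few lines of Python; the toy monthly temperature series below are illustrative and unrelated to the WorldClim data.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two 1-D monthly series
            (e.g. 12 monthly mean temperatures for two grid cells)."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        cell_a = np.array([2, 4, 9, 14, 19, 23, 26, 25, 21, 15, 9, 4])   # deg C
        cell_b = np.array([0, 2, 7, 12, 17, 22, 25, 24, 19, 13, 7, 2])
        print(dtw_distance(cell_a, cell_b))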

  4. A Statistical Physics Characterization of the Complex Systems Dynamics: Quantifying Complexity from Spatio-Temporal Interactions

    Science.gov (United States)

    Koorehdavoudi, Hana; Bogdan, Paul

    2016-06-01

    Biological systems are frequently categorized as complex systems due to their capability of generating spatio-temporal structures from apparently random decisions. In spite of extensive research on biological systems, we still lack a quantifiable framework for measuring their complexity. To fill this gap, in this paper, we develop a new paradigm to study a collective group of N agents moving and interacting in a three-dimensional space. Our paradigm helps to identify the spatio-temporal states of the motion of the group and their associated transition probabilities. This framework enables the estimation of the free energy landscape corresponding to the identified states. Based on the energy landscape, we quantify missing information, emergence, self-organization and complexity for a collective motion. We show that the collective motion of the group of agents evolves to reach the most probable state, with the relatively lowest energy level and lowest missing information compared to other possible states. Our analysis demonstrates that natural groups of animals exhibit a higher degree of emergence, self-organization and complexity over time. Consequently, this algorithm can be integrated into new frameworks to engineer collective motions to achieve certain degrees of emergence, self-organization and complexity.
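
    A minimal sketch of the kind of state-based bookkeeping described above: given a labelled sequence of spatio-temporal states, one can estimate occupation probabilities, their Shannon entropy (one simple proxy for missing information), and the empirical transition matrix. The state labels here are toy values and the definitions are illustrative rather than the authors' exact formulation.

        import numpy as np

        def missing_information(state_sequence, n_states):
            """Shannon entropy (in bits) of the empirical state occupation
            probabilities -- a simple proxy for 'missing information'."""
            counts = np.bincount(state_sequence, minlength=n_states).astype(float)
            p = counts / counts.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def transition_matrix(state_sequence, n_states):
            """Row-normalised empirical transition probabilities between states."""
            T = np.zeros((n_states, n_states))
            for s, s_next in zip(state_sequence[:-1], state_sequence[1:]):
                T[s, s_next] += 1
            return T / np.maximum(T.sum(axis=1, keepdims=True), 1)

        states = np.array([0, 0, 1, 2, 1, 0, 0, 2, 2, 1])   # toy labelled states
        print(missing_information(states, 3))
        print(transition_matrix(states, 3))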

  5. Using seismic arrays to quantify the physics of a glacial outburst flood and its legacy on upland river dynamics

    Science.gov (United States)

    Gimbert, Florent; Cook, Kristen; Andermann, Christoff; Hovius, Niels; Turowski, Jens

    2017-04-01

    In the Himalayas fluvial erosion is thought to be controlled by the intense annual Indian Summer Monsoon precipitation. However, this region is also exposed to catastrophic floods generated by the sudden failure of landslide or moraine dams. These floods are rare and particularly devastating. Thus they have a strong impact on rivers and adjacent hillslopes, and they represent a hazard for local populations. Due to the difficulty of observing these floods and quantifying their physics using traditional methods, their importance for the long-term evolution of Himalayan rivers remains largely unknown, and no consistent early warning system exists to anticipate these events, especially in trans-boundary regions. Here we show that seismic arrays can be used to (i) reliably anticipate outburst floods and (ii) quantify multiple key fluvial processes associated with their propagation and their lasting impacts on upland river dynamics. We report unique seismic observations of a glacial lake outburst flood that occurred on 5 July 2016 in the Bhote Koshi River (Central Nepal). Precursory seismic signals are identified from the onset of the lake drainage event such that an early warning alarm may be turned on about an hour before the outburst flood wave reaches areas with an exposed population. Using our network of stations we observe for the first time that the outburst flood wave is in fact made of two distinct waves, namely a water flow wave and a bedload sediment wave. As expected, these two waves travel at different speeds. We find that the ratio between the two wave speeds matches that previously found at much smaller scales in laboratory flume experiments. Based on the physical modelling of both water-flow- and bedload-induced seismic noise we provide estimates of flow depth and bedload transport characteristics (flux, moving grain sizes) prior to, during and after the flood. In particular we show that bedload sediment flux is enhanced by up to a

  6. Coordinated approaches to quantify long-term ecosystem dynamics in response to global change

    DEFF Research Database (Denmark)

    Liu, Y.; Melillo, J.; Niu, S.

    2011-01-01

    Many serious ecosystem consequences of climate change will take decades or even centuries to emerge. Long-term ecological responses to global change are strongly regulated by slow processes, such as changes in species composition, carbon dynamics in soil and by long-lived plants, and accumulation...

  7. Quantifying sediment dynamics within the Dutch Wadden Sea using bathymetric monitoring series

    NARCIS (Netherlands)

    Vonhögen-Peeters, L.M.; Heteren, S. van; Wiersma, A.P.; Kleine, M.P.E. de; Marges, V.C.

    2013-01-01

    During the last millennium, human intervention has had an increasing impact on the bathymetry of the Wadden Sea. The significance of these human-induced changes for the decadal-scale development of the Wadden Sea in light of natural sediment dynamics is still unknown. We compared a series of 20 th-c

  8. Quantifying collective effervescence: Heart-rate dynamics at a fire-walking ritual

    DEFF Research Database (Denmark)

    Xygalatas, Dimitris; Konvalinka, Ivana; Roepstorff, Andreas

    2011-01-01

    solidarity, yet quantitative evidence for these conjectures is scarce. Our recent study measured the physiological effects of a highly arousing Spanish fire-walking ritual, revealing shared patterns in heart-rate dynamics between participants and related spectators. We briefly describe our results...

  9. Real-Time G-Protein-Coupled Receptor Imaging to Understand and Quantify Receptor Dynamics

    Directory of Open Access Journals (Sweden)

    María S. Aymerich

    2011-01-01

    Full Text Available Understanding the trafficking of G-protein-coupled receptors (GPCRs) and their regulation by agonists and antagonists is fundamental to developing more effective drugs. Optical methods using fluorescently tagged receptors and spinning disk confocal microscopy are useful tools to investigate membrane receptor dynamics in living cells. The aim of this study was to develop a method to characterize receptor dynamics using this system, which offers the advantage of very fast image acquisition with minimal cell perturbation. However, in short-term assays photobleaching was still a problem. Thus, we developed a procedure to perform a photobleaching-corrected image analysis. A study of the short-term dynamics of the long isoform of the dopamine type 2 receptor revealed an agonist-induced increase in the mobile fraction of receptors, with a rate of movement of 0.08 μm/s. For long-term assays, the ratio between the relative fluorescence intensity at the cell surface and that in the intracellular compartment indicated that receptor internalization only occurred in cells co-expressing G protein-coupled receptor kinase 2. These results indicate that the lateral movement of receptors and receptor internalization are not directly coupled. Thus, we believe that live imaging of GPCRs using spinning disk confocal image analysis constitutes a powerful tool for the study of receptor dynamics.

  10. Agent-Based Model to Study and Quantify the Evolution Dynamics of Android Malware Infection

    Directory of Open Access Journals (Sweden)

    Juan Alegre-Sanahuja

    2014-01-01

    Full Text Available In recent years the number of malware Apps that users download to their devices has risen. In this paper, we propose an agent-based model to quantify the Android malware infection evolution, modeling the behavior of the users and the different markets where they may download Apps. The model predicts the number of infected smartphones depending on the type of malware. Additionally, we estimate the cost that users would have to bear when the malware is on their devices. We are able to analyze which part is more critical: the users, giving indiscriminate permissions to the Apps or not protecting their devices with antivirus software, or the Android platform, due to the vulnerabilities of Android devices that permit them to be rooted. We focus on the community of Valencia, Spain, although the obtained results can be extrapolated to other places where the number of Android smartphones remains fairly stable.
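
    A toy agent-based sketch in Python of the kind of infection bookkeeping described above; all probabilities (daily download rate, fraction of malicious Apps, antivirus coverage) are illustrative placeholders rather than values from the paper.

        import random

        def simulate_infection(n_users=10_000, days=180, p_download=0.05,
                               p_malicious=0.02, p_protected=0.3, seed=1):
            """Toy agent-based model: each day every susceptible user may download
            an App; a fraction of downloads is malicious, and users running
            antivirus ('protected') block the infection."""
            rng = random.Random(seed)
            protected = [rng.random() < p_protected for _ in range(n_users)]
            infected = [False] * n_users
            history = []
            for _ in range(days):
                for u in range(n_users):
                    if infected[u]:
                        continue
                    if rng.random() < p_download and rng.random() < p_malicious:
                        if not protected[u]:
                            infected[u] = True
                history.append(sum(infected))
            return history

        print(simulate_infection()[-1], "infected devices after 180 days")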

  11. EpiTools: An Open-Source Image Analysis Toolkit for Quantifying Epithelial Growth Dynamics.

    Science.gov (United States)

    Heller, Davide; Hoppe, Andreas; Restrepo, Simon; Gatti, Lorenzo; Tournier, Alexander L; Tapon, Nicolas; Basler, Konrad; Mao, Yanlan

    2016-01-11

    Epithelia grow and undergo extensive rearrangements to achieve their final size and shape. Imaging the dynamics of tissue growth and morphogenesis is now possible with advances in time-lapse microscopy, but a true understanding of their complexities is limited by the availability of automated image analysis tools for extracting quantitative data. To overcome such limitations, we have designed a new open-source image analysis toolkit called EpiTools. It provides user-friendly graphical user interfaces for accurately segmenting and tracking the contours of cell membrane signals obtained from 4D confocal imaging. It is designed for a broad audience, especially biologists with no computer-science background. Quantitative data extraction is integrated into a larger bioimaging platform, Icy, to increase the visibility and usability of our tools. We demonstrate the usefulness of EpiTools by analyzing Drosophila wing imaginal disc growth, revealing previously overlooked properties of this dynamic tissue, such as the patterns of cellular rearrangements.

  12. Exploiting Oceanic Residual Depth to Quantify Present-day Dynamic Topography at the Earth's Surface

    Science.gov (United States)

    Hoggard, Mark; White, Nicky

    2014-05-01

    Convective circulation within the mantle causes vertical motions at the Earth's surface. This dynamic topography is time dependent and occurs on wavelengths of 1000s of km with maximum amplitudes of ±2 km. Convective simulation models have been used extensively to make predictions of dynamic topography and have thus far out-paced observational constraints. Here, the well-established relationship between seafloor subsidence and age is used to produce a global map of residual depth anomalies in the oceanic realm. Care is taken to remove other causes of topography, including an isostatic correction for sedimentary loading that takes compaction into account, a correction for variable oceanic crustal thickness, and lithospheric thickening with age away from mid-ocean ridge spreading centres. A dataset including over 1000 seismic reflection profiles and 300 modern wide-angle refraction experiments has been amassed, primarily on old ocean floor adjacent to the continents. Calculation of residual depth yields a map of present-day dynamic topography with amplitudes significantly larger than the errors associated with the corrections. One of the most interesting results occurs along the west coast of Africa, where two full 2000 km wavelengths of dynamic topography have been captured with amplitudes of ±1 km that correlate well with the long-wavelength free air gravity anomaly. Comparison with predictive models reveals poor to moderate correlations. This is a direct result of the limited resolution of the mantle tomography models used to set up convection simulations and also the currently poor understanding of viscosity structure within the Earth. It is hoped that this residual depth dataset will provide an excellent surface boundary constraint for future convective simulations.

  13. Quantifying non-ergodic dynamics of force-free granular gases.

    Science.gov (United States)

    Bodrova, Anna; Chechkin, Aleksei V; Cherstvy, Andrey G; Metzler, Ralf

    2015-09-14

    Brownian motion is ergodic in the Boltzmann-Khinchin sense that long time averages of physical observables such as the mean squared displacement provide the same information as the corresponding ensemble average, even at out-of-equilibrium conditions. This property is the fundamental prerequisite for single particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a violation of ergodicity in this Boltzmann-Khinchin sense as well as distinct ageing of the system. Such granular gases comprise materials such as dilute gases of stones, sand, various types of powders, or large molecules, and their mixtures are ubiquitous in Nature and technology, in particular in Space. We treat both a constant and a velocity-dependent (viscoelastic) restitution coefficient ε, depending on the physical-chemical properties of the inter-particle interaction upon pair collisions. Moreover, we compare the granular gas dynamics with an effective single particle stochastic model based on an underdamped Langevin equation with time dependent diffusivity. We find that both models share the same behaviour of the ensemble mean squared displacement (MSD) and the velocity correlations in the limit of weak dissipation. Qualitatively, the reported non-ergodic behaviour is generic for granular gases with any realistic dependence of ε on the impact velocity of particles.
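
    The ergodicity question above boils down to comparing two MSD estimators; a minimal Python sketch follows, using ordinary Brownian trajectories as a placeholder (for which the two estimates agree), whereas an ageing granular gas would show a systematic disparity between them.

        import numpy as np

        def ensemble_msd(trajs, lag):
            """Ensemble-averaged MSD at one lag: average over particles,
            displacement measured from the start of each trajectory."""
            disp = trajs[:, lag, :] - trajs[:, 0, :]
            return (disp ** 2).sum(axis=1).mean()

        def time_averaged_msd(traj, lag):
            """Time-averaged MSD of a single trajectory: sliding-window average
            over the measurement time (the quantity used in single particle tracking)."""
            disp = traj[lag:, :] - traj[:-lag, :]
            return (disp ** 2).sum(axis=1).mean()

        # trajs: (n_particles, n_steps, n_dims); Brownian placeholder trajectories
        trajs = np.cumsum(np.random.normal(size=(200, 1000, 3)), axis=1)
        print(ensemble_msd(trajs, 100),
              np.mean([time_averaged_msd(t, 100) for t in trajs]))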

  14. Quantifying Network Dynamics and Information Flow Across Chinese Social Media During the African Ebola Outbreak.

    Science.gov (United States)

    Feng, Shihui; Hossain, Liaquat; Crawford, John W; Bossomaier, Terry

    2017-08-01

    Social media provides us with a new platform on which to explore how the public responds to disasters and, of particular importance, how they respond to the emergence of infectious diseases such as Ebola. Provided it is appropriately informed, social media offers a potentially powerful means of supporting both early detection and effective containment of communicable diseases, which is essential for improving disaster medicine and public health preparedness. The 2014 West African Ebola outbreak is a particularly relevant contemporary case study on account of the large number of annual arrivals from Africa, including Chinese employees engaged in projects in Africa. Weibo (Weibo Corp, Beijing, China) is China's most popular social media platform, with more than 2 billion users and over 300 million daily posts, and offers great opportunity to monitor early detection and promotion of public health awareness. We present a proof-of-concept study of a subset of Weibo posts during the outbreak demonstrating potential and identifying priorities for improving the efficacy and accuracy of information dissemination. We quantify the evolution of the social network topology within Weibo relating to the efficacy of information sharing. We show how relatively few nodes in the network can have a dominant influence over both the quality and quantity of the information shared. These findings make an important contribution to disaster medicine and public health preparedness from theoretical and methodological perspectives for dealing with epidemics. (Disaster Med Public Health Preparedness. 2017;page 1 of 12).
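
    A minimal sketch of how dominance of a few nodes in an information-sharing network can be screened, here with out-degree centrality from the networkx package; the edge list and user names are hypothetical and this measure is only one of many possible choices, not necessarily the one used in the study.

        import networkx as nx

        # Hypothetical repost edge list: (source_user, reposting_user).
        edges = [("healthOfficial", "userA"), ("healthOfficial", "userB"),
                 ("userA", "userC"), ("newsOutlet", "userB"),
                 ("healthOfficial", "newsOutlet"), ("userB", "userD")]

        G = nx.DiGraph(edges)

        # Out-degree centrality as a first-pass measure of how strongly a few
        # nodes dominate information spread in the repost network.
        centrality = nx.out_degree_centrality(G)
        for node, c in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
            print(node, round(c, 2))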

  15. The past, present, and future of English dialects: Quantifying convergence, divergence, and dynamic equilibrium

    OpenAIRE

    Maguire, W.; McMahon, A; Heggarty, P.; Dediu, D.

    2010-01-01

    This article reports on research which seeks to compare and measure the similarities between phonetic transcriptions in the analysis of relationships between varieties of English. It addresses the question of whether these varieties have been converging, diverging, or maintaining equilibrium as a result of endogenous and exogenous phonetic and phonological changes. We argue that it is only possible to identify such patterns of change by the simultaneous comparison of a wide range of varieties...

  16. Quantifying the behavior of price dynamics at opening time in stock market

    Science.gov (United States)

    Ochiai, Tomoshiro; Takada, Hideyuki; Nacher, Jose C.

    2014-11-01

    The availability of huge volumes of financial data has offered the possibility of understanding the markets as a complex system characterized by several stylized facts. Here we first show that the time evolution of Japan's Nikkei stock average index (Nikkei 225) futures follows the resistance and breaking-acceleration effects when the complete time series data is analyzed. However, in stock markets there are periods where no regular trades occur between the close of the market on one day and the next day's open. To examine these time gaps we decompose the time series data into opening time and intermediate time. Our analysis indicates that for the intermediate time, both the resistance and the breaking-acceleration effects are still observed. However, for the opening time there are almost no resistance and breaking-acceleration effects, and volatility is always constantly high. These findings highlight unique dynamic differences between stock markets and the forex market and suggest that current risk management strategies may need to be revised to address the absence of these dynamic effects at the opening time.

  17. Quantifying dynamic changes in plantar pressure gradient in diabetics with peripheral neuropathy

    Directory of Open Access Journals (Sweden)

    Chi-Wen Lung

    2016-07-01

    Full Text Available Diabetic foot ulcers remain one of the most serious complications of diabetes. Peak plantar pressure (PPP) and peak pressure gradient (PPG) during walking have been shown to be associated with the development of diabetic foot ulcers. To gain further insight into the mechanical etiology of diabetic foot ulcers, examination of the pressure gradient angle (PGA) has been recently proposed. The PGA quantifies directional variation or orientation of the pressure gradient during walking, and provides a measure of whether pressure gradient patterns are concentrated or dispersed along the plantar surface. We hypothesized that diabetics at risk of foot ulceration would have smaller PGA in key plantar regions, suggesting less movement of the pressure gradient over time. A total of 27 participants were studied, including 19 diabetics with peripheral neuropathy and 8 non-diabetic control subjects. A foot pressure measurement system was used to measure plantar pressures during walking. PPP, PPG and PGA were calculated for four foot regions - 1st toe (T1), 1st metatarsal head (M1), 2nd metatarsal head (M2), and heel (HL). Consistent with prior studies, PPP and PPG were significantly larger in the diabetic group compared to non-diabetic controls in the T1 and M1 regions, but not M2 or HL. For example, PPP was 165% (P=0.02) and PPG was 214% (P<0.001) larger in T1. PGA was found to be significantly smaller in the diabetic group in T1 (46%, P=0.04), suggesting a more concentrated pressure gradient pattern under the toe. The proposed PGA may improve our understanding of the role of pressure gradient on the risk of diabetic foot ulcers.
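
    A rough Python sketch of how regional pressure metrics of this kind could be computed from a plantar pressure recording; the definitions of PPP, PPG and the gradient-angle spread below are simplified illustrations and may differ from the authors' exact formulations.

        import numpy as np

        def plantar_metrics(pressure, dx=5.0):
            """Sketch of regional pressure metrics from a plantar pressure recording
            `pressure` with shape (n_frames, n_rows, n_cols), sensor pitch `dx` in mm."""
            ppp = pressure.max()                                # peak plantar pressure
            gy, gx = np.gradient(pressure, dx, axis=(1, 2))     # spatial gradients per frame
            gmag = np.hypot(gx, gy)
            ppg = gmag.max()                                    # peak pressure gradient
            # orientation of the gradient at the overall peak-pressure sensor, per frame
            _, r, c = np.unravel_index(pressure.argmax(), pressure.shape)
            angles = np.degrees(np.arctan2(gy[:, r, c], gx[:, r, c]))
            pga_spread = angles.max() - angles.min()            # angular spread over stance
            return ppp, ppg, pga_spread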

  18. Quantifying Dynamic Changes in Plantar Pressure Gradient in Diabetics with Peripheral Neuropathy

    Science.gov (United States)

    Lung, Chi-Wen; Hsiao-Wecksler, Elizabeth T.; Burns, Stephanie; Lin, Fang; Jan, Yih-Kuen

    2016-01-01

    Diabetic foot ulcers remain one of the most serious complications of diabetes. Peak plantar pressure (PPP) and peak pressure gradient (PPG) during walking have been shown to be associated with the development of diabetic foot ulcers. To gain further insight into the mechanical etiology of diabetic foot ulcers, examination of the pressure gradient angle (PGA) has been recently proposed. The PGA quantifies directional variation or orientation of the pressure gradient during walking and provides a measure of whether pressure gradient patterns are concentrated or dispersed along the plantar surface. We hypothesized that diabetics at risk of foot ulceration would have smaller PGA in key plantar regions, suggesting less movement of the pressure gradient over time. A total of 27 participants were studied, including 19 diabetics with peripheral neuropathy and 8 non-diabetic control subjects. A foot pressure measurement system was used to measure plantar pressures during walking. PPP, PPG, and PGA were calculated for four foot regions – first toe (T1), first metatarsal head (M1), second metatarsal head (M2), and heel (HL). Consistent with prior studies, PPP and PPG were significantly larger in the diabetic group compared with non-diabetic controls in the T1 and M1 regions, but not M2 or HL. For example, PPP was 165% (P = 0.02) and PPG was 214% (P < 0.001) larger in T1. PGA was found to be significantly smaller in the diabetic group in T1 (46%, P = 0.04), suggesting a more concentrated pressure gradient pattern under the toe. The proposed PGA may improve our understanding of the role of pressure gradient on the risk of diabetic foot ulcers. PMID:27486576

  19. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Directory of Open Access Journals (Sweden)

    Assaf Zaritsky

    Full Text Available Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional
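
    A stripped-down sketch of patch-level classification in the spirit described above, using a single SVM on a few hand-crafted intensity and edge features; the full MultiCellSeg pipeline additionally uses a cascade of SVMs, post-classification and graph-cut refinement, which are not reproduced here, and the feature set and patch size are illustrative choices.

        import numpy as np
        from sklearn.svm import SVC

        def patch_features(patch):
            """Basic intensity/texture features for one grayscale patch."""
            return [patch.mean(), patch.std(),
                    np.abs(np.diff(patch, axis=0)).mean(),   # vertical edge energy
                    np.abs(np.diff(patch, axis=1)).mean()]   # horizontal edge energy

        def classify_patches(image, clf, size=32):
            """Label each non-overlapping patch of a bright field image as
            cellular (1) or background (0) with a trained classifier `clf`."""
            h, w = image.shape
            labels = np.zeros((h // size, w // size), dtype=int)
            for i in range(h // size):
                for j in range(w // size):
                    patch = image[i*size:(i+1)*size, j*size:(j+1)*size]
                    labels[i, j] = clf.predict([patch_features(patch)])[0]
            return labels

        # training step with hypothetical labelled patches:
        # clf = SVC(kernel="rbf").fit(train_features, train_labels)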

  20. Cordilleran forest scaling dynamics and disturbance regimes quantified by aerial lidar

    Science.gov (United States)

    Swetnam, Tyson L.

    Semi-arid forests are in a period of rapid transition as a result of unprecedented landscape scale fires, insect outbreaks, drought, and anthropogenic land use practices. Understanding how historically episodic disturbances led to coherent forest structural and spatial patterns that promoted resilience and resistance is a critical part of addressing change. Here my coauthors and I apply metabolic scaling theory (MST) to examine scaling behavior and structural patterns of semi-arid conifer forests in Arizona and New Mexico. We conceptualize a linkage to mechanistic drivers of forest assembly that incorporates the effects of low-intensity disturbance, and physiologic and resource limitations as an extension of MST. We use both aerial LiDAR data and field observations to quantify changes in forest structure from the sub-meter to landscape scales. We found: (1) semi-arid forest structure exhibits MST-predicted behaviors regardless of disturbance and that MST can help to quantitatively measure the level of disturbance intensity in a forest, (2) the application of a power law to a forest overstory frequency distribution can help predict understory presence/absence, (3) local indicators of spatial association can help to define first order effects (e.g. topographic changes) and map where recent disturbances (e.g. logging and fire) have altered forest structure. Lastly, we produced a comprehensive set of above-ground biomass and carbon models for five distinct forest types and ten common species of the southwestern US that are meant for use in aerial LiDAR forest inventory projects. This dissertation presents both a conceptual framework and applications for investigating local scales (stands of trees) up to entire ecosystems for diagnosis of current carbon balances, levels of departure from historical norms, and ecological stability. These tools and models will become more important as we prepare our ecosystems for a future characterized by increased climatic variability

  1. Similarity measures for protein ensembles

    DEFF Research Database (Denmark)

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...

  2. Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG.

    Science.gov (United States)

    Wardle, Susan G; Kriegeskorte, Nikolaus; Grootswagers, Tijl; Khaligh-Razavi, Seyed-Mahdi; Carlson, Thomas A

    2016-05-15

    Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG), however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity.
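
    The model comparison described above follows the logic of representational similarity analysis; a minimal Python sketch, with random placeholder patterns standing in for the MEG data and the model predictions.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        def rdm(patterns):
            """Representational dissimilarity matrix (condensed form):
            correlation distance between activation patterns, one row per stimulus."""
            return pdist(patterns, metric="correlation")

        # patterns: (n_stimuli, n_features), e.g. MEG sensor patterns at one time
        # point and a model's predicted patterns (retinotopic or perceptual model).
        neural = np.random.rand(20, 150)     # placeholder MEG patterns
        model = np.random.rand(20, 150)      # placeholder model patterns
        rho, _ = spearmanr(rdm(neural), rdm(model))
        print("model-brain representational similarity:", round(rho, 3))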

  3. Two worlds collide: image analysis methods for quantifying structural variation in cluster molecular dynamics.

    Science.gov (United States)

    Steenbergen, K G; Gaston, N

    2014-02-14

    Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.
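
    A minimal sketch of the PCA-style shape descriptor discussed above: diagonalizing the covariance (gyration) tensor of the atomic positions at each frame yields the cluster's instantaneous principal extents, which can be tracked over a trajectory; the trajectory here is a random placeholder.

        import numpy as np

        def shape_descriptor(coords):
            """Principal components of the atomic position covariance (the gyration
            tensor): square roots of its eigenvalues give the instantaneous extent
            of the cluster along its principal axes."""
            centered = coords - coords.mean(axis=0)
            gyration = centered.T @ centered / len(coords)     # 3x3 tensor
            eigvals = np.linalg.eigvalsh(gyration)             # ascending order
            return np.sqrt(eigvals)

        # Track the descriptor over an MD trajectory (n_frames, n_atoms, 3):
        traj = np.random.rand(500, 38, 3)        # placeholder trajectory
        axes = np.array([shape_descriptor(frame) for frame in traj])
        print("mean principal extents:", axes.mean(axis=0))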

  4. Quantifying and Modelling Long Term Sediment Dynamics in Catchments in Western Europe

    Science.gov (United States)

    Notebaert, B.; De Brue, H.; Verstraeten, G.; Broothaerts, N.

    2015-12-01

    Quantification of sediment dynamics allows one to gain insight into the driving forces and internal dynamics of the sediment cascade system. A useful tool to achieve this is the sediment budget approach, which encompasses the quantification of different sinks and sources. A Holocene time-differentiated sediment budget has been constructed for the Belgian Dijle River catchment (720 km²), based on a large set of field data. The results show how soil erosion is driven by land use changes over longer timescales. Sediment redistribution and the relative importance of the different sinks also vary over time, mainly as a result of changing land use and related landscape connectivity. However, the coarse temporal resolution typically associated with Holocene studies complicates the understanding of sub-millennial scale processes. In a second step, the field-based sediment budget was combined with a modeling approach using Watem/Sedem, a spatially distributed model that simulates soil erosion and colluvial deposition. After validation of the model calibration against the sediment budget, the model was used in a sensitivity analysis. Results confirm the overwhelming influence of human land use on both soil erosion and landscape connectivity, whereas the climatic impact is comparatively small. In addition to catchment-wide simulations, the model also served to test the relative importance of lynchets and dry valleys in different environments. Finally, the geomorphic model was used to simulate past land use, taking into account equifinality. For this purpose, a large series of hypothetical time-independent land use maps of the Dijle catchment was modeled based on a multi-objective allocation algorithm, and applied in Watem/Sedem. Modeled soil erosion and sediment deposition outcomes for each scenario were subsequently compared with the field-based record, taking into account uncertainties. As such, the model allows us to evaluate and select realistic land use scenarios for the Holocene.

  5. Quantifying Thermal Disorder in Metal–Organic Frameworks: Lattice Dynamics and Molecular Dynamics Simulations of Hybrid Formate Perovskites

    Science.gov (United States)

    2016-01-01

    Hybrid organic–inorganic materials are mechanically soft, leading to large thermoelastic effects which can affect properties such as electronic structure and ferroelectric ordering. Here we use a combination of ab initio lattice dynamics and molecular dynamics to study the finite temperature behavior of the hydrazinium and guanidinium formate perovskites, [NH2NH3][Zn(CHO2)3] and [C(NH2)3][Zn(CHO2)3]. Thermal displacement parameters and ellipsoids computed from the phonons and from molecular dynamics trajectories are found to be in good agreement. The hydrazinium compound is ferroelectric at low temperatures, with a calculated spontaneous polarization of 2.6 μC cm–2, but the thermal movement of the cation leads to variations in the instantaneous polarization and eventually breakdown of the ferroelectric order. Contrary to this the guanidinium cation is found to be stationary at all temperatures; however, the movement of the cage atoms leads to variations in the electronic structure and a renormalization in the bandgap from 6.29 eV at 0 K to an average of 5.96 eV at 300 K. We conclude that accounting for temperature is necessary for quantitative modeling of the physical properties of metal–organic frameworks. PMID:28298951

  6. Quantifying sediment dynamics over century and event timescales with Beryllium-10 and Lead-210

    Science.gov (United States)

    Belmont, P.; Willenbring, J.; Schottler, S.

    2010-12-01

    Landscape erosion is unsteady and non-uniform over human timescales. Quantifying that spatial and temporal variability is important for developing an accurate understanding of watershed erosion, as well as useful morphodynamic models that consider erosion, storage, and sediment transport pathways through watersheds. In this study, we have utilized naturally occurring meteoric 10Be and 210Pb to constrain long-term erosion rates and determine the relative importance of different sediment sources in the Le Sueur River watershed, southern Minnesota. Consistently high suspended sediment loads measured in the Le Sueur are the combined result of natural and human-induced processes. Catastrophic baselevel fall of 70 meters that occurred 13,400 years ago initiated rapid river incision with a knickpoint that has propagated 40 km up through the channel network. Over the past 150 years, agriculture has changed the vegetation cover, disturbed soils and profoundly altered watershed hydrology. Primary sediment sources include upland agricultural fields, bluffs and ravines that have resulted from Holocene river incision, and degrading banks and floodplains. Our two tracers provide complementary pieces of information to constrain erosion rates and identify sources. Both tracers exhibit high concentrations in upland soils and low concentrations in bluffs and ravines. Sediment temporarily stored in floodplains is diminished in 210Pb and enriched in 10Be concentration, which allows us to constrain the rate of channel-floodplain exchange. Results from 10Be analysis in the watershed and in the sedimentary record of Lake Pepin, a natural sediment trap downstream, suggest that agriculture has increased landscape erosion rates significantly, but that the relative magnitude of upland erosion compared to other sources has changed over time, with upland contributions being most pronounced in the mid-20th century. Suspended sediment samples analyzed for 10Be and 210Pb from different locations

  7. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Danny; Pollock, Sean; Keall, Paul, E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia); Greer, Peter B. [School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW 2308, Australia and Department of Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); Kim, Taeho [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006, Australia and Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23219 (United States)

    2016-05-15

    Purpose: The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. Methods: The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets at similar displacement and phase (where phase is simply used to determine whether the breathing is inhale to exhale or exhale to inhale) respiratory bins in conjunction with newly acquired central k-space datasets (keyhole). External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study is the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty-four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images at a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. Results: For constant image quality, the dynamic keyhole, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P < 0.0001), respectively, compared to the full k-space image acquisition method. Compared to the conventional keyhole and zero-filling reconstructed images with the keyhole size utilized in the dynamic keyhole
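
    A minimal sketch of the keyhole idea described above: the freshly acquired central phase-encode lines are combined with peripheral k-space taken from a library frame at a matching respiratory bin before inverse Fourier transformation; the keyhole fraction and the bin-matching step are illustrative simplifications rather than the authors' implementation.

        import numpy as np

        def keyhole_reconstruct(keyhole_kspace, library_kspace, keyhole_frac=0.22):
            """Combine the newly acquired central phase-encode lines of
            `keyhole_kspace` with the periphery of a library frame acquired at a
            matching respiratory displacement/phase, then return the magnitude image."""
            ny, nx = library_kspace.shape
            n_central = max(1, int(round(keyhole_frac * ny)))
            lo = ny // 2 - n_central // 2
            hi = lo + n_central
            combined = library_kspace.copy()
            combined[lo:hi, :] = keyhole_kspace[lo:hi, :]       # swap in fresh centre
            image = np.fft.ifft2(np.fft.ifftshift(combined))    # back to image space
            return np.abs(image)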

  8. Study on the Similarity Criteria of Aircraft Structure Temperature/Stress/Dynamic Response

    Science.gov (United States)

    Liu, Lei; Gui, Ye-Wei; Du, Yan-Xia; Geng, Xiang-Ren; Wang, An-Ling

    The performance parameters of the thermal protection system are essential for the design and optimization of high-speed aircraft. Flight-ground conversion is a valid method to provide effective support to the design of the thermal protection structure (TPS), because the performance data of the TPS are generally obtained from wind tunnel tests and must be converted to the corresponding flight environment. In this paper, the similarity parameters of the heat conduction and thermoelasticity equations are studied, the similarity criteria are proposed, and the effectiveness of some of the similarity parameters is calculated and analyzed. The research results indicate that wind tunnel tests can be better designed using the proposed similarity criteria, and that the data obtained from wind tunnel tests can be modified more rationally to accommodate the real flight condition, so as to improve the precision and the efficiency of wind tunnel experiments.

  9. 13C- and 15N-Labeling Strategies Combined with Mass Spectrometry Comprehensively Quantify Phospholipid Dynamics in C. elegans.

    Directory of Open Access Journals (Sweden)

    Blair C R Dancy

    Full Text Available Membranes define cellular and organelle boundaries, a function that is critical to all living systems. Like other biomolecules, membrane lipids are dynamically maintained, but current methods are extremely limited for monitoring lipid dynamics in living animals. We developed novel strategies in C. elegans combining 13C and 15N stable isotopes with mass spectrometry to directly quantify the replenishment rates of the individual fatty acids and intact phospholipids of the membrane. Using multiple measurements of phospholipid dynamics, we found that the phospholipid pools are replaced rapidly and at rates nearly double the turnover measured for neutral lipid populations. In fact, our analysis shows that the majority of membrane lipids are replaced each day. Furthermore, we found that stearoyl-CoA desaturases (SCDs), critical enzymes in polyunsaturated fatty acid production, play an unexpected role in influencing the overall rates of membrane maintenance as SCD depletion affected the turnover of nearly all membrane lipids. Additionally, the compromised membrane maintenance as defined by LC-MS/MS with SCD RNAi resulted in active phospholipid remodeling that we predict is critical to alleviate the impact of reduced membrane maintenance in these animals. Not only have these combined methodologies identified new facets of the impact of SCDs on the membrane, but they also have great potential to reveal many undiscovered regulators of phospholipid metabolism.

  10. Quantifying the Geomorphic Dynamics of the Extensively Impacted Lower Yuba River

    Science.gov (United States)

    Wyrick, J. R.; Pasternack, G. B.; Carley, J. K.; Barker, R.; Massa, D.; Bratovich, P.; Reedy, G.; Johnson, T.

    2010-12-01

    Traditionally it has been thought that rivers possess the capability of adjusting their attributes to accommodate varying flow and sediment transport regimes so that sediment in- and out-fluxes are balanced and landform conditions are “stable”. In reality, however, geomorphic drivers and boundary conditions are much more independently dynamic than classically envisioned, such that landforms may always be in a state of adjustment that is normal and appropriate. Rather than thinking of landforms as stable, it is more appropriate to think of them, and the ecosystem services with which they are associated, as resilient in response to change. Knowledge of historic, pre-human baseline conditions or regional reference conditions is limited and may not be as applicable in understanding natural geomorphic and ecosystem services as once envisioned. In light of this natural complexity, a geomorphic assessment of conditions after a large dam or other facility is built and operated may not be as simple as documenting geomorphic instability and attributing that to human impacts relative to the presumed stable baseline conditions. Rather than compare anthropogenically-impacted conditions to theoretical baseline or reference conditions, a more effective approach is to deduce the geomorphic processes in a system under different regimes and evaluate the implications for resiliency of ecosystem services. Through a mechanistic understanding of environmental systems, it may be possible to rationally rehabilitate an ecosystem to achieve resiliency in cases where it has been lost or is desirable to instill, even if it was not historically present. This analytic paradigm is being used to assess the history and on-going geomorphic dynamism of the lower Yuba River (LYR) in northern California. Despite a legacy of massive hydraulic mining waste deposition, dredger re-working of the river valley, dam construction, and flow regulation, the river has been described as lacking the

  11. Dynamics in cardiometabolic risk among Turkish adults: Similarities to that in Iranians?

    Directory of Open Access Journals (Sweden)

    Altan Onat

    2011-01-01

    The author strongly suspects that such dynamics in the development of diabetes and CHD exist in Western adults prone to impaired glucose tolerance, and evidence is accumulating regarding the general Iranian adult population. These issues, which pose a vast threat to public cardiometabolic health, will have to be recognized so as not to delay the implementation of measures for the modification of cardiometabolic risk, especially in women.

  12. A comparison of time series similarity measures for classification and change detection of ecosystem dynamics

    NARCIS (Netherlands)

    Lhermitte, S.; Verbesselt, J.; Verstraeten, W.W.; Coppin, P.

    2011-01-01

    Time series of remote sensing imagery or derived vegetation indices and biophysical products have been shown to be particularly useful to characterize land ecosystem dynamics. Various methods have been developed based on temporal trajectory analysis to characterize, classify and detect changes in ecosystem dynamics.

  13. Correlation between the Quantifiable Parameters of Whole Solitary Pulmonary Nodules Perfusion Imaging Derived with Dynamic CT and Nodules Size

    Directory of Open Access Journals (Sweden)

    Shiyuan LIU

    2009-05-01

    Full Text Available Background and objective: The solitary pulmonary nodule (SPN) is one of the most common findings on chest radiographs. Blood flow patterns have previously been studied only at the single largest SPN level; such an assessment samples only a limited part of the entire region of interest (ROI) and is unrepresentative of the SPN as a volume. Ideally, SPN volume perfusion should be measured. The aim of this study is to evaluate the correlation between the quantifiable parameters of SPN volume perfusion imaging derived with 16-slice and 64-slice spiral CT and nodule size. Methods: Sixty-five patients with SPNs (diameter ≤3 cm; 42 malignant; 12 active inflammatory; 11 benign) underwent multi-location dynamic contrast material-enhanced serial CT scanning with a stationary table. The mean values of valid sections were calculated as the quantifiable parameters of SPN volume perfusion imaging derived with 16-slice and 64-slice spiral CT. The correlation between these parameters and nodule size was assessed by means of linear regression analysis. Results: No significant correlations were found between nodule size and each of the peak height (PHSPN: 32.15 Hu ± 14.55 Hu), the ratio of peak height of the SPN to that of the aorta (SPN-to-A ratio: 13.20% ± 6.18%), perfusion (PSPN: 29.79 ± 19.12 mL min-1 100 g-1) and mean transit time (12.95 ± 6.53 s) (r=0.081, P=0.419; r=0.089, P=0.487; r=0.167, P=0.077; r=0.023, P=0.880). Conclusion: No significant correlations were found between the quantifiable parameters of SPN volume perfusion imaging derived with 16-slice and 64-slice spiral CT and nodule size.

  14. Quantifying the spatiotemporal dynamics in a chorus frog (Pseudacris) hybrid zone over 30 years.

    Science.gov (United States)

    Engebretsen, Kristin N; Barrow, Lisa N; Rittmeyer, Eric N; Brown, Jeremy M; Moriarty Lemmon, Emily

    2016-07-01

    Although theory suggests that hybrid zones can move or change structure over time, studies supported by direct empirical evidence for these changes are relatively limited. We present a spatiotemporal genetic study of a hybrid zone between Pseudacris nigrita and P. fouquettei across the Pearl River between Louisiana and Mississippi. This hybrid zone was initially characterized in 1980 as a narrow and steep "tension zone," in which hybrid populations were inferior to parentals and were maintained through a balance between selection and dispersal. We reanalyzed historical tissue samples and compared them to samples of recently collected individuals using microsatellites. Clinal analyses indicate that the cline has not shifted in roughly 30 years but has widened significantly. Anthropogenic and natural changes may have affected selective pressure or dispersal, and our results suggest that the zone may no longer best be described as a tension zone. To the best of our knowledge, this study provides the first evidence of significant widening of a hybrid cline but stasis of its center. Continued empirical study of dynamic hybrid zones will provide insight into the forces shaping their structure and the evolutionary potential they possess for the elimination or generation of species.

  15. Using task dynamics to quantify the affordances of throwing for long distance and accuracy.

    Science.gov (United States)

    Wilson, Andrew D; Weightman, Andrew; Bingham, Geoffrey P; Zhu, Qin

    2016-07-01

    In 2 experiments, the current study explored how affordances structure throwing for long distance and accuracy. In Experiment 1, 10 expert throwers (from baseball, softball, and cricket) threw regulation tennis balls to hit a vertically oriented 4 ft × 4 ft target placed at each of 9 locations (3 distances × 3 heights). We measured their release parameters (angle, speed, and height) and showed that they scaled their throws in response to changes in the target's location. We then simulated the projectile motion of the ball and identified a continuous subspace of release parameters that produce hits to each target location. Each subspace describes the affordance of our target to be hit by a tennis ball moving in a projectile motion to the relevant location. The simulated affordance spaces showed how the release parameter combinations required for hits changed with changes in the target location. The experts tracked these changes in their performance and were successful in hitting the targets. We next tested unusual (horizontal) targets that generated correspondingly different affordance subspaces to determine whether the experts would track the affordance to generate successful hits. Do the experts perceive the affordance? They do. In Experiment 2, 5 cricketers threw to hit either vertically or horizontally oriented targets and successfully hit both, exhibiting release parameters located within the requisite affordance subspaces. We advocate a task dynamical approach to the study of affordances as properties of objects and events in the context of tasks as the future of research in this area. (PsycINFO Database Record

  16. Quantifying Transient States in Materials with the Dynamic Transmission Electron Microscope

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, G; LaGrange, T; Kim, J; Reed, B; Browning, N

    2009-09-21

    The Dynamic Transmission Electron Microscope (DTEM) offers a means of capturing rapid evolution in a specimen through in-situ microscopy experiments by allowing 15 ns electron micrograph exposure times. The rapid exposure time is enabled by creating a burst of electrons at the emitter by ultraviolet pulsed laser illumination. This burst arrives a specified time after a second laser initiates the specimen reaction. The timing of the two Q-switched lasers is controlled by high-speed pulse generators with a timing error much less than the pulse duration. Both diffraction and imaging experiments can be performed, just as in a conventional TEM. The brightness of the emitter and the total current control the spatial and temporal resolutions. We have demonstrated 7 nm spatial resolution in single 15 ns pulsed images. These single-pulse imaging experiments have been used to study martensitic transformations, nucleation and crystallization of an amorphous metal, and rapid chemical reactions. Measurements have been performed on these systems that are possible by no other experimental approaches currently available.

  17. Quantifying the impact of woodpecker predation on population dynamics of the emerald ash borer (Agrilus planipennis.

    Directory of Open Access Journals (Sweden)

    David E Jennings

    Full Text Available The emerald ash borer (EAB), Agrilus planipennis, is an invasive beetle that has killed millions of ash trees (Fraxinus spp.) since it was accidentally introduced to North America in the 1990s. Understanding how predators such as woodpeckers (Picidae) affect the population dynamics of EAB should enable us to more effectively manage the spread of this beetle, and toward this end we combined two experimental approaches to elucidate the relative importance of woodpecker predation on EAB populations. First, we examined wild populations of EAB in ash trees in New York, with each tree having a section screened to exclude woodpeckers. Second, we established experimental cohorts of EAB in ash trees in Maryland, and the cohorts on half of these trees were caged to exclude woodpeckers. The following spring these trees were debarked and the fates of the EAB larvae were determined. We found that trees from which woodpeckers were excluded consistently had significantly lower levels of predation, and that woodpecker predation comprised a greater source of mortality at sites with a more established wild infestation of EAB. Additionally, there was a considerable difference between New York and Maryland in the effect that woodpecker predation had on EAB population growth, suggesting that predation alone may not be a substantial factor in controlling EAB. In our experimental cohorts we also observed that trees from which woodpeckers were excluded had a significantly higher level of parasitism. The lower level of parasitism on EAB larvae found when exposed to woodpeckers has implications for EAB biological control, suggesting that it might be prudent to exclude woodpeckers from trees when attempting to establish parasitoid populations. Future studies may include utilizing EAB larval cohorts with a range of densities to explore the functional response of woodpeckers.

  18. Quantifying the impact of woodpecker predation on population dynamics of the emerald ash borer (Agrilus planipennis).

    Science.gov (United States)

    Jennings, David E; Gould, Juli R; Vandenberg, John D; Duan, Jian J; Shrewsbury, Paula M

    2013-01-01

    The emerald ash borer (EAB), Agrilus planipennis, is an invasive beetle that has killed millions of ash trees (Fraxinus spp.) since it was accidentally introduced to North America in the 1990s. Understanding how predators such as woodpeckers (Picidae) affect the population dynamics of EAB should enable us to more effectively manage the spread of this beetle, and toward this end we combined two experimental approaches to elucidate the relative importance of woodpecker predation on EAB populations. First, we examined wild populations of EAB in ash trees in New York, with each tree having a section screened to exclude woodpeckers. Second, we established experimental cohorts of EAB in ash trees in Maryland, and the cohorts on half of these trees were caged to exclude woodpeckers. The following spring these trees were debarked and the fates of the EAB larvae were determined. We found that trees from which woodpeckers were excluded consistently had significantly lower levels of predation, and that woodpecker predation comprised a greater source of mortality at sites with a more established wild infestation of EAB. Additionally, there was a considerable difference between New York and Maryland in the effect that woodpecker predation had on EAB population growth, suggesting that predation alone may not be a substantial factor in controlling EAB. In our experimental cohorts we also observed that trees from which woodpeckers were excluded had a significantly higher level of parasitism. The lower level of parasitism on EAB larvae found when exposed to woodpeckers has implications for EAB biological control, suggesting that it might be prudent to exclude woodpeckers from trees when attempting to establish parasitoid populations. Future studies may include utilizing EAB larval cohorts with a range of densities to explore the functional response of woodpeckers.

  19. Integrated Analysis of Interferometric SAR, Satellite Altimetry and Hydraulic Modeling to Quantify Louisiana Wetland Dynamics

    Science.gov (United States)

    Lee, Hyongki; Kim, Jin-woo; Lu, Zhong; Jung, Hahn Chul; Shum, C. K.; Alsdorf, Doug

    2012-01-01

    Wetland loss in Louisiana has been accelerating due primarily to anthropogenic and natural processes, and is recognized as a problem of national importance. Accurate measurement or modeling of wetland-wide water level changes, their varying extent, and their storage and discharge changes resulting in part from sediment loads, erosion, and subsidence is fundamental to assessment of hurricane-induced flood hazards and wetland ecology. Here, we use an innovative method to integrate interferometric SAR (InSAR) and satellite radar altimetry for measuring absolute or geocentric water level changes, and we applied the methodology to remote areas of swamp forest in coastal Louisiana. Coherence analysis of InSAR pairs suggested that the HH polarization is preferred for this type of observation, and polarimetric analysis can help to identify double-bounce backscattering areas in the wetland. Envisat radar altimeter-measured 18-Hz (along-track sampling of 417 m) water level data processed with a regional stackfile method have been used to provide vertical references for water bodies separated by levees. The high-resolution (approx. 40 m) relative water changes measured from ALOS PALSAR L-band and Radarsat-1 C-band InSAR are then integrated with Envisat radar altimetry to obtain absolute water level. The resulting water level time series were validated with in situ gauge observations within the swamp forest. Furthermore, we compare our water elevation changes with 2D flood modeling from the LISFLOOD hydrodynamic model. Our study demonstrates that this new technique allows retrospective reconstruction and concurrent monitoring of water conditions and flow dynamics in wetlands, especially those lacking gauge networks.

  20. Extracting key information from historical data to quantify the transmission dynamics of smallpox

    Directory of Open Access Journals (Sweden)

    Brockmann Stefan O

    2008-08-01

    Full Text Available Abstract Background Quantification of the transmission dynamics of smallpox is crucial for optimizing intervention strategies in the event of a bioterrorist attack. This article reviews basic methods and findings in mathematical and statistical studies of smallpox which estimate key transmission parameters from historical data. Main findings First, critically important aspects in extracting key information from historical data are briefly summarized. We mention different sources of heterogeneity and potential pitfalls in utilizing historical records. Second, we discuss how smallpox spreads in the absence of interventions and how the optimal timing of quarantine and isolation measures can be determined. Case studies demonstrate the following. (1) The upper confidence limit of the 99th percentile of the incubation period is 22.2 days, suggesting that quarantine should last 23 days. (2) The highest frequency (61.8%) of secondary transmissions occurs 3–5 days after onset of fever so that infected individuals should be isolated before the appearance of rash. (3) The U-shaped age-specific case fatality implies a vulnerability of infants and elderly among non-immune individuals. Estimates of the transmission potential are subsequently reviewed, followed by an assessment of vaccination effects and of the expected effectiveness of interventions. Conclusion Current debates on bio-terrorism preparedness indicate that public health decision making must account for the complex interplay and balance between vaccination strategies and other public health measures (e.g. case isolation and contact tracing), taking into account the frequency of adverse events to vaccination. In this review, we summarize what has already been clarified and point out needs to analyze previous smallpox outbreaks systematically.
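
    The first case-study number quoted above is a percentile of a fitted incubation-period distribution. The sketch below shows, under the assumption of a lognormal incubation period fitted to hypothetical onset data, how such a percentile point estimate (and hence a quarantine length) could be computed; the paper additionally reports an upper confidence limit, and the data and distribution choice here are illustrative, not the historical smallpox records.

        import math
        import numpy as np
        from scipy import stats

        # Hypothetical incubation periods in days (NOT historical smallpox records).
        incubation_days = np.array([10, 11, 12, 12, 13, 13, 14, 14, 15, 16, 17, 19])

        # Fit a lognormal distribution (a common choice for incubation periods).
        shape, loc, scale = stats.lognorm.fit(incubation_days, floc=0)
        p99 = stats.lognorm.ppf(0.99, shape, loc=loc, scale=scale)

        # A conservative quarantine length rounds the 99th percentile up to whole days.
        print(f"99th percentile = {p99:.1f} days -> quarantine for {math.ceil(p99)} days")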

  1. Methyl mercury dynamics in a tidal wetland quantified using in situ optical measurements

    Science.gov (United States)

    Bergamaschi, B.A.; Fleck, J.A.; Downing, B.D.; Boss, E.; Pellerin, B.; Ganju, N.K.; Schoellhamer, D.H.; Byington, A.A.; Heim, W.A.; Stephenson, M.; Fujii, R.

    2011-01-01

    We assessed monomethylmercury (MeHg) dynamics in a tidal wetland over three seasons using a novel method that employs a combination of in situ optical measurements as concentration proxies. MeHg concentrations measured over a single spring tide were extended to a concentration time series using in situ optical measurements. Tidal fluxes were calculated using modeled concentrations and bi-directional velocities obtained acoustically. The magnitude of the flux was the result of complex interactions of tides, geomorphic features, particle sorption, and random episodic events such as wind storms and precipitation. Correlation of dissolved organic matter quality measurements with timing of MeHg release suggests that MeHg is produced in areas of fluctuating redox and not limited by buildup of sulfide. The wetland was a net source of MeHg to the estuary in all seasons, with particulate flux being much higher than dissolved flux, even though dissolved concentrations were commonly higher. Estimated total MeHg yields out of the wetland were approximately 2.5 μg m⁻² yr⁻¹ (4–40 times previously published yields), representing a potential loading to the estuary of 80 g yr⁻¹, equivalent to 3% of the river loading. Thus, export from tidal wetlands should be included in mass balance estimates for MeHg loading to estuaries. Also, adequate estimation of loads and the interactions between physical and biogeochemical processes in tidal wetlands might not be possible without long-term, high-frequency in situ measurements.
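
    Tidal fluxes of the kind described above are obtained by combining a concentration time series with bidirectional discharge and integrating over time. A minimal sketch of that bookkeeping is shown below; the series lengths, the sign convention (positive discharge = ebb, out of the wetland), and all numbers are illustrative assumptions, not the study's measurements.

        import numpy as np

        # Hypothetical 15-minute time series over one tidal day (NOT the study data).
        dt_s = 900.0                                                     # time step, seconds
        t = np.arange(0, 24 * 3600, dt_s)
        discharge_m3s = 12.0 * np.sin(2 * np.pi * t / (12.42 * 3600))    # + = ebb (export)
        mehg_ng_per_L = 0.15 + 0.05 * (discharge_m3s > 0)                # proxy-derived concentration

        # Instantaneous flux (ng/s) = concentration (ng/L) * discharge (m^3/s) * 1000 L/m^3;
        # integrating over the record gives the net tidal export.
        flux_ng_s = mehg_ng_per_L * discharge_m3s * 1000.0
        net_export_g = np.sum(flux_ng_s) * dt_s * 1e-9
        print(f"net MeHg export over the record: {net_export_g:.3f} g")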

  2. SELF-SIMILAR SOLUTIONS OF FRACTURE DYNAMICS PROBLEMS ON AXIALLY SYMMETRY

    Institute of Scientific and Technical Information of China (English)

    吕念春; 程靳; 程云虹; 屈德志

    2001-01-01

    Using the theory of complex functions, axially symmetric propagation problems for a penny-shaped crack in composite materials were studied. General representations of the analytical solutions with an arbitrary index of self-similarity were presented for axially symmetric fracture elastodynamics problems by means of self-similarity under ladder-shaped loads. The problems dealt with can be transformed into Riemann-Hilbert problems, and their closed analytical solutions are obtained rather simply by this method. By applying these analytical solutions together with the rotational superposition theorem and the Smirnov-Sobolev method, the solutions of arbitrarily complicated problems can be obtained.

  3. Spatiotemporal dynamics of similarity-based neural representations of facial identity.

    Science.gov (United States)

    Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2017-01-10

    Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
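
    Representational similarity analysis of the kind used here reduces to comparing a neural representational dissimilarity matrix (RDM) with model RDMs, usually with a rank correlation over the lower triangle. The snippet below shows that comparison on small random placeholder matrices; the RDMs are stand-ins, not the MEG or model data from the study.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)

        def random_rdm(n):
            """Symmetric placeholder RDM with a zero diagonal."""
            m = rng.random((n, n))
            rdm = 0.5 * (m + m.T)
            np.fill_diagonal(rdm, 0.0)
            return rdm

        def lower_triangle(rdm):
            i, j = np.tril_indices(rdm.shape[0], k=-1)
            return rdm[i, j]

        n_stimuli = 20
        neural_rdm   = random_rdm(n_stimuli)      # placeholder for the MEG-derived RDM
        image_rdm    = random_rdm(n_stimuli)      # placeholder image-based model RDM
        identity_rdm = random_rdm(n_stimuli)      # placeholder identity-based model RDM

        for name, model in [("image-based", image_rdm), ("identity-based", identity_rdm)]:
            rho, p = spearmanr(lower_triangle(neural_rdm), lower_triangle(model))
            print(f"{name:>14} model: Spearman rho = {rho:.2f} (p = {p:.2f})")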

  4. Quantifying the Impacts of Environmental Factors on Vegetation Dynamics over Climatic and Management Gradients of Central Asia

    Directory of Open Access Journals (Sweden)

    Olena Dubovyk

    2016-07-01

    Full Text Available Currently there is a lack of quantitative information regarding the driving factors of vegetation dynamics in post-Soviet Central Asia. Insufficient knowledge also exists concerning vegetation variability across sub-humid to arid climatic gradients as well as vegetation response to different land uses, from natural rangelands to intensively irrigated croplands. In this study, we analyzed the environmental drivers of vegetation dynamics in five Central Asian countries by coupling the key vegetation parameter "overall greenness", derived from Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) time series data, with its possible factors across various management and climatic gradients. We developed nine generalized least-squares random effect (GLS-RE) models to analyze the relative impact of environmental factors on vegetation dynamics. The obtained results quantitatively indicated the extensive control of climatic factors on managed and unmanaged vegetation cover across Central Asia. The most diverse vegetation dynamics response to climatic variables was observed for "intensively managed irrigated croplands". Almost no differences in response to these variables were detected for managed non-irrigated vegetation and unmanaged (natural) vegetation across all countries. Natural vegetation and rainfed non-irrigated crop dynamics were principally associated with temperature and precipitation parameters. Variables related to temperature had the greatest relative effect on irrigated croplands and on vegetation cover within the mountainous zone. Further research should focus on incorporating the socio-economic factors discussed here in a similar analysis.

  5. Entropy measures, entropy estimators, and their performance in quantifying complex dynamics: Effects of artifacts, nonstationarity, and long-range correlations

    Science.gov (United States)

    Xiong, Wanting; Faes, Luca; Ivanov, Plamen Ch.

    2017-06-01

    Entropy measures are widely applied to quantify the complexity of dynamical systems in diverse fields. However, the practical application of entropy methods is challenging, due to the variety of entropy measures and estimators and the complexity of real-world time series, including nonstationarities and long-range correlations (LRC). We conduct a systematic study on the performance, bias, and limitations of three basic measures (entropy, conditional entropy, information storage) and three traditionally used estimators (linear, kernel, nearest neighbor). We investigate the dependence of entropy measures on estimator- and process-specific parameters, and we show the effects of three types of nonstationarities due to artifacts (trends, spikes, local variance change) in simulations of stochastic autoregressive processes. We also analyze the impact of LRC on the theoretical and estimated values of entropy measures. Finally, we apply entropy methods on heart rate variability data from subjects in different physiological states and clinical conditions. We find that entropy measures can only differentiate changes of specific types in cardiac dynamics and that appropriate preprocessing is vital for correct estimation and interpretation. Demonstrating the limitations of entropy methods and shedding light on how to mitigate bias and provide correct interpretations of results, this work can serve as a comprehensive reference for the application of entropy methods and the evaluation of existing studies.
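
    As a toy illustration of how estimator choices enter such an analysis, the sketch below estimates entropy, conditional entropy, and information storage (their difference) for a simulated AR(1) series with a simple binned (plug-in) estimator. The process parameters, bin count, and estimator are illustrative choices, not those benchmarked in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated AR(1) process x_t = a*x_{t-1} + noise, standing in for a physiological series.
        a, n = 0.8, 20000
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = a * x[t - 1] + rng.standard_normal()

        def binned_entropy(samples, bins, ranges):
            """Plug-in (histogram) estimate of Shannon entropy, in nats."""
            counts, _ = np.histogramdd(samples, bins=bins, range=ranges)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log(p))

        r1 = [(-6.0, 6.0)]
        h_now   = binned_entropy(x[1:, None],  bins=[30], ranges=r1)                # H(X_t)
        h_past  = binned_entropy(x[:-1, None], bins=[30], ranges=r1)                # H(X_{t-1})
        h_joint = binned_entropy(np.column_stack([x[1:], x[:-1]]), [30, 30], r1 * 2)

        h_cond  = h_joint - h_past     # conditional entropy H(X_t | X_{t-1})
        storage = h_now - h_cond       # information storage (shared with the past)
        print(f"H = {h_now:.2f}, conditional H = {h_cond:.2f}, storage = {storage:.2f} nats")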

  6. Quantifying Grassland-to-Woodland Transitions and the Implications for Carbon and Nitrogen Dynamics in the Southwest United States

    Science.gov (United States)

    Wessman, Carol A.; Archer, Steven R.; Asner, Gregory P.; Bateson, C. Ann

    2004-01-01

    Replacement of grasslands and savannas by shrublands and woodlands has been widely reported in tropical, temperate and high-latitude rangelands worldwide (Archer 1994). These changes in vegetation structure may reflect historical shifts in climate and land use; and are likely to influence biodiversity, productivity, above- and below ground carbon and nitrogen sequestration and biophysical aspects of land surface-atmosphere interactions. The goal of our proposed research is to investigate how changes in the relative abundance of herbaceous and woody vegetation affect carbon and nitrogen dynamics across heterogeneous savannas and shrub/woodlands. By linking actual land-cover composition (derived through spectral mixture analysis of AVIRIS, TM, and AVHRR imagery) with a process-based ecosystem model, we will generate explicit predictions of the C and N storage in plants and soils resulting from changes in vegetation structure. Our specific objectives will be to (1) continue development and test applications of spectral mixture analysis across grassland-to-woodland transitions; (2) quantify temporal changes in plant and soil C and N storage and turnover for remote sensing and process model parameterization and verification; and (3) couple landscape fraction maps to an ecosystem simulation model to observe biogeochemical dynamics under changing landscape structure and climatological forcings.
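
    Spectral mixture analysis, mentioned above as the tool for deriving land-cover fractions, models each pixel spectrum as a non-negative combination of endmember spectra. The sketch below solves that unmixing for one pixel with non-negative least squares; the endmember classes, band count, and all reflectance values are invented for illustration, not AVIRIS, TM, or AVHRR data.

        import numpy as np
        from scipy.optimize import nnls

        # Made-up reflectance spectra over 6 bands (columns after transpose: endmembers).
        endmembers = np.array([
            [0.04, 0.06, 0.05, 0.45, 0.30, 0.20],   # woody vegetation
            [0.05, 0.08, 0.07, 0.55, 0.40, 0.28],   # herbaceous vegetation
            [0.15, 0.20, 0.25, 0.30, 0.35, 0.38],   # bare soil
        ]).T                                         # shape (bands, endmembers)

        pixel = np.array([0.08, 0.11, 0.12, 0.43, 0.35, 0.29])   # observed mixed pixel

        # Non-negative least squares gives the endmember abundances; normalising them
        # to sum to one yields the per-pixel cover fractions used to drive a model.
        abundances, residual = nnls(endmembers, pixel)
        fractions = abundances / abundances.sum()
        print(dict(zip(["woody", "herbaceous", "soil"], fractions.round(2))))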

  7. Quantifying the impacts of land surface schemes and dynamic vegetation on the model dependency of projected changes in surface energy and water budgets

    Science.gov (United States)

    Yu, Miao; Wang, Guiling; Chen, Haishan

    2016-03-01

    Assessing and quantifying the uncertainties in projected future changes of energy and water budgets over land surface are important steps toward improving our confidence in climate change projections. In this study, the contribution of land surface models to the inter-GCM variation of projected future changes in land surface energy and water fluxes are assessed based on output from 19 global climate models (GCMs) and offline Community Land Model version 4 (CLM4) simulations driven by meteorological forcing from the 19 GCMs. Similar offline simulations using CLM4 with its dynamic vegetation submodel are also conducted to investigate how dynamic vegetation feedback, a process that is being added to more earth system models, may amplify or moderate the intermodel variations of projected future changes. Projected changes are quantified as the difference between the 2081-2100 period from the Representative Concentration Pathway 8.5 (RCP8.5) future experiment and the 1981-2000 period from the historical simulation. Under RCP8.5, projected changes in surface water and heat fluxes show a high degree of model dependency across the globe. Although precipitation is very likely to increase in the high latitudes of the Northern Hemisphere, a high degree of model-related uncertainty exists for evapotranspiration, soil water content, and surface runoff, suggesting discrepancy among land surface models (LSMs) in simulating the surface hydrological processes and snow-related processes. Large model-related uncertainties for the surface water budget also exist in the Tropics including southeastern South America and Central Africa. These uncertainties would be reduced in the hypothetical scenario of a single near-perfect land surface model being used across all GCMs, suggesting the potential to reduce uncertainties through the use of more consistent approaches toward land surface model development. Under such a scenario, the most significant reduction is likely to be seen in the

  8. A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification.

    Directory of Open Access Journals (Sweden)

    Blair Kaneshiro

    Full Text Available The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single
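
    One detail worth making concrete is how a multi-class confusion matrix becomes a dissimilarity matrix: classes that the decoder confuses often are treated as representationally close. A small hedged sketch of one common symmetrisation is given below; the confusion counts are invented, and other conversions are equally possible.

        import numpy as np

        # Invented confusion counts from a 4-class single-trial classifier
        # (rows: true class, columns: predicted class).
        confusions = np.array([
            [50,  5,  3,  2],
            [ 6, 47,  4,  3],
            [ 2,  5, 44,  9],
            [ 1,  4, 10, 45],
        ])

        # Row-normalise to confusion probabilities, symmetrise, and convert to
        # dissimilarity: classes that are rarely confused end up far apart.
        p = confusions / confusions.sum(axis=1, keepdims=True)
        similarity = 0.5 * (p + p.T)
        rdm = 1.0 - similarity
        np.fill_diagonal(rdm, 0.0)
        print(np.round(rdm, 2))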

  9. Dynamical Topological Symmetry Breaking as the Origin of Turbulence, Non-Markovianity, and Self-Similarity

    CERN Document Server

    Ovchinnikov, Igor V

    2012-01-01

    Here it is shown that the most general Parisi-Sourlas-Wu stochastic quantization procedure applied to any stochastic differential equation (SDE) leads to a Witten-type topological field theory - a model with a global topological Becchi-Rouet-Stora-Tyutin supersymmetry (Q-symmetry). Q-symmetry can be dynamically broken only by (anti-)instantons - ultimately nonlinear sudden tunneling processes of (creation)annihilation of solitons, e.g., avalanches in self-organized criticality (SOC) or (creation)annihilation of vortices in turbulent water. The phases with unbroken Q-symmetry are essentially markovian and can be understood solely in terms of the conventional Fokker-Plank evolution of the probability density. For these phases, Ito interpretation of SDEs and/or Martin-Siggia-Rose approximation of the stochastic quantization are applicable. SOC, turbulence, glasses, quenches etc. constitute the "generalized turbulence" category of stochastic phases with broken Q-symmetry. In this category, (anti-)instantons conde...

  10. Quantifying spatial and temporal discharge dynamics of an event in a first order stream, using Distributed Temperature Sensing

    Directory of Open Access Journals (Sweden)

    M. C. Westhoff

    2011-03-01

    Full Text Available Understanding spatial distribution of discharge can be important for water quality and quantity modeling. Non-steady flood waves can influence small headwater streams significantly, particularly as a result of short high intensity summer rainstorms. The aim of this paper is to quantify the spatial and temporal dynamics of stream flow in a headwater catchment during a summer rainstorm. These dynamics include gains and losses of stream water, the effect of bypasses that become active and hyporheic exchange fluxes that may vary over time as a function of discharge. We use an advection-dispersion model coupled with an energy balance model to simulate in-stream water temperature, which we confront with high resolution temperature observations obtained with Distributed Temperature Sensing. This model was used as a learning tool to stepwise unravel the complex puzzle of in-stream processes subject to varying discharge. Hypotheses were tested and rejected, which led to more insight in spatial and temporal dynamics in discharge and hyporheic exchange processes. We showed that infiltration losses increase during a rain event, while gains of water remained constant over time. We conclude that, eventually, part of the stream water bypassed the main channel during peak discharge. It also seems that hyporheic exchange varies with varying discharge in the first 250 of the stream, while further downstream it remains constant. Because we relied on solar radiation as the main energy input, we were only able to apply this method during a small event and low flow. However, when additional (artificial) energy is available, the presented method is also applicable in larger streams, or during higher flow conditions.

  11. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging

    Science.gov (United States)

    Hsu, Vivian M.; Wes, Ari M.; Tahiri, Youssef; Cornman-Homonoff, Joshua

    2014-01-01

    Background: The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Methods: Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. Results: The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region while lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). Conclusions: This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time. PMID:25426394

  12. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging.

    Science.gov (United States)

    Hsu, Vivian M; Wes, Ari M; Tahiri, Youssef; Cornman-Homonoff, Joshua; Percec, Ivona

    2014-09-01

    The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region while lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  13. Syntactic computations in the language network: Characterising dynamic network properties using representational similarity analysis

    Directory of Open Access Journals (Sweden)

    Lorraine Komisarjevsky Tyler

    2013-05-01

    Full Text Available The core human capacity of syntactic analysis involves a left hemisphere network involving the left inferior frontal gyrus (LIFG) and left posterior middle temporal gyrus (LpMTG) and the anatomical connections between them. Here we use MEG to determine the spatio-temporal properties of syntactic computations in this network. Listeners heard spoken sentences containing a local syntactic ambiguity (e.g. …landing planes…), at the offset of which they heard a disambiguating verb and decided whether it was an acceptable/unacceptable continuation of the sentence. We charted the time-course of processing and resolving syntactic ambiguity by measuring MEG responses from the onset of each word in the ambiguous phrase and the disambiguating word. We used representational similarity analysis (RSA) to characterize syntactic information represented in the LIFG and LpMTG over time and to investigate their relationship to each other. Testing a variety of lexico-syntactic and ambiguity models against the MEG data, our results suggest early lexico-syntactic responses in the LpMTG and later effects of ambiguity in the LIFG, pointing to a clear differentiation in the functional roles of these two regions. Our results suggest the LpMTG represents and transmits lexical information to the LIFG, which responds to and resolves the ambiguity.

  14. Quantifying the patterns and dynamics of river deltas under conditions of steady forcing and relative sea level rise

    Science.gov (United States)

    Liang, Man; Van Dyk, Corey; Passalacqua, Paola

    2016-02-01

    Understanding deltaic channel dynamics is essential to acquiring knowledge on how deltas respond to environmental changes, as channels control the distribution of water, sediment, and nutrients. Channel-resolving morphodynamic models provide the basis for quantitative study of channel-scale dynamics, but they need to be properly assessed with a set of robust metrics able to quantitatively characterize delta patterns and dynamics before being used as predictive tools. In this work we use metrics developed in the context of delta formation, to assess the morphodynamic results of DeltaRCM, a parcel-based cellular model for delta formation and evolution. By comparing model results to theoretical predictions and field and experimental observations, we show that DeltaRCM captures the geometric growth characteristics of deltas such as fractality of channel network, spatial distribution of wet and dry surfaces, and temporal dynamics of channel-scale processes such as the decay of channel planform correlation. After evaluating the ability of DeltaRCM to produce delta patterns and dynamics at the scale of channel processes, we use the model to predict the deltaic response to relative sea level rise (RSLR). We show that uniform subsidence and absolute sea level rise have similar effects on delta evolution and cause intensified channel branching. Channel network fractality and channel mobility increase with higher-RSLR rates, while the spatial and temporal scales of avulsion events decrease, resulting in smaller sand bodies in the stratigraphy. Our modeling results provide the first set of quantitative predictions of the effects of RSLR on river deltas with a specific focus on the distributary channel network.

  15. Dynamic Strength Evaluations for Self-Piercing Rivets and Resistance Spot Welds Joining Similar and Dissimilar Metals

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xin; Khaleel, Mohammad A.

    2007-10-01

    This paper summarizes the dynamic joint strength evaluation procedures and the measured dynamic strength data for thirteen joint populations of self-piercing rivets (SPR) and resistance spot welds (RSW) joining similar and dissimilar metals. A state-of-the-art review of the current practice for conducting dynamic tensile/compressive strength tests in different strain rate regimes is first presented, and the generic issues associated with dynamic strength test are addressed. Then, the joint strength testing procedures and fixture designs used in the current study are described, and the typical load versus displacement curves under different loading configurations are presented. Uniqueness of the current data compared with data in the open literature is discussed. The experimental results for all the joint populations indicate that joint strength increases with increasing loading rate. However, the strength increase from 4.47m/s (10mph) to 8.94m/s (20mph) is not as significant as the strength increase from static to 4.47m/s. It is also found that with increasing loading velocity, displacement to failure decreases for all the joint samples. Therefore, “brittleness” of the joint sample increases with impact velocity. Detailed static and dynamic strength data and the associated energy absorption levels for all the samples in the thirteen joint populations are also included.

  16. Quantifying and Contrasting Spatial and Temporal Dynamics of Vadose Zone Moisture under Different Vegetation Types Using Electrical Resistivity Imaging

    Science.gov (United States)

    Sharma Acharya, B.; Zou, C.; Halihan, T.

    2016-12-01

    Spatial-temporal dynamics of vadose zone moisture is important for assessing deep drainage in water-limited ecosystems. Time-lapse electrical resistivity imaging (ERI) was used to monitor vadose zone moisture to a depth of 9 m in a tallgrass prairie and a prairie heavily encroached by a juniper species (Juniperus virginiana) in the south-central Great Plains, US. Resistivity images were converted to volumetric water content images based on a site-specific relationship between volumetric water content and inverted resistivity values. Results show (a) vegetation induced vertical soil moisture profiling in the vadose zone, (b) increased spatial-temporal variability in rooting zone conductivity under the juniper-encroached site compared with the tallgrass prairie site, and (c) two-layered conductivity profiles irrespective of vegetation types, with increased conductivity below 3 m soil depth. ERI could be used as an effective approach to quantify spatial-temporal variation of vadose zone moisture, and time-lapse ERI images can be used to assess deep drainage potential associated with different land cover types in water-limited ecosystems.

  17. Quantifying the role of immobile water on pollutant fluxes in double-permeable media under dynamic flow conditions

    Science.gov (United States)

    Knorr, Bastian; Krämer, Florian; Stumpp, Christine; Maloszewski, Piotr

    2014-05-01

    inhibited the back-diffusion from immobile water to mobile water zones. Mathematical models based on analytical and numerical models have to be further developed to describe and quantify these observed processes. A better understanding about the influence of immobile water and dynamic flow conditions on pollutant transport will help to improve prediction of pollutant fluxes and site remediation techniques and management.

  18. Comparative analysis of the quasi-similar structures on the dynamic spectra of the Sun and Jupiter

    Science.gov (United States)

    Litvinenko, G.; Konovalenko, A.; Zakharenko, V.; Vinogradov, V.; Dorovsky, V.; Melnik, V.; Brazhenko, A.; Shaposhnikov, V.; Rucker, H. O.; Zarka, Ph.

    2014-04-01

    In many published sources the planet Jupiter has been called a Sun that never fully developed. This view is partially supported by the experimental fact that features of quasi-similar shape appear in the dynamic spectra of both the solar and the Jovian radio emission. A comparative analysis of the similar properties in the emission spectra of Jupiter and the Sun, and the analogy between plasma processes in the solar corona and in the magnetosphere of Jupiter, may also allow similar features of the radiation mechanisms of these cosmic objects to be identified. One of the promising approaches to investigating features of the Jovian DAM emission and the decameter solar radiation is the application of novel experimental techniques, followed by a detailed analysis of the obtained data.

  19. Dynamic contrast-enhanced 3-T magnetic resonance imaging: a method for quantifying disease activity in early polyarthritis

    Energy Technology Data Exchange (ETDEWEB)

    Navalho, Marcio [Faculdade de Medicina da Universidade de Lisboa, Rheumatology Research Unit, Instituto de Medicina Molecular, Lisbon (Portugal); Hospital da Luz, Radiology Department, Lisbon (Portugal); Hospital da Luz, Centro de Imagiologia, Lisbon (Portugal); Resende, Catarina [Hospital da Luz, Rheumatology Department, Lisbon (Portugal); Hospital de Santa Maria, Rheumatology Department, Centro Hospitalar de Lisboa Norte, EPE, Lisbon (Portugal); Rodrigues, Ana Maria; Fonseca, Joao Eurico; Canhao, Helena [Faculdade de Medicina da Universidade de Lisboa, Rheumatology Research Unit, Instituto de Medicina Molecular, Lisbon (Portugal); Hospital de Santa Maria, Rheumatology Department, Centro Hospitalar de Lisboa Norte, EPE, Lisbon (Portugal); Gaspar, Augusto [Hospital da Luz, Radiology Department, Lisbon (Portugal); Campos, Jorge [Hospital de Santa Maria, Radiology Department, Centro Hospitalar de Lisboa Norte, EPE, Lisbon (Portugal)

    2012-01-15

    To determine whether measurement of synovial enhancement and thickness quantification parameters with 3.0-Tesla magnetic resonance imaging (3-T MRI) can reliably quantify disease activity in patients with early polyarthritis. Eighteen patients (16 women, 2 men; mean age 46 years) with early polyarthritis with less than 12 months of symptoms were included. MRI examination using a 3-T device was performed with a new approach that includes both wrists and hands simultaneously in the examination field-of-view. MRI scoring of disease activity included quantification of synovial enhancement with simple measurements such as rate of early enhancement (REE; REE_57 = S_57/S_200, where S_57 and S_200 are the signal intensities 57 s and 200 s after gadolinium injection) and rate of relative enhancement (RE; RE = S_200 - S_0). Both wrists and hands were scored according to the Rheumatoid Arthritis MRI Scoring System (RAMRIS) for synovitis. Disease activity was clinically assessed by the 28-joint Disease Activity Score (DAS28). The DAS28 score was strongly correlated with RE (r = 0.8331, p < 0.0001), REE (r = 0.8112, p < 0.0001), and RAMRIS score for synovitis (r = 0.7659, p < 0.0002). An REE score above 0.778 accurately identified patients with clinically active disease (sensitivity 92%; specificity 67%; p < 0.05). A statistically significant difference was observed in the RE, REE, and RAMRIS scores for synovitis between patients with active and inactive disease (p < 0.05). Our findings support the use of 3-T dynamic contrast-enhanced MRI for precise quantification of disease activity and for discriminating active disease from inactive disease in early polyarthritis.

  20. Quantifying Hyporheic Exchanges in a Large Scale River Reach Using Coupled 3-D Surface and Subsurface Computational Fluid Dynamics Simulations

    Science.gov (United States)

    Bao, J.; Zhou, T.; Huang, M.; Hou, Z.; Perkins, W. A.; Harding, S.; Hammond, G. E.; Ren, H.; Thorne, P. D.; Suffield, S. R.; Zachara, J. M.

    2016-12-01

    Hyporheic exchange between river water and groundwater is an important mechanism for biogeochemical processes, such as carbon and nitrogen cycling, and biodegradation of organic contaminants, in the subsurface interaction zone. The relationship between river flow conditions and hyporheic exchanges therefore is of great interest to hydrologists, biogeochemists, and ecologists. However, quantifying the relative influences of hydrostatic and hydrodynamic drivers on hyporheic exchanges is very challenging in large rivers due to the accessibility and spatial coverage of measurements, and the computational tools available for numerical experiments. In this study, we aim to demonstrate that a high resolution computational fluid dynamics (CFD) model that couples surface and subsurface flow and transport can be used to simulate hyporheic exchanges and the residence time of river water in the hyporheic zone. Based on the assumption that hyporheic exchange does not affect the surface water flow condition, due to its small magnitude compared to the velocity of river water, we developed a one-way coupled surface and subsurface water flow model in the commercial CFD software STAR-CCM+, which connects the Reynolds-averaged Navier-Stokes (RANS) equation solver with a realizable two-layer turbulence model, a two-layer all y+ wall treatment, and the volume of fluid (VOF) method for tracking the free water-air interface, as well as porous media flow in the subsurface domain. The model is applied to a 7-km long section of the Columbia River and validated against measurements from the acoustic Doppler current profiler (ADCP) in the surface water and hyporheic fluxes derived from a set of temperature profilers installed across the riverbed. The validated model is then employed to systematically investigate how hyporheic exchanges are influenced by 1) riverbed properties such as the permeability and thickness of the alluvial layer; 2) surface water hydrodynamics due to channel geomorphological settings

  1. Quantifying the spatio-temporal dynamics of woody plant encroachment using an integrative remote sensing, GIS, and spatial modeling approach

    Science.gov (United States)

    Buenemann, Michaela

    Despite a longstanding universal concern about and intensive research into woody plant encroachment (WPE)---the replacement of grasslands by shrub- and woodlands---our accumulated understanding of the process has either not been translated into sustainable rangeland management strategies or with only limited success. In order to increase our scientific insights into WPE, move us one step closer toward the sustainable management of rangelands affected by or vulnerable to the process, and identify needs for a future global research agenda, this dissertation presents an unprecedented critical, qualitative and quantitative assessment of the existing literature on the topic and evaluates the utility of an integrative remote sensing, GIS, and spatial modeling approach for quantifying the spatio-temporal dynamics of WPE. Findings from this research suggest that gaps in our current understanding of WPE and difficulties in devising sustainable rangeland management strategies are in part due to the complex spatio-temporal web of interactions between geoecological and anthropogenic variables involved in the process as well as limitations of presently available data and techniques. However, an in-depth analysis of the published literature also reveals that aforementioned problems are caused by two further crucial factors: the absence of information acquisition and reporting standards and the relative lack of long-term, large-scale, multi-disciplinary research efforts. The methodological framework proposed in this dissertation yields data that are easily standardized according to various criteria and facilitates the integration of spatially explicit data generated by a variety of studies. This framework may thus provide one common ground for scientists from a diversity of fields. Also, it has utility for both research and management. Specifically, this research demonstrates that the application of cutting-edge remote sensing techniques (Multiple Endmember Spectral Mixture

  2. Investigating Forest Harvest Effects on DOC Concentration and Quality: An In Situ, High Resolution Approach to Quantifying DOC Export Dynamics

    Science.gov (United States)

    Jollymore, A. J.; Johnson, M. S.; Hawthorne, I.

    2013-12-01

    the months following harvest. A major advantage of this study is the use of in situ measurements, allowing for high temporal resolution of DOC dynamics occurring within specific hydrologic events. For example, concentration-discharge relationships for both the pre- and post-logging periods demonstrate similar clockwise hysteresis during individual storm events, while the magnitude of change dramatically increased during the post-logging period. However, in situ measurements of SUVA over this period suggest that DOC quality may be less affected by forest harvest than overall DOC concentration, where high frequency data also allows for the observation of SUVA and spectral slope responses to specific hydrologic events during the pre- and post- harvest period.

  3. Chemical dynamics between wells across a time-dependent barrier: Self-similarity in the Lagrangian descriptor and reactive basins

    Science.gov (United States)

    Junginger, Andrej; Duvenbeck, Lennart; Feldmaier, Matthias; Main, Jörg; Wunner, Günter; Hernandez, Rigoberto

    2017-08-01

    In chemical or physical reaction dynamics, it is essential to distinguish precisely between reactants and products for all times. This task is especially demanding in time-dependent or driven systems because therein the dividing surface (DS) between these states often exhibits a nontrivial time-dependence. The so-called transition state (TS) trajectory has been seen to define a DS which is free of recrossings in a large number of one-dimensional reactions across time-dependent barriers and thus, allows one to determine exact reaction rates. A fundamental challenge to applying this method is the construction of the TS trajectory itself. The minimization of Lagrangian descriptors (LDs) provides a general and powerful scheme to obtain that trajectory even when perturbation theory fails. Both approaches encounter possible breakdowns when the overall potential is bounded, admitting the possibility of returns to the barrier long after the trajectories have reached the product or reactant wells. Such global dynamics cannot be captured by perturbation theory. Meanwhile, in the LD-DS approach, it leads to the emergence of additional local minima which make it difficult to extract the optimal branch associated with the desired TS trajectory. In this work, we illustrate this behavior for a time-dependent double-well potential revealing a self-similar structure of the LD, and we demonstrate how the reflections and side-minima can be addressed by an appropriate modification of the LD associated with the direct rate across the barrier.
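
    A Lagrangian descriptor of the kind minimised here is typically the arc length (or integrated speed) accumulated along trajectories launched forward and backward in time from a grid of initial conditions. The sketch below evaluates such a descriptor for a periodically driven double-well; the potential, driving, integration window, and descriptor variant are illustrative assumptions, not the system or definition used in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, state, eps=0.2, omega=1.0):
            """Driven double-well: V(x, t) = x**4/4 - x**2/2 + eps*x*sin(omega*t)."""
            x, v = state
            return [v, x - x**3 - eps * np.sin(omega * t)]

        def lagrangian_descriptor(x0, v0, tau=10.0):
            """Phase-space arc length accumulated forward plus backward in time."""
            ld = 0.0
            for sign in (+1.0, -1.0):
                t_eval = np.linspace(0.0, sign * tau, 400)
                sol = solve_ivp(rhs, (0.0, sign * tau), [x0, v0], t_eval=t_eval, max_step=0.05)
                pts = sol.y.T
                ld += np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
            return ld

        # Evaluate the descriptor on a small grid of initial conditions near the barrier;
        # local minima of this field are used to locate the transition-state trajectory.
        xs = np.linspace(-0.5, 0.5, 11)
        vs = np.linspace(-0.5, 0.5, 11)
        field = np.array([[lagrangian_descriptor(x, v) for x in xs] for v in vs])
        i, j = np.unravel_index(np.argmin(field), field.shape)
        print(f"LD minimum near x0 = {xs[j]:.2f}, v0 = {vs[i]:.2f}")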

  4. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks

    DEFF Research Database (Denmark)

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie

    2015-01-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used … psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort.

  5. Similar Structural Dynamics for the Degradation of CH3 NH3 PbI3 in Air and in Vacuum.

    Science.gov (United States)

    Alberti, Alessandra; Deretzis, Ioannis; Pellegrino, Giovanna; Bongiorno, Corrado; Smecca, Emanuele; Mannino, Giovanni; Giannazzo, Filippo; Condorelli, Guglielmo Guido; Sakai, Nobuya; Miyasaka, Tsutomu; Spinella, Corrado; La Magna, Antonino

    2015-10-05

    We investigate the degradation path of MAPbI3 (MA=methylammonium) films over flat TiO2 substrates at room temperature by means of X-ray diffraction, spectroscopic ellipsometry, X-ray photoelectron spectroscopy, and high-resolution transmission electron microscopy. The degradation dynamics is found to be similar in air and under vacuum conditions, which leads to the conclusion that the occurrence of intrinsic thermodynamic mechanisms is not necessarily linked to humidity. The process has an early stage, which drives the starting tetragonal lattice in the direction of a cubic atomic arrangement. This early stage is followed by a phase change towards PbI2 . We describe how this degradation product is structurally coupled with the original MAPbI3 lattice through the orientation of its constituent PbI6 octahedra. Our results suggest a slight octahedral rearrangement after volatilization of HI+CH3 NH2 or MAI, with a relatively low energy cost. Our experiments also clarify why reducing the interfaces and internal defects in the perovskite lattice enhances the stability of the material.

  6. Similarity measure and topology evolution of foreign exchange markets using dynamic time warping method: Evidence from minimal spanning tree

    Science.gov (United States)

    Wang, Gang-Jin; Xie, Chi; Han, Feng; Sun, Bo

    2012-08-01

    In this study, we employ a dynamic time warping method to study the topology of similarity networks among 35 major currencies in international foreign exchange (FX) markets, measured by the minimal spanning tree (MST) approach, which is expected to overcome the synchronous restriction of the Pearson correlation coefficient. In the empirical process, firstly, we subdivide the analysis period from June 2005 to May 2011 into three sub-periods: before, during, and after the US sub-prime crisis. Secondly, we choose NZD (New Zealand dollar) as the numeraire and then, analyze the topology evolution of FX markets in terms of the structure changes of MSTs during the above periods. We also present the hierarchical tree associated with the MST to study the currency clusters in each sub-period. Our results confirm that USD and EUR are the predominant world currencies. But USD gradually loses the most central position while EUR acts as a stable center in the MST passing through the crisis. Furthermore, an interesting finding is that, after the crisis, SGD (Singapore dollar) becomes a new center currency for the network.
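
    To make the pipeline concrete, the sketch below computes a dynamic-time-warping distance between return series with a small textbook dynamic-programming implementation, builds a pairwise distance matrix for a handful of series, and extracts its minimal spanning tree with SciPy. The random series and the plain DTW variant are illustrative stand-ins for the currency data and the exact distance definition used in the paper.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        def dtw_distance(a, b):
            """Classic O(len(a)*len(b)) dynamic time warping distance."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        rng = np.random.default_rng(7)
        series = rng.standard_normal((6, 120))          # 6 placeholder "currency return" series

        dist = np.zeros((6, 6))
        for i in range(6):
            for j in range(i + 1, 6):
                dist[i, j] = dist[j, i] = dtw_distance(series[i], series[j])

        mst = minimum_spanning_tree(dist)               # sparse matrix holding the 5 MST edges
        print(np.argwhere(mst.toarray() > 0))           # pairs of series joined in the tree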

  7. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging

    Directory of Open Access Journals (Sweden)

    Vivian M. Hsu, MD

    2014-09-01

    Conclusions: This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  8. A Frequency Domain EM Algorithm to Detect Similar Dynamics in Time Series with Applications to Spike Sorting and Macro-Economics

    CERN Document Server

    Goerg, Georg M

    2011-01-01

    In this work I propose a frequency domain adaptation of the Expectation Maximization (EM) algorithm to separate a family of sequential observations into classes of similar dynamic structure, which can either mean non-stationary signals of similar shape, or stationary signals with similar auto-covariance function. It does this by viewing the magnitude of the discrete Fourier transform (DFT) of the signals (or power spectrum) as a probability density/mass function (pdf/pmf) on the unit circle: signals with similar dynamics have similar pdfs; distinct patterns have distinct pdfs. An advantage of this approach is that it does not rely on any parametric form of the dynamic structure, but can be used for non-parametric, robust and model-free classification. Applications to neural spike sorting (non-stationary) and pattern-recognition in socio-economic time series (stationary) demonstrate the usefulness and wide applicability of the proposed method.
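
    The core representational step, treating the normalised power spectrum of each signal as a pmf over DFT frequencies, is easy to state in code. The short sketch below builds such pmfs for two signals and compares them with a symmetrised Kullback-Leibler divergence; the signals and the divergence are placeholders for the mixture components and EM machinery of the actual algorithm.

        import numpy as np

        def spectral_pmf(x):
            """Normalised power spectrum of a signal, viewed as a pmf over DFT frequencies."""
            power = np.abs(np.fft.rfft(x - x.mean())) ** 2
            return power / power.sum()

        def sym_kl(p, q, eps=1e-12):
            p, q = p + eps, q + eps
            return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

        t = np.arange(512)
        slow = np.sin(2 * np.pi * 0.01 * t) + 0.3 * np.random.default_rng(0).standard_normal(512)
        fast = np.sin(2 * np.pi * 0.10 * t) + 0.3 * np.random.default_rng(1).standard_normal(512)

        print(f"divergence between spectral pmfs: {sym_kl(spectral_pmf(slow), spectral_pmf(fast)):.2f}")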

  9. Correlation between the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules and tumor size

    Directory of Open Access Journals (Sweden)

    Chenshi ZHANG

    2008-02-01

    Full Text Available Background and Objective The solitary pulmonary nodule (SPN) is one of the most common findings on chest radiographs. Multi-slice spiral computed tomography (MSCT) makes it possible to provide more accurate quantitative information about the blood flow patterns of SPNs. The aim of this study was to evaluate the correlation between the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules and tumor size. Methods 68 patients with malignant SPNs (diameter ≤4 cm) underwent multi-location dynamic contrast material-enhanced serial CT (nonionic contrast material was administered via the antecubital vein at a rate of 4 mL/s by an autoinjector; 4×5 mm or 4×2.5 mm scanning mode with a stationary table). Precontrast and postcontrast attenuation on every scan was recorded. Perfusion (PSPN), peak height (PHSPN), ratio of peak height of the SPN to that of the aorta (SPN-to-A ratio), and mean transit time (MTT) were calculated. The correlation between the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules and tumor size was assessed by means of linear regression analysis. Results No significant correlations were found between tumor size and peak height (PHSPN), ratio of peak height of the SPN to that of the aorta (SPN-to-A ratio), perfusion (PSPN), or mean transit time (r = 0.18, P = 0.14; r = 0.20, P = 0.09; r = 0.01, P = 0.95; r = 0.01, P = 0.93). Conclusion No significant correlation was found between tumor size and the quantifiable parameters of blood flow pattern derived with dynamic CT in malignant solitary pulmonary nodules.

  10. Evaluating sugarcane families by the method of Dynamic Technique for Order Preference by Similarity to Ideal Solution (DTOPSIS)

    Directory of Open Access Journals (Sweden)

    Peifang Zhao

    2014-09-01

    Full Text Available Enlarging the quantity of seedlings of elite families and discarding inferior sugarcane (Saccharum spp.) families could improve sugarcane breeding and selection efficiency. The feasibility of using the Dynamic Technique for Order Preference by Similarity to Ideal Solution (DTOPSIS) method to identify superior sugarcane families was explored. Data on 5 traits: Brix, millable stalks per stool (MS), stalk diameter (SD), plant height (PH), and percent pith were collected from two family trials comprising 17 families and two check cultivars at two sites, including plant-cane and first-ratoon crops. The remaining seedlings were planted in the field for routine selection in the regular program. The DTOPSIS method calculates a comprehensive index (Ci) that expresses the closeness of a solution to the ideal solution and was used in this study to measure the distance of each family from the ideal family. The Ci of the families was compared with the family selection rate in the regular program by determining the selection rate at Stage 1 to Stage 4 for each family. The results indicated that the Ci values calculated from the family trials were significantly (p<0.01) correlated with the selection rates at Stage 2 (r=0.8059), Stage 3 (r=0.7967), and Stage 4 (r=0.8202), indicating that promising clones were selected from families with higher Ci values in the family trial. Thus, it could be feasible to use DTOPSIS to identify elite sugarcane families and to eliminate inferior families, thereby increasing variety selection efficiency.
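
    The comprehensive index Ci follows the standard TOPSIS construction: normalize and weight the trait matrix, locate the ideal and negative-ideal solutions, and score each family by its relative closeness to the ideal. A minimal sketch of that calculation is given below; the trait weights and family data are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def topsis_ci(X, weights, benefit):
        """Relative closeness Ci of each alternative (row) to the ideal solution.
        X: alternatives x criteria matrix; benefit[j] is True if higher is better."""
        R = X / np.linalg.norm(X, axis=0)          # vector normalization per criterion
        V = R * weights                            # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus  = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        return d_minus / (d_plus + d_minus)

    # Illustrative family-by-trait data: Brix, millable stalks, stalk diameter, height, percent pith
    X = np.array([[21.0, 8.5, 2.6, 280, 5.0],
                  [19.5, 7.0, 2.4, 260, 12.0],
                  [22.1, 9.1, 2.7, 300, 3.5]])
    weights = np.array([0.3, 0.25, 0.15, 0.15, 0.15])
    benefit = np.array([True, True, True, True, False])  # percent pith: lower is better
    print(topsis_ci(X, weights, benefit))  # higher Ci = closer to the ideal family
    ```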

  11. Quantifying the potential of automated dynamic solar shading in office buildings through integrated simulations of energy and daylight

    DEFF Research Database (Denmark)

    Nielsen, Martin Vraa; Svendsen, Svend; Bjerregaard Jensen, Lotte

    2011-01-01

    The façade design is and should be considered a central issue in the design of energy-efficient buildings. That is why dynamic façade components are increasingly used to adapt to both internal and external impacts, and to cope with a reduction in energy consumption and an increase in occupant com... ...components by using integrated simulations that took energy demand, the indoor air quality, the amount of daylight available, and visual comfort into consideration. Three types of façades were investigated (without solar shading, with fixed solar shading, and with dynamic solar shading), and we simulated... ...solar shading, which emphasises the need for dynamic and integrated simulations early in the design process to facilitate informed design decisions about the façade.

  12. Locomotor forces on a swimming fish: three-dimensional vortex wake dynamics quantified using digital particle image velocimetry.

    Science.gov (United States)

    Drucker; Lauder

    1999-01-01

    Quantifying the locomotor forces experienced by swimming fishes represents a significant challenge because direct measurements of force applied to the aquatic medium are not feasible. However, using the technique of digital particle image velocimetry (DPIV), it is possible to quantify the effect of fish fins on water movement and hence to estimate momentum transfer from the animal to the fluid. We used DPIV to visualize water flow in the wake of the pectoral fins of bluegill sunfish (Lepomis macrochirus) swimming at speeds of 0.5-1.5 L s⁻¹, where L is total body length. Velocity fields quantified in three perpendicular planes in the wake of the fins allowed three-dimensional reconstruction of downstream vortex structures. At low swimming speed (0.5 L s⁻¹), vorticity is shed by each fin during the downstroke and stroke reversal to generate discrete, roughly symmetrical, vortex rings of near-uniform circulation with a central jet of high-velocity flow. At and above the maximum sustainable labriform swimming speed of 1.0 L s⁻¹, additional vorticity appears on the upstroke, indicating the production of linked pairs of rings by each fin. Fluid velocity measured in the vicinity of the fin indicates that substantial spanwise flow during the downstroke may occur as vortex rings are formed. The forces exerted by the fins on the water in three dimensions were calculated from vortex ring orientation and momentum. Mean wake-derived thrust (11.1 mN) and lift (3.2 mN) forces produced by both fins per stride at 0.5 L s⁻¹ were found to match closely empirically determined counter-forces of body drag and weight. Medially directed reaction forces were unexpectedly large, averaging 125 % of the thrust force for each fin. Such large inward forces and a deep body that isolates left- and right-side vortex rings are predicted to aid maneuverability. The observed force balance indicates that DPIV can be used to measure accurately large-scale vorticity in the wake of

  13. Quantifying the dynamics of flow within a permeable bed using time-resolved endoscopic particle imaging velocimetry (EPIV)

    Energy Technology Data Exchange (ETDEWEB)

    Blois, G. [University of Birmingham, School of Geography, Earth and Environmental Sciences, Birmingham (United Kingdom); University of Illinois, Department of Mechanical Science and Engineering, Urbana, IL (United States); Sambrook Smith, G.H.; Lead, J.R. [University of Birmingham, School of Geography, Earth and Environmental Sciences, Birmingham (United Kingdom); Best, J.L. [University of Illinois, Departments of Geology, Geography, Mechanical Science and Engineering, and Ven Te Chow Hydrosystems Laboratory, Urbana, IL (United States); Hardy, R.J. [Durham University, Department of Geography, Science Laboratories, Durham (United Kingdom)

    2012-07-15

    This paper presents results of an experimental study investigating the mean and temporal evolution of flow within the pore space of a packed bed overlain by a free-surface flow. Data were collected by an endoscopic PIV (EPIV) technique. EPIV allows the instantaneous velocity field within the pore space to be quantified at a high spatio-temporal resolution, thus permitting investigation of the structure of turbulent subsurface flow produced by a high Reynolds number freestream flow (Re_s in the range 9.8 × 10³-9.7 × 10⁴). Evolution of coherent flow structures within the pore space is shown to be driven by jet flow, with the interaction of this jet with the pore flow generating distinct coherent flow structures. The effects of freestream water depth, Reynolds and Froude numbers are investigated. (orig.)

  14. Small amplitude Dynamic AFM: quantifying interactions with different tip detection and excitation schemes in presence of additional resonances

    CERN Document Server

    Costa, Luca

    2014-01-01

    Quantifying the tip-sample interaction at the nanoscale in Amplitude Modulation mode AFM is challenging, especially when measuring in liquids. Here, we derive formulas for the tip-sample conservative and dissipative interactions and investigate the effect that spurious resonances have on the measured interaction. Both direct and acoustic excitation are considered. We also highlight the differences between measuring directly the tip position or the cantilever deflection. We show that, when probing the tip-sample forces, the acoustically excited cantilever behavior is insensitive to spurious resonances as long as the measured signal corresponds to the tip position, or if the excitation force is correctly taken into account. Since the effective excitation force may depend on the presence of such spurious resonances, we consider the cases where the frequency is kept constant during the measurement so that the proportionality between excitation signal and actual excitation force is kept constant. With the present ...

  15. Two porous luminescent metal-organic frameworks: quantifiable evaluation of dynamic and static luminescent sensing mechanisms towards Fe(3+).

    Science.gov (United States)

    Jin, Jun-Cheng; Pang, Ling-Yan; Yang, Guo-Ping; Hou, Lei; Wang, Yao-Yu

    2015-10-21

    Two novel porous luminescent metal-organic frameworks (MOFs, 1 and 2) have been constructed from 3,4-di(3,5-dicarboxyphenyl)phthalic acid using a hydrothermal method. Both MOFs can work as highly sensitive sensors for Fe(3+) through luminescence quenching. Analyses of the structures indicate a higher quenching efficiency for 2 because of the existence of active -COOH groups. Based on this consideration, the quenching mechanisms are studied; the processes are controlled by multiple mechanisms, in which the dynamic and static contributions of the MOFs are discussed. In addition, the corresponding dynamic and static quenching constants are calculated, achieving a quantitative evaluation of the quenching process. As expected, experimental data show that compound 2 possesses an overall quenching efficiency 6.9 times that of compound 1. Additionally, time-dependent intensity measurements, shifts of the excitation spectrum and the appearance of a new emission peak all give visual proof of the distinct mechanisms of the two MOFs.
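
    When dynamic and static quenching act together, their constants are commonly separated by fitting a combined Stern-Volmer relation, F0/F = (1 + KD[Q])(1 + KS[Q]). The sketch below is a generic illustration of such a fit, not the procedure of this paper; the data are invented, and assigning which fitted constant is dynamic and which is static requires additional information such as lifetime measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stern_volmer(q, k_dynamic, k_static):
        """Combined dynamic + static quenching: F0/F = (1 + K_D*[Q]) * (1 + K_S*[Q])."""
        return (1.0 + k_dynamic * q) * (1.0 + k_static * q)

    # Illustrative quencher concentrations (M) and measured F0/F ratios
    q = np.array([0.0, 0.1e-3, 0.2e-3, 0.4e-3, 0.8e-3])
    f0_over_f = np.array([1.00, 1.35, 1.74, 2.62, 4.90])

    (k_d, k_s), _ = curve_fit(stern_volmer, q, f0_over_f, p0=[1e3, 1e3])
    print(f"K_D ~ {k_d:.3g} 1/M, K_S ~ {k_s:.3g} 1/M")
    ```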

  16. Development and application of methods to quantify spatial and temporal hyperpolarized 3He MRI ventilation dynamics: preliminary results in chronic obstructive pulmonary disease

    Science.gov (United States)

    Kirby, Miranda; Wheatley, Andrew; McCormack, David G.; Parraga, Grace

    2010-03-01

    Hyperpolarized helium-3 (3He) magnetic resonance imaging (MRI) has emerged as a non-invasive research method for quantifying lung structural and functional changes, enabling direct visualization in vivo at high spatial and temporal resolution. Here we describe the development of methods for quantifying ventilation dynamics in response to salbutamol in Chronic Obstructive Pulmonary Disease (COPD). A whole-body 3.0 Tesla Excite 12.0 MRI system was used to obtain multi-slice coronal images acquired immediately after subjects inhaled hyperpolarized 3He gas. Ventilated volume (VV), ventilation defect volume (VDV) and thoracic cavity volume (TCV) were recorded following segmentation of 3He and 1H images respectively, and used to calculate percent ventilated volume (PVV) and ventilation defect percent (VDP). Manual segmentation and Otsu thresholding were significantly correlated for VV (r=.82, p=.001), VDV (r=.87, p=.0002), PVV (r=.85, p=.0005), and VDP (r=.85, p=.0005). The level of agreement between these segmentation methods was also evaluated using Bland-Altman analysis and this showed that manual segmentation was consistently higher for VV (Mean=.22 L, SD=.05) and consistently lower for VDV (Mean=-.13, SD=.05) measurements than Otsu thresholding. To automate the quantification of newly ventilated pixels (NVp) post-bronchodilator, we used translation, rotation, and scaling transformations to register pre- and post-salbutamol images. There was a significant correlation between NVp and VDV (r=-.94, p=.005) and between percent newly ventilated pixels (PNVp) and VDP (r=-.89, p=.02), but not for VV or PVV. Evaluation of 3He MRI ventilation dynamics using Otsu thresholding and landmark-based image registration provides a way to regionally quantify functional changes in COPD subjects after treatment with beta-agonist bronchodilators, a common COPD and asthma therapy.
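
    Once the 3He and 1H images are segmented, the ventilation metrics reduce to simple volume ratios. A minimal sketch of the Otsu-based variant follows; the array names, mask, and voxel volume are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    def ventilation_metrics(he3_img, thoracic_mask, voxel_ml):
        """Percent ventilated volume (PVV) and ventilation defect percent (VDP)
        from a hyperpolarized 3He image and a 1H-derived thoracic cavity mask."""
        ventilated = (he3_img > threshold_otsu(he3_img[thoracic_mask])) & thoracic_mask
        vv  = ventilated.sum() * voxel_ml / 1000.0      # ventilated volume, L
        tcv = thoracic_mask.sum() * voxel_ml / 1000.0   # thoracic cavity volume, L
        vdv = tcv - vv                                  # ventilation defect volume, L
        return {"VV": vv, "VDV": vdv, "PVV": 100 * vv / tcv, "VDP": 100 * vdv / tcv}

    # Illustrative synthetic data
    rng = np.random.default_rng(1)
    he3 = rng.random((32, 64, 64))
    mask = np.zeros_like(he3, dtype=bool)
    mask[:, 16:48, 16:48] = True
    print(ventilation_metrics(he3, mask, voxel_ml=3.0))
    ```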

  17. Quantifying variation in forest disturbance, and its effects on aboveground biomass dynamics, across the eastern United States.

    Science.gov (United States)

    Vanderwel, Mark C; Coomes, David A; Purves, Drew W

    2013-05-01

    The role of tree mortality in the global carbon balance is complicated by strong spatial and temporal heterogeneity that arises from the stochastic nature of carbon loss through disturbance. Characterizing spatio-temporal variation in mortality (including disturbance) and its effects on forest and carbon dynamics is thus essential to understanding the current global forest carbon sink, and to predicting how it will change in future. We analyzed forest inventory data from the eastern United States to estimate plot-level variation in mortality (relative to a long-term background rate for individual trees) for nine distinct forest regions. Disturbances that produced at least a fourfold increase in tree mortality over an approximately 5 year interval were observed in 1-5% of plots in each forest region. The frequency of disturbance was lowest in the northeast, and increased southwards along the Atlantic and Gulf coasts as fire and hurricane disturbances became progressively more common. Across the central and northern parts of the region, natural disturbances appeared to reflect a diffuse combination of wind, insects, disease, and ice storms. By linking estimated covariation in tree growth and mortality over time with a data-constrained forest dynamics model, we simulated the implications of stochastic variation in mortality for long-term aboveground biomass changes across the eastern United States. A geographic gradient in disturbance frequency induced notable differences in biomass dynamics between the least- and most-disturbed regions, with variation in mortality causing the latter to undergo considerably stronger fluctuations in aboveground stand biomass over time. Moreover, regional simulations showed that a given long-term increase in mean mortality rates would support greater aboveground biomass when expressed through disturbance effects compared with background mortality, particularly for early-successional species. The effects of increased tree mortality on

  18. Annual dynamics of daylight variability and contrast a simulation-based approach to quantifying visual effects in architecture

    CERN Document Server

    Rockcastle, Siobhan

    2013-01-01

    Daylight is a dynamic source of illumination in architectural space, creating diverse and ephemeral configurations of light and shadow within the built environment. Perceptual qualities of daylight, such as contrast and temporal variability, are essential to our understanding of both material and visual effects in architecture. Although spatial contrast and light variability are fundamental to the visual experience of architecture, architects still rely primarily on intuition to evaluate their designs because there are few metrics that address these factors. Through an analysis of contemporary

  19. Quantifying Concordance

    CERN Document Server

    Seehars, Sebastian; Amara, Adam; Refregier, Alexandre

    2015-01-01

    Quantifying the concordance between different cosmological experiments is important for testing the validity of theoretical models and systematics in the observations. In earlier work, we thus proposed the Surprise, a concordance measure derived from the relative entropy between posterior distributions. We revisit the properties of the Surprise and describe how it provides a general, versatile, and robust measure for the agreement between datasets. We also compare it to other measures of concordance that have been proposed for cosmology. As an application, we extend our earlier analysis and use the Surprise to quantify the agreement between WMAP 9, Planck 13 and Planck 15 constraints on the ΛCDM model. Using a principal component analysis in parameter space, we find that the large Surprise between WMAP 9 and Planck 13 (S = 17.6 bits, implying a deviation from consistency at 99.8% confidence) is due to a shift along a direction that is dominated by the amplitude of the power spectrum. The Surprise disa...
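
    The Surprise is built from the relative entropy (Kullback-Leibler divergence) between posterior distributions, which has a closed form for Gaussian approximations. The sketch below computes only that ingredient; the Gaussian approximation and the example numbers are assumptions, and the full Surprise statistic additionally references the expected relative entropy.

    ```python
    import numpy as np

    def kl_gaussians_bits(mu_p, cov_p, mu_q, cov_q):
        """Relative entropy D(P||Q) in bits between two multivariate Gaussians."""
        k = len(mu_p)
        cov_q_inv = np.linalg.inv(cov_q)
        diff = mu_q - mu_p
        nats = 0.5 * (np.trace(cov_q_inv @ cov_p)
                      + diff @ cov_q_inv @ diff
                      - k
                      + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))
        return nats / np.log(2.0)

    # Illustrative 2-parameter posteriors (e.g. an amplitude-like and a tilt-like parameter)
    mu1, cov1 = np.array([0.80, 0.96]), np.diag([0.02, 0.01]) ** 2
    mu2, cov2 = np.array([0.83, 0.965]), np.diag([0.015, 0.008]) ** 2
    print(f"D(P1||P2) = {kl_gaussians_bits(mu1, cov1, mu2, cov2):.2f} bits")
    ```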

  20. Quantifying linguistic coordination

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Tylén, Kristian

    Language has been defined as a social coordination device (Clark 1996) enabling innovative modalities of joint action. However, the exact coordinative dynamics over time and their effects are still insufficiently investigated and quantified. Relying on the data produced in a collective decision... We employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities...

  1. Quantifying transient 3D dynamical phenomena of single mRNA particles in live yeast cell measurements.

    Science.gov (United States)

    Calderon, Christopher P; Thompson, Michael A; Casolari, Jason M; Paffenroth, Randy C; Moerner, W E

    2013-12-12

    Single-particle tracking (SPT) has been extensively used to obtain information about diffusion and directed motion in a wide range of biological applications. Recently, new methods have appeared for obtaining precise (10s of nm) spatial information in three dimensions (3D) with high temporal resolution (measurements obtained every 4 ms), which promise to more accurately sense the true dynamical behavior in the natural 3D cellular environment. Despite the quantitative 3D tracking information, the range of mathematical methods for extracting information about the underlying system has been limited mostly to mean-squared displacement analysis and other techniques not accounting for complex 3D kinetic interactions. There is a great need for new analysis tools aiming to more fully extract the biological information content from in vivo SPT measurements. High-resolution SPT experimental data has enormous potential to objectively scrutinize various proposed mechanistic schemes arising from theoretical biophysics and cell biology. At the same time, methods for rigorously checking the statistical consistency of both model assumptions and estimated parameters against observed experimental data (i.e., goodness-of-fit tests) have not received great attention. We demonstrate methods enabling (1) estimation of the parameters of 3D stochastic differential equation (SDE) models of the underlying dynamics given only one trajectory; and (2) construction of hypothesis tests checking the consistency of the fitted model with the observed trajectory so that extracted parameters are not overinterpreted (the tools are applicable to linear or nonlinear SDEs calibrated from nonstationary time series data). The approach is demonstrated on high-resolution 3D trajectories of single ARG3 mRNA particles in yeast cells in order to show the power of the methods in detecting signatures of transient directed transport. The methods presented are generally relevant to a wide variety of 2D and 3D SPT
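
    For the simplest case of directed transport plus diffusion, dX = v dt + sqrt(2D) dW, the drift and diffusion parameters can be estimated directly from the increments of a single 3D trajectory. The sketch below shows this textbook special case, not the authors' full SDE estimation and hypothesis-testing framework; the synthetic trajectory is an assumption.

    ```python
    import numpy as np

    def fit_directed_diffusion(track, dt):
        """MLE of drift velocity v (per axis) and isotropic diffusion coefficient D
        for dX = v dt + sqrt(2D) dW, from one 3D trajectory (N x 3 array)."""
        steps = np.diff(track, axis=0)
        v = steps.mean(axis=0) / dt
        residuals = steps - v * dt
        D = (residuals ** 2).sum() / (2.0 * residuals.size * dt)
        return v, D

    # Synthetic trajectory: v = (0.5, 0, 0) um/s, D = 0.01 um^2/s, dt = 4 ms
    rng = np.random.default_rng(2)
    dt, n = 0.004, 2000
    true_v, true_D = np.array([0.5, 0.0, 0.0]), 0.01
    steps = true_v * dt + np.sqrt(2 * true_D * dt) * rng.standard_normal((n, 3))
    track = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])
    print(fit_directed_diffusion(track, dt))  # should approximately recover v and D
    ```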

  2. Pre-Constructed Dynamic Geometry Materials in the Classroom--How Do They Facilitate the Learning of "Similar Triangles"?

    Science.gov (United States)

    Poon, Kin Keung; Wong, Kwan Lam

    2017-01-01

    The use of dynamic geometry software (DGS) is becoming increasingly familiar among teachers, but letting students conduct inquiries using computers is still not a welcome idea. In addition to logistics and discipline concerns, many teachers believe that mathematics at the lower secondary level can be learned efficiently through practice alone.…

  3. 87Sr/86Sr as a quantitative geochemical proxy for 14C reservoir age in dynamic, brackish waters: assessing applicability and quantifying uncertainties.

    Science.gov (United States)

    Lougheed, Bryan; van der Lubbe, Jeroen; Davies, Gareth

    2016-04-01

    Accurate geochronologies are crucial for reconstructing the sensitivity of brackish and estuarine environments to rapidly changing past external impacts. A common geochronological method used for such studies is radiocarbon (14C) dating, but its application in brackish environments is severely limited by an inability to quantify spatiotemporal variations in 14C reservoir age, or R(t), due to dynamic interplay between river runoff and marine water. Additionally, old carbon effects and species-specific behavioural processes also influence 14C ages. Using the world's largest brackish water body (the estuarine Baltic Sea) as a test-bed, combined with a comprehensive approach that objectively excludes both old carbon and species-specific effects, we demonstrate that it is possible to use 87Sr/86Sr ratios to quantify R(t) in ubiquitous mollusc shell material, leading to almost one order of magnitude increase in Baltic Sea 14C geochronological precision over the current state-of-the-art. We propose that this novel proxy method can be developed for other brackish water bodies worldwide, thereby improving geochronological control in these climate sensitive, near-coastal environments.

  4. Similarity measures for protein ensembles

    DEFF Research Database (Denmark)

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformatio...

  5. Web Similarity

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    2015-01-01

    Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale fr
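
    A concrete instance of this idea is the normalized Google distance computed from aggregate page counts f(x), f(y) and f(x,y) together with the index size N. The sketch below follows the commonly cited Cilibrasi-Vitányi normalization; the counts and index size are illustrative assumptions.

    ```python
    import math

    def normalized_web_distance(fx, fy, fxy, n_pages):
        """Normalized web/Google distance from search-engine page counts:
        NWD(x, y) = (max(log fx, log fy) - log fxy) / (log N - min(log fx, log fy))."""
        lfx, lfy, lfxy, ln = math.log(fx), math.log(fy), math.log(fxy), math.log(n_pages)
        return (max(lfx, lfy) - lfxy) / (ln - min(lfx, lfy))

    # Illustrative counts: terms that co-occur often are "closer" (smaller distance)
    print(normalized_web_distance(fx=8.0e8, fy=6.0e8, fxy=3.0e8, n_pages=5.0e10))
    print(normalized_web_distance(fx=8.0e8, fy=6.0e8, fxy=1.0e5, n_pages=5.0e10))
    ```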

  6. Self-Similar Nonlinear Dynamical Solutions for One-Component Nonneutral Plasma in a Time-Dependent Linear Focusing Field

    Energy Technology Data Exchange (ETDEWEB)

    Hong Qin and Ronald C. Davidson

    2011-07-19

    In a linear trap confining a one-component nonneutral plasma, the external focusing force is a linear function of the configuration coordinates and/or the velocity coordinates. Linear traps include the classical Paul trap and the Penning trap, as well as the newly proposed rotating-radio-frequency traps and the Mobius accelerator. This paper describes a class of self-similar nonlinear solutions of nonneutral plasma in general time-dependent linear focusing devices, with self-consistent electrostatic field. This class of nonlinear solutions includes many known solutions as special cases.

  7. The growth and the fluid dynamics of protein crystals and soft organic tissues: models and simulations, similarities and differences.

    Science.gov (United States)

    Lappa, Marcello

    2003-09-21

    The fluid-dynamic environment within typical growth reactors as well as the interaction of such flow with the intrinsic kinetics of the growth process are investigated in the frame of the new fields of protein crystal and tissue engineering. The paper uses available data to introduce a set of novel growth models. The surface conditions are coupled to the exchange mass flux at the specimen/culture-medium interface and lead to the introduction of a group of differential equations for the nutrient concentration around the sample and for the evolution of the construct mass displacement. These models take into account the sensitivity of the construct/liquid interface to the level of supersaturation in the case of macromolecular crystal growth and to the "direct" effect of the fluid-dynamic shear stress in the case of biological tissue growth. They then are used to show how the proposed surface kinetic laws can predict (through sophisticated numerical simulations) many of the known characteristics of protein crystals and biological tissues produced using well-known and widely used reactors. This procedure provides validation of the models and associated numerical method and at the same time gives insights into the mechanisms of the phenomena. The onset of morphological instabilities is discussed and investigated in detail. The interplay between the increasing size of the sample and the structure of the convective field established inside the reactor is analysed. It is shown that this interaction is essential in determining the time evolution of the specimen shape. Analogies about growing macromolecular crystals and growing biological tissues are pointed out in terms of behaviours and cause-and-effect relationships. These aspects lead to a common source (in terms of original mathematical models, ideas and results) made available for the scientific community under the optimistic idea that the contacts established between the "two fields of engineering" will develop into an

  8. Quantifying the spatio-temporal pattern of the ground impact of space weather events using dynamical networks formed from the SuperMAG database of ground based magnetometer stations.

    Science.gov (United States)

    Dods, Joe; Chapman, Sandra; Gjerloev, Jesper

    2016-04-01

    Quantitative understanding of the full spatial-temporal pattern of space weather is important in order to estimate the ground impact. Geomagnetic indices such as AE track the peak of a geomagnetic storm or substorm, but cannot capture the full spatial-temporal pattern. Observations by the ~100 ground-based magnetometers in the northern hemisphere have the potential to capture the detailed evolution of a given space weather event. We present the first analysis of the full available set of ground-based magnetometer observations of substorms using dynamical networks. SuperMAG offers a database containing ground station magnetometer data at a cadence of 1 min from 100s of stations situated across the globe. We use this data to form dynamic networks which capture spatial dynamics on timescales from the fast reconfiguration seen in the aurora, to that of the substorm cycle. Windowed linear cross-correlation between pairs of magnetometer time series along with a threshold is used to determine which stations are correlated and hence connected in the network. Variations in ground conductivity and differences in the response functions of magnetometers at individual stations are overcome by normalizing to long-term averages of the cross-correlation. These results are tested against surrogate data in which phases have been randomised. The network is then a collection of connected points (ground stations); the structure of the network and its variation as a function of time quantify the detailed dynamical processes of the substorm. The network properties can be captured quantitatively in time-dependent dimensionless network parameters and we will discuss their behaviour for examples of 'typical' substorms and storms. The network parameters provide a detailed benchmark to compare data with models of substorm dynamics, and can provide new insights on the similarities and differences between substorms and how they correlate with external driving and the internal state of the
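
    The network construction step, windowed cross-correlation between station pairs normalized by a long-term baseline and thresholded into links, can be sketched as follows. The threshold value, window length and synthetic data are illustrative assumptions, not the settings used in the study.

    ```python
    import numpy as np

    def correlation_network(window, baseline, threshold=2.0):
        """Adjacency matrix linking station pairs whose zero-lag correlation in the
        current window exceeds `threshold` times their long-term baseline correlation."""
        n = window.shape[0]
        c_now = np.corrcoef(window)                  # stations x stations, this window
        adjacency = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(i + 1, n):
                norm = abs(c_now[i, j]) / max(abs(baseline[i, j]), 1e-6)
                adjacency[i, j] = adjacency[j, i] = norm > threshold
        return adjacency

    # Illustrative data: 10 "stations", 128 samples per window
    rng = np.random.default_rng(3)
    signals = rng.standard_normal((10, 128))
    signals[:4] += 2.0 * rng.standard_normal(128)    # a shared disturbance at 4 stations
    baseline = 0.1 * np.ones((10, 10))               # assumed long-term average |correlation|
    adj = correlation_network(signals, baseline)
    print("links per station:", adj.sum(axis=1))     # e.g. node degree as a network parameter
    ```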

  9. Dynamics Inside the Radio and X-ray Cluster Cavities of Cygnus A and Similar FRII Sources

    CERN Document Server

    Mathews, William G

    2012-01-01

    We describe approximate axisymmetric computations of the dynamical evolution of material inside radio lobes and X-ray cluster gas cavities in Fanaroff-Riley II sources such as Cygnus A. All energy is delivered by a jet to the lobe/cavity via a moving hotspot where jet energy dissipates in a reverse shock. Our calculations describe the evolution of hot plasma, cosmic rays (CRs) and toroidal magnetic fields flowing from the hotspot into the cavity. Many observed features are explained. Gas, CRs and field flow back along the cavity surface in a "boundary backflow" consistent with detailed FRII observations. Computed ages of backflowing CRs are consistent with observed radio-synchrotron age variations only if shear instabilities in the boundary backflow are damped and we assume this is done with viscosity of unknown origin. Magnetic fields estimated from synchrotron self-Compton (SSC) X-radiation observed near the hotspot evolve into radio lobe fields. Computed profiles of radio synchrotron lobe emission perpendi...

  10. Dynamic spatiotemporal brain analyses of the visual checkerboard task: Similarities and differences between passive and active viewing conditions.

    Science.gov (United States)

    Cacioppo, Stephanie; Weiss, Robin M; Cacioppo, John T

    2016-10-01

    We introduce a new analytic technique for the microsegmentation of high-density EEG to identify the discrete brain microstates evoked by the visual reversal checkerboard task. To test the sensitivity of the present analytic approach to differences in evoked brain microstates across experimental conditions, subjects were instructed to (a) passively view the reversals of the checkerboard (passive viewing condition), or (b) actively search for a target stimulus that may appear at the fixation point, and they were offered a monetary reward if they correctly detected the stimulus (active viewing condition). Results revealed that, within the first 168 ms of a checkerboard presentation, the same four brain microstates were evoked in the passive and active viewing conditions, whereas the brain microstates evoked after 168 ms differed between these two conditions, with more brain microstates elicited in the active than in the passive viewing condition. Additionally, distinctions were found in the active condition between a change in a scalp configuration that reflects a change in microstate and a change in scalp configuration that reflects a change in the level of activation of the same microstate. Finally, the bootstrapping procedure identified that two microstates lacked robustness even though statistical significance thresholds were met, suggesting these microstates should be replicated prior to placing weight on their generalizability across individuals. These results illustrate the utility of the analytic approach and provide new information about the spatiotemporal dynamics of the brain states underlying passive and active viewing in the visual checkerboard task. © 2016 Society for Psychophysiological Research.

  11. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulations study of relevant biological molecules as serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out at the fluid lamellar phase of POPC at constant pressure and temperature conditions. Two guest molecules of each type were initially placed at the water phase. We have analyzed, the main localization, preferential orientation and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interphase. We found that the interactions that stabilized the systems are essentially hydrogen bonds, salt bridges and cation-π. None of the guest molecules have access to the hydrophobic region of the bilayer. Besides, zwitterionic molecules have access to the water phase, while protonated serotonin is anchored in the interphase. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood brain barrier by diffusion. These results are in good agreement with works that show that serotonin and Trp do not cross the BBB by simple diffusion.

  12. Dynamic Service Negotiation Model Based on Interval Similarity

    Institute of Scientific and Technical Information of China (English)

    冯秀珍; 武高峰

    2011-01-01

    To deal with asymmetric information, a dynamic environment, and the vagueness and uncertainty of QoS in service negotiation, this paper proposes a dynamic service negotiation model based on interval similarity. This model can be used to forecast the negotiation strategies of the counterpart by using interval similarity and interval estimation, and on this basis to make optimal counter-strategies. A numerical example shows that the dynamic service negotiation model is closer to real negotiation behavior than a static one, and that it can effectively improve negotiation efficiency in a dynamic service negotiation environment.

  13. DYNAMICS INSIDE THE RADIO AND X-RAY CLUSTER CAVITIES OF CYGNUS A AND SIMILAR FRII SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Mathews, William G.; Guo Fulai, E-mail: mathews@ucolick.org [University of California Observatories/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)

    2012-08-10

    We describe approximate axisymmetric computations of the dynamical evolution of material inside radio lobes and X-ray cluster gas cavities in Fanaroff-Riley II (FRII) sources such as Cygnus A. All energy is delivered by a jet to the lobe/cavity via a moving hotspot where jet energy dissipates in a reverse shock. Our calculations describe the evolution of hot plasma, cosmic rays (CRs), and toroidal magnetic fields flowing from the hotspot into the cavity. Many important observational features are explained. Gas, CRs, and field flow back along the cavity surface in a 'boundary backflow' consistent with detailed FRII observations. Computed ages of backflowing CRs are consistent with observed radio-synchrotron age variations only if shear instabilities in the boundary backflow are damped and we assume this is done with viscosity of unknown origin. We compute a faint thermal jet along the symmetry axis and suggest that it is responsible for redirecting the Cygnus A nonthermal jet. Magnetic fields estimated from synchrotron self-Compton (SSC) X-radiation observed near the hotspot evolve into radio lobe fields. Computed profiles of radio-synchrotron lobe emission perpendicular to the jet reveal dramatically limb-brightened emission in excellent agreement with FRII observation, although computed lobe fields exceed those observed. Strong winds flowing from hotspots naturally create kiloparsec-sized spatial offsets between hotspot nonthermal X-ray inverse Compton (IC-CMB) emission and radio-synchrotron emission that peaks 1-2 kpc ahead where the field increases due to wind compression. In our computed version of Cygnus A, nonthermal X-ray emission increases from the hotspot (some IC-CMB, mostly SSC) toward the offset radio-synchrotron peak (mostly SSC).

  14. Use of cluster-graphs from spoligotyping data to study genotype similarities and a comparison of three indices to quantify recent tuberculosis transmission among culture-positive cases in French Guiana during an eight-year period

    Directory of Open Access Journals (Sweden)

    Brudey Karine

    2008-04-01

    Full Text Available Abstract Background French Guiana has the highest tuberculosis (TB) burden among all French departments, with a strong increase in the TB incidence over the last few years. It is now uncertain how best to explain this incidence. The objective of this study was to compare three different methods evaluating the extent of recent TB transmission in French Guiana. Methods We conducted a population-based molecular epidemiology study of tuberculosis in French Guiana based on culture-positive TB strains (1996 to 2003, n = 344) to define molecular relatedness between isolates, i.e. potential transmission events. Phylogenetic relationships were inferred by comparing two methods: a "cluster-graph" method based on spoligotyping results, and a minimum spanning tree method based on both spoligotyping and variable number of tandem DNA repeats (VNTR). Furthermore, three indices attempting to reflect the extent of recent TB transmission (RTIn, RTIn-1 and TMI) were compared. Results Molecular analyses showed a total of 120 different spoligotyping patterns and 273 clinical isolates (79.4%) that were grouped in 49 clusters. The comparison of spoligotypes from French Guiana with an international spoligotype database (SpolDB4) showed that the majority of isolates belonged to major clades of M. tuberculosis (Haarlem, 22.6%; Latin American-Mediterranean, 23.3%; and T, 32.6%). Indices designed to quantify transmission of tuberculosis gave the following values: RTIn = 0.794, RTIn-1 = 0.651, and TMI = 0.146. Conclusion Our data showed a high number of Mycobacterium tuberculosis clusters, suggesting a high level of recent TB transmission, nonetheless an estimation of transmission rate taking into account cluster size and mutation rate of genetic markers showed a low ongoing transmission rate (14.6%). Our results indicate an endemic mode of TB transmission in French Guiana, with both resurgence of old spatially restricted genotypes, and a significant importation of new TB
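
    The two cluster-based indices have simple closed forms: RTIn is the proportion of isolates that fall in any cluster, and RTIn-1 subtracts one presumed index case per cluster. The sketch below reproduces the reported values from the cluster counts; TMI additionally requires cluster sizes and a marker mutation rate and is not reproduced here.

    ```python
    def recent_transmission_indices(n_isolates, n_clustered, n_clusters):
        """Cluster-based indices of recent TB transmission.
        RTIn   = clustered isolates / all isolates ("n" method)
        RTIn-1 = (clustered isolates - one index case per cluster) / all isolates"""
        rti_n = n_clustered / n_isolates
        rti_n_minus_1 = (n_clustered - n_clusters) / n_isolates
        return rti_n, rti_n_minus_1

    # Values reported in the study: 344 isolates, 273 clustered into 49 clusters
    rti_n, rti_n1 = recent_transmission_indices(344, 273, 49)
    print(f"RTIn = {rti_n:.3f}, RTIn-1 = {rti_n1:.3f}")  # 0.794 and 0.651
    ```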

  15. Terpenes increase the lipid dynamics in the Leishmania plasma membrane at concentrations similar to their IC50 values.

    Directory of Open Access Journals (Sweden)

    Heverton Silva Camargos

    Full Text Available Although many terpenes have shown antitumor, antibacterial, antifungal, and antiparasitic activity, the mechanism of action is not well established. Electron paramagnetic resonance (EPR) spectroscopy of the spin-labeled 5-doxyl stearic acid revealed remarkable fluidity increases in the plasma membrane of terpene-treated Leishmania amazonensis promastigotes. For an antiproliferative activity assay using 5×10(6) parasites/mL, the sesquiterpene nerolidol and the monoterpenes (+)-limonene, α-terpineol and 1,8-cineole inhibited the growth of the parasites with IC50 values of 0.008, 0.549, 0.678 and 4.697 mM, respectively. The IC50 values of these terpenes increased as the parasite concentration used in the cytotoxicity assay increased, and this behavior was examined using a theoretical treatment of the experimental data. Cytotoxicity tests with the same parasite concentration as in the EPR experiments revealed a correlation between the IC50 values of the terpenes and the concentrations at which they altered the membrane fluidity. In addition, the terpenes induced small amounts of cell lysis (4-9%) at their respective IC50 values. For assays with high cell concentrations (2×10(9) parasites/mL), the incorporation of terpene into the cell membrane was very fast, and the IC50 values observed for 24 h and 5 min-incubation periods were not significantly different. Taken together, these results suggest that terpene cytotoxicity is associated with the attack on the plasma membrane of the parasite. The in vitro cytotoxicity of nerolidol was similar to that of miltefosine, and nerolidol has high hydrophobicity; thus, nerolidol might be used in drug delivery systems, such as lipid nanoparticles to treat leishmaniasis.

  16. Terpenes increase the lipid dynamics in the Leishmania plasma membrane at concentrations similar to their IC50 values.

    Science.gov (United States)

    Camargos, Heverton Silva; Moreira, Rodrigo Alves; Mendanha, Sebastião Antonio; Fernandes, Kelly Souza; Dorta, Miriam Leandro; Alonso, Antonio

    2014-01-01

    Although many terpenes have shown antitumor, antibacterial, antifungal, and antiparasitic activity, the mechanism of action is not well established. Electron paramagnetic resonance (EPR) spectroscopy of the spin-labeled 5-doxyl stearic acid revealed remarkable fluidity increases in the plasma membrane of terpene-treated Leishmania amazonensis promastigotes. For an antiproliferative activity assay using 5×10(6) parasites/mL, the sesquiterpene nerolidol and the monoterpenes (+)-limonene, α-terpineol and 1,8-cineole inhibited the growth of the parasites with IC50 values of 0.008, 0.549, 0.678 and 4.697 mM, respectively. The IC50 values of these terpenes increased as the parasite concentration used in the cytotoxicity assay increased, and this behavior was examined using a theoretical treatment of the experimental data. Cytotoxicity tests with the same parasite concentration as in the EPR experiments revealed a correlation between the IC50 values of the terpenes and the concentrations at which they altered the membrane fluidity. In addition, the terpenes induced small amounts of cell lysis (4-9%) at their respective IC50 values. For assays with high cell concentrations (2×10(9) parasites/mL), the incorporation of terpene into the cell membrane was very fast, and the IC50 values observed for 24 h and 5 min-incubation periods were not significantly different. Taken together, these results suggest that terpene cytotoxicity is associated with the attack on the plasma membrane of the parasite. The in vitro cytotoxicity of nerolidol was similar to that of miltefosine, and nerolidol has high hydrophobicity; thus, nerolidol might be used in drug delivery systems, such as lipid nanoparticles to treat leishmaniasis.

  17. Actively heated high-resolution fiber-optic-distributed temperature sensing to quantify streambed flow dynamics in zones of strong groundwater upwelling

    Science.gov (United States)

    Briggs, Martin A.; Buckley, Sean F.; Bagtzoglou, Amvrossios C.; Werkema, Dale D.; Lane, John W.

    2016-07-01

    Zones of strong groundwater upwelling to streams enhance thermal stability and moderate thermal extremes, which is particularly important to aquatic ecosystems in a warming climate. Passive thermal tracer methods used to quantify vertical upwelling rates rely on downward conduction of surface temperature signals. However, moderate to high groundwater flux rates (>~1.5 m d⁻¹) restrict downward propagation of diurnal temperature signals, and therefore the applicability of several passive thermal methods. Active streambed heating from within high-resolution fiber-optic temperature sensors (A-HRTS) has the potential to define multidimensional fluid-flux patterns below the extinction depth of surface thermal signals, allowing better quantification and separation of local and regional groundwater discharge. To demonstrate this concept, nine A-HRTS were emplaced vertically into the streambed in a grid with ˜0.40 m lateral spacing at a stream with strong upward vertical flux in Mashpee, Massachusetts, USA. Long-term (8-9 h) heating events were performed to confirm the dominance of vertical flow to the 0.6 m depth, well below the extinction of ambient diurnal signals. To quantify vertical flux, short-term heating events (28 min) were performed at each A-HRTS, and heat-pulse decay over vertical profiles was numerically modeled in radial two dimensions (2-D) using SUTRA. Modeled flux values are similar to those obtained with seepage meters, Darcy methods, and analytical modeling of shallow diurnal signals. We also observed repeatable differential heating patterns along the length of vertically oriented sensors that may indicate sediment layering and hyporheic exchange superimposed on regional groundwater discharge.

  18. Quantifying the flow dynamics of supercritical CO2-water displacement in a 2D porous micromodel using fluorescent microscopy and microscopic PIV

    Science.gov (United States)

    Kazemifar, Farzan; Blois, Gianluca; Kyritsis, Dimitrios C.; Christensen, Kenneth T.

    2016-09-01

    The multi-phase flow of liquid/supercritical CO2 and water (non-wetting and wetting phases, respectively) in a two-dimensional silicon micromodel was investigated at reservoir conditions (80 bar, 24 °C and 40 °C). The fluorescent microscopy and microscopic particle image velocimetry (micro-PIV) techniques were combined to quantify the flow dynamics associated with displacement of water by CO2 (drainage) in the porous matrix. To this end, water was seeded with fluorescent tracer particles, CO2 was tagged with a fluorescent dye and each phase was imaged independently using spectral separation in conjunction with microscopic imaging. This approach allowed simultaneous measurement of the spatially-resolved instantaneous velocity field in the water and quantification of the spatial configuration of the two fluid phases. The results, acquired with sufficient time resolution to follow the dynamic progression of both phases, provide a comprehensive picture of the flow physics during the migration of the CO2 front, the temporal evolution of individual menisci, and the growth of fingers within the porous microstructure. During that growth process, velocity jumps 20-25 times larger in magnitude than the bulk velocity were measured in the water phase and these bursts of water flow occurred both in-line with and against the bulk flow direction. These unsteady velocity events support the notion of pressure bursts and Haines jumps during pore drainage events as previously reported in the literature [1-3]. After passage of the CO2 front, shear-induced flow was detected in the trapped water ganglia in the form of circulation zones near the CO2-water interfaces as well as in the thin water films wetting the surfaces of the silicon micromodel. To our knowledge, the results presented herein represent the first quantitative spatially and temporally resolved velocity-field measurements at high pressure for water displacement by liquid/supercritical CO2 injection in a porous micromodel.

  19. Oligosaccharides from the 3-linked 2-sulfated alpha-L-fucan and alpha-L-galactan show similar conformations but different dynamics.

    Science.gov (United States)

    Queiroz, Ismael N L; Vilela-Silva, Ana-Cristina E S; Pomin, Vitor H

    2016-11-01

    Here we have performed a nuclear magnetic resonance-based study on the ring and chain conformations as well as dynamics of oligosaccharides generated by acid hydrolysis on two structurally related glycans, a 3-linked 2-sulfated alpha-L-galactan and a 3-linked 2-sulfated alpha-L-fucan. Results derived from scalar couplings have confirmed the ¹C₄ chair configuration for both alpha-L-fucose and alpha-L-galactose, and a similar solution 3D structure for the oligosaccharide chains of both sulfated glycans as seen on the basis of NOE patterns. Measurements of spin-relaxation rates have suggested, however, a slightly different dynamical behavior for these glycans. The fucose-based oligosaccharides showed enhanced dynamics compared to the galactose-based oligosaccharides of the same anomericity, sugar configuration, glycosidic bond and sulfation type. This distinction, seen solely in the dynamical properties, is therefore driven by the different sugar composition of the two studied sulfated glycans. © The Author 2016. Published by Oxford University Press. All rights reserved.

  20. Similarity Scaling

    Science.gov (United States)

    Schnack, Dalton D.

    In Lecture 10, we introduced a non-dimensional parameter called the Lundquist number, denoted by S. This is just one of many non-dimensional parameters that can appear in the formulations of both hydrodynamics and MHD. These generally express the ratio of the time scale associated with some dissipative process to the time scale associated with either wave propagation or transport by flow. These are important because they define regions in parameter space that separate flows with different physical characteristics. All flows that have the same non-dimensional parameters behave in the same way. This property is called similarity scaling.
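
    As a concrete example, the Lundquist number compares the resistive diffusion time to the Alfvén transit time, S = mu0 L vA / eta with vA = B / sqrt(mu0 rho). The sketch below uses this standard definition; the plasma parameters are illustrative assumptions.

    ```python
    import math

    MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

    def lundquist_number(L, B, rho, eta):
        """S = mu0 * L * v_A / eta, the ratio of resistive diffusion time to Alfven time."""
        v_alfven = B / math.sqrt(MU0 * rho)
        return MU0 * L * v_alfven / eta

    # Illustrative parameters: L = 1 m, B = 1 T, deuterium at n ~ 1e20 m^-3, eta ~ 1e-8 Ohm m
    rho = 1e20 * 2 * 1.67e-27          # mass density, kg/m^3
    print(f"S ~ {lundquist_number(L=1.0, B=1.0, rho=rho, eta=1e-8):.2e}")
    ```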

  1. Segmentation Similarity and Agreement

    CERN Document Server

    Fournier, Chris

    2012-01-01

    We propose a new segmentation evaluation metric, called segmentation similarity (S), that quantifies the similarity between two segmentations as the proportion of boundaries that are not transformed when comparing them using edit distance, essentially using edit distance as a penalty function and scaling penalties by segmentation size. We propose several adapted inter-annotator agreement coefficients which use S that are suitable for segmentation. We show that S is configurable enough to suit a wide variety of segmentation evaluations, and is an improvement upon the state of the art. We also propose using inter-annotator agreement coefficients to evaluate automatic segmenters in terms of human performance.
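
    A stripped-down version of S counts the boundary positions on which two segmentations disagree and scales by the number of potential boundaries. The sketch below omits the near-miss transposition edits of the full boundary edit distance, so it is only an approximation of the proposed metric; the example segmentations are assumptions.

    ```python
    def boundaries(seg_lengths):
        """Set of boundary positions implied by segment lengths, e.g. [2, 3, 1] -> {2, 5}."""
        positions, total = set(), 0
        for length in seg_lengths[:-1]:
            total += length
            positions.add(total)
        return positions

    def segmentation_similarity(seg_a, seg_b):
        """Simplified S: 1 - (disagreeing boundaries) / (potential boundary positions).
        Omits the near-miss transposition edits of the full boundary edit distance."""
        n_units = sum(seg_a)
        assert n_units == sum(seg_b), "segmentations must cover the same units"
        a, b = boundaries(seg_a), boundaries(seg_b)
        edits = len(a ^ b)   # boundaries placed by one coder but not the other
        return 1.0 - edits / (n_units - 1)

    # Two codings of a 10-unit document into segments
    print(segmentation_similarity([3, 4, 3], [3, 5, 2]))   # one boundary moved
    print(segmentation_similarity([3, 4, 3], [3, 4, 3]))   # identical -> 1.0
    ```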

  2. Quantifying the Thermal Dynamic Characteristics of the Combustion System for Underground Coal Fire and its Impact on Environment in Xinjiang region, China

    Science.gov (United States)

    ZENG, Qiang; Tiyip, Tashpolat; Wuttke, Manfred; NIE, Jing; PU, Yan

    2015-04-01

    Underground coal fire (UCF) is one disaster associated with coal mining activities around the world. A UCF not only burns up the coal reserve, but also causes serious environmental problems, such as air pollution, damage to soils, contamination of surface and underground water, and consequently health problems for human beings. In the present paper, the authors attempt to quantify the thermal dynamic characteristics of the UCF combustion system and its environmental impact by modeling, including delineating the physical boundary of the UCF zone, modeling the capacity of the oxygen supply to the UCF, modeling the intensity of heat generation from the UCF, and modeling the process of heat transfer within the UCF and its surrounding environment. The results are as follows. First, based on rock control theory, a model was proposed to delineate the physical boundary of the UCF zone, which is important for coal fire research. Secondly, by analyzing the characteristics of air and smoke flow within the UCF zone, an air/smoke flow model was proposed and, from it, a method was put forward to calculate the capacity of the oxygen supply to the UCF. Thirdly, by analyzing the characteristics of coal combustion within the UCF zone, a method for calculating the intensity of heat generation from the UCF, i.e., the heat source models, was established. Heat transfer within the UCF zone includes heat conduction within the zone, heat dissipation by radiation from the surface of the fire zone, heat dissipation by convection, and heat loss carried away by mass transport. The authors also made an effort to describe the process of heat transfer by quantitative methods. Finally, the Shuixigou coal fire was given as an example to illustrate parts of the above models. Furthermore, the UCF's environmental impact, such as heavy metal contamination of the surface soil of the fire zone and the characteristics of gaseous pollutant emissions from the UCF, also was

  3. The dynamic process of atmospheric water sorption in [BMIM][Ac]: quantifying bulk versus surface sorption and utilizing atmospheric water as a structure probe.

    Science.gov (United States)

    Chen, Yu; Cao, Yuanyuan; Yan, Chuanyu; Zhang, Yuwei; Mu, Tiancheng

    2014-06-19

    The dynamic process of the atmospheric water absorbed in acetate-based ionic liquid 1-butyl-3-methyl-imidazolium acetate ([BMIM][Ac]) within 360 min could be described with three steps by using two-dimensional correlation infrared (IR) spectroscopy technique. In Step 1 (0-120 min), only bulk sorption via hydrogen bonding interaction occurs. In Step 2 (120-320 min), bulk and surface sorption takes place simultaneously via both hydrogen bonding interaction and van der Waals force. In Step 3, from 320 min to steady state, only surface sorption via van der Waals force occurs. Specifically, Step 2 could be divided into three substeps. Most bulk sorption with little surface sorption takes place in Step 2a (120-180 min), comparative bulk and surface sorption happens in Step 2b (180-260 min), and most surface sorption while little bulk sorption occurs in Step 2c (260-320 min). Interestingly, atmospheric water is found for the first time to be able to be used as a probe to detect the chemical structure of [BMIM][Ac]. Results show that one anion is surrounded by three C4,5H molecules and two anions are surrounded by five C2H molecules via hydrogen bonds, which are very susceptible to moisture, especially for the former. The remaining five anions form a multimer (equilibrating with one dimer and one trimer) via a strong hydrogen bonding interaction, which is not easily affected by the introduction of atmospheric water. The alkyl of the [BMIM][Ac] cation aggregates to some extent by van der Waals force, which is moderately susceptible to the water attack. Furthermore, the proportion of bulk sorption vs surface sorption is quantified as about 70% and 30% within 320 min, 63% and 37% within 360 min, and 11% and 89% until steady-state, respectively.

  4. Algorithms to automatically quantify the geometric similarity of anatomical surfaces

    CERN Document Server

    Boyer, D; Clair, E St; Puente, J; Funkhouser, T; Patel, B; Jernvall, J; Daubechies, I

    2011-01-01

    We describe new approaches for distances between pairs of 2-dimensional surfaces (embedded in 3-dimensional space) that use local structures and global information contained in inter-structure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This renders these studies inaccessible to non-morphologists, and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences our approach does not requir...

  5. Inequalities between similarities for numerical data

    NARCIS (Netherlands)

    Warrens, Matthijs J.

    2016-01-01

    Similarity measures are entities that can be used to quantify the similarity between two vectors with real numbers. We present inequalities between seven well known similarities. The inequalities are valid if the vectors contain non-negative real numbers.
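
    For two vectors of non-negative reals, several classical coefficients can be computed side by side, and relationships of the kind studied here (for example, a Jaccard-type value never exceeding the corresponding Dice-type value) can be checked numerically. The sketch below shows three common choices; the example vectors are assumptions.

    ```python
    import numpy as np

    def cosine(x, y):
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

    def ruzicka(x, y):
        """Jaccard coefficient generalized to non-negative reals (Ruzicka similarity)."""
        return float(np.minimum(x, y).sum() / np.maximum(x, y).sum())

    def czekanowski(x, y):
        """Dice/Sorensen coefficient generalized to non-negative reals."""
        return float(2 * np.minimum(x, y).sum() / (x.sum() + y.sum()))

    x = np.array([1.0, 0.0, 2.0, 3.0])
    y = np.array([1.0, 1.0, 2.0, 1.0])
    print(cosine(x, y), ruzicka(x, y), czekanowski(x, y))
    # For non-negative data the Ruzicka value never exceeds the Czekanowski value.
    ```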

  6. Quantifying the adaptive cycle

    Science.gov (United States)

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  7. The similarity centroid multilayer filtering dynamic summarization method

    Institute of Scientific and Technical Information of China (English)

    于洋; 范文义; 刘美玲; 王慧强

    2014-01-01

    To research methods for quickly and effectively obtaining information from the web, identifying and analyzing dynamically evolving network content has become a key issue that urgently needs to be resolved. Dynamic multi-document summarization is built on time information: starting from the dynamic behavior of network data, it analyzes collections of summaries on the same topic over different time periods and models the dynamic evolution of information on the basis of the identified differences in information content. On top of the proposed similarity accumulation model, a dynamic summarization model based on overall optimal centroid selection is further proposed. The strong association between the current document collection and the historical collection is analyzed; candidate summary sets are generated using the selected distinct summary sentences as first sentences, and the optimal summary is then chosen from them by the centroid multilayer filtering selection method. This modeling approach eliminates the impact of improper first-sentence selection on summarization performance. It was tested on the Update task corpus of the international benchmark Text Analysis Conference 2008 and achieved good experimental results.

  8. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  9. On Quantifying Semantic Information

    Directory of Open Access Journals (Sweden)

    Simon D’Alfonso

    2011-01-01

    Full Text Available The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measure truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measure truthlikeness/information is presented and some supplementary points are made.

  10. Quantifiers, Anaphora and Intensionality

    CERN Document Server

    Dalrymple, M; Pereira, F C N; Saraswat, V; Dalrymple, Mary; Lamping, John; Pereira, Fernando; Saraswat, Vijay

    1995-01-01

    The relationship between Lexical-Functional Grammar (LFG) functional structures (f-structures) for sentences and their semantic interpretations can be expressed directly in a fragment of linear logic in a way that correctly explains the constrained interactions between quantifier scope ambiguity, bound anaphora and intensionality. This deductive approach to semantic interpretation obviates the need for additional mechanisms, such as Cooper storage, to represent the possible scopes of a quantified NP, and explains the interactions between quantified NPs, anaphora and intensional verbs such as `seek'. A single specification in linear logic of the argument requirements of intensional verbs is sufficient to derive the correct reading predictions for intensional-verb clauses both with nonquantified and with quantified direct objects. In particular, both de dicto and de re readings are derived for quantified objects. The effects of type-raising or quantifying-in rules in other frameworks here just follow as li...

  11. Plant-mediated CH4 transport and C gas dynamics quantified in-situ in a Phalaris arundinacea-dominant wetland

    DEFF Research Database (Denmark)

    Jensen, Louise Askær; Elberling, Bo; Friborg, Thomas;

    2011-01-01

    Abstract Northern peatland methane (CH4) budgets are important for global CH4 emissions. This study aims to determine the ecosystem CH4 budget and specifically to quantify the importance of Phalaris arundinacea by using different chamber techniques in a temperate wetland. Annually, roughly 70...... passive. Thus, diurnal variations are less important in contrast to wetland vascular plants facilitating convective gas flow. Despite of plant-dominant CH4 transport, net CH4 fluxes were low (–0.005–0.016 µmol m-2 s-1) and annually less than 1% of the annual C-CO2 assimilation. This is considered a result...... of an effective root zone oxygenation resulting in increased CH4 oxidation in the rhizosphere at high water levels. This study shows that although CH4, having a global warming potential 25 times greater than CO2, is emitted from this P. arundinacea wetland, less than 9% of the C sequestered counterbalances the CH...

  12. From Recombination Dynamics to Device Performance: Quantifying the Efficiency of Exciton Dissociation, Charge Separation, and Extraction in Bulk Heterojunction Solar Cells with Fluorine-Substituted Polymer Donors

    KAUST Repository

    Gorenflot, Julien

    2017-09-28

    An original set of experimental and modeling tools is used to quantify the yield of each of the physical processes leading to photocurrent generation in organic bulk heterojunction solar cells, enabling evaluation of materials and processing conditions beyond the trivial comparison of device performances. Transient absorption spectroscopy, “the” technique to monitor all intermediate states over the entire relevant timescale, is combined with time-delayed collection field experiments, transfer matrix simulations, spectral deconvolution, and parametrization of the charge carrier recombination by a two-pool model, allowing quantification of densities of excitons and charges and extrapolation of their kinetics to device-relevant conditions. Photon absorption, charge transfer, charge separation, and charge extraction are all quantified for two recently developed wide-bandgap donor polymers: poly(4,8-bis((2-ethylhexyl)oxy)benzo[1,2-b:4,5-b′]dithiophene-3,4-difluorothiophene) (PBDT[2F]T) and its nonfluorinated counterpart poly(4,8-bis((2-ethylhexyl)oxy)benzo[1,2-b:4,5-b′]dithiophene-3,4-thiophene) (PBDT[2H]T) combined with PC71BM in bulk heterojunctions. The product of these yields is shown to agree well with the devices' external quantum efficiency. This methodology elucidates in the specific case studied here the origin of improved photocurrents obtained when using PBDT[2F]T instead of PBDT[2H]T as well as upon using solvent additives. Furthermore, a higher charge transfer (CT)-state energy is shown to lead to significantly lower energy losses (resulting in higher VOC) during charge generation compared to P3HT:PCBM.

  13. Decomposing generalized quantifiers

    NARCIS (Netherlands)

    Westerståhl, D.

    2008-01-01

    This note explains the circumstances under which a type <1> quantifier can be decomposed into a type <1, 1> quantifier and a set, by fixing the first argument of the former to the latter. The motivation comes from the semantics of Noun Phrases (also called Determiner Phrases) in natural languages, b

  14. Decomposing generalized quantifiers

    NARCIS (Netherlands)

    Westerståhl, D.

    2008-01-01

    This note explains the circumstances under which a type <1> quantifier can be decomposed into a type <1, 1> quantifier and a set, by fixing the first argument of the former to the latter. The motivation comes from the semantics of Noun Phrases (also called Determiner Phrases) in natural languages,

  15. Understanding quantifiers in language

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.; Taatgen, N.; van Rijn, H.

    2009-01-01

    We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies.

  16. An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters

    Directory of Open Access Journals (Sweden)

    Rempe MJ

    2015-09-01

    Full Text Available Michael J Rempe,1,2 William C Clegern,2 Jonathan P Wisor2 1Mathematics and Computer Science, Whitworth University, Spokane, WA, USA; 2College of Medical Sciences and Sleep and Performance Research Center, Washington State University, Spokane, WA, USA. Introduction: Rodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2–10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS), on the basis of visual inspection. Automated state scoring can minimize the burden associated with manual state scoring and thereby facilitate the use of shorter epoch durations. Methods: We developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human-scored sleep-state scoring of data from C57BL/6J and BALB/CJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration. Results: More than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of epochs scored as REMS by the human, more than 10% were scored as SWS by the machine and 18% (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated method or the manual scoring
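
    The record above combines principal component analysis with naïve Bayes classification to score sleep states from EEG/EMG features. A minimal sketch of that kind of pipeline, not the authors' published code, could look as follows; the feature layout and the label names are illustrative assumptions.

    ```python
    from sklearn.decomposition import PCA
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    def train_state_scorer(epoch_features, human_labels, n_components=4):
        """Fit a PCA + naive Bayes scorer on human-scored training epochs.

        epoch_features: array (n_epochs, n_features), e.g. EEG band powers plus EMG power per epoch
        human_labels:   array of strings such as "wake", "SWS", "REMS" (hypothetical label names)
        """
        scorer = make_pipeline(PCA(n_components=n_components), GaussianNB())
        scorer.fit(epoch_features, human_labels)
        return scorer

    # usage sketch: score the remaining, unlabelled epochs
    # predicted = train_state_scorer(train_X, train_y).predict(all_X)
    ```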

  17. Whole-atmosphere aerosol microphysics simulations of the Mt Pinatubo eruption: Part 2: Quantifying the direct and indirect (dynamical) radiative forcings

    Science.gov (United States)

    Mann, Graham; Dhomse, Sandip; Carslaw, Ken; Chipperfield, Martyn; Lee, Lindsay; Emmerson, Kathryn; Abraham, Luke; Telford, Paul; Pyle, John; Braesicke, Peter; Bellouin, Nicolas; Dalvi, Mohit; Johnson, Colin

    2016-04-01

    The Mt Pinatubo volcanic eruption in June 1991 injected between 10 and 20 Tg of sulphur dioxide into the tropical lower stratosphere. Following chemical conversion to sulphuric acid, the stratospheric aerosol layer thickened substantially causing a strong radiative, dynamical and chemical perturbation to the Earth's atmosphere with effects lasting several years. In this presentation we show results from model experiments to isolate the different ways the enhanced stratospheric aerosol from Pinatubo influenced the Earth's climate. The simulations are carried out in the UK Chemistry and Aerosol composition-climate model (UKCA) which extends the high-top (to 80km) version of the UK Met Office Unified Model (UM). The UM-UKCA model uses the GLOMAP-mode aerosol microphysics module coupled with a stratosphere-troposphere chemistry scheme including sulphur chemistry. By running no-feedback and standard integrations, we separate the main radiative forcings due to aerosol-radiation interactions (i.e. the direct forcings) from those induced by dynamical changes which alter meridional heat transport and distributions of aerosol, ozone and water vapour.

  18. Connected Car: Quantified Self becomes Quantified Car

    Directory of Open Access Journals (Sweden)

    Melanie Swan

    2015-02-01

    Full Text Available The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals, like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.

  19. A Probabilistic Analysis to Quantify the Effect of March 11, 2004, Attacks in Madrid on the March 14 Elections in Spain: A Dynamic Modelling Approach

    Directory of Open Access Journals (Sweden)

    Juan-Carlos Cortés

    2015-01-01

    Full Text Available The bomb attacks in Madrid three days before the general elections of March 14, 2004, and their possible influence on the victory of PSOE (Spanish Socialist Workers' Party), defeating PP (Popular Party), have been a matter of study from several points of view (i.e., sociological, political, or statistical). In this paper, we present a dynamic model based on a system of differential equations such that it, using data from Spanish CIS (National Center of Sociological Research), describes the evolution of voting intention of the Spanish people over time. Using this model, we conclude that the probability is very low that the PSOE would have won had the attack not happened. Moreover, after the attack, the PSOE increased an average of 5.6% in voting on March 14 and an average of 11.2% of the Spanish people changed their vote between March 11 and March 14. These figures are in accordance with other studies.

  20. Quantifying Potential Groundwater Recharge In South Texas

    Science.gov (United States)

    Basant, S.; Zhou, Y.; Leite, P. A.; Wilcox, B. P.

    2015-12-01

    Groundwater in South Texas is heavily relied on for human consumption and irrigation for food crops. Like most of the southwestern US, woody encroachment has altered the grassland ecosystems here too. While brush removal has been widely implemented in Texas with the objective of increasing groundwater recharge, the linkage between vegetation and groundwater recharge in South Texas is still unclear. Studies have been conducted to understand plant-root-water dynamics at the scale of plants. However, little work has been done to quantify the changes in soil water and deep percolation at the landscape scale. Modeling water flow through soil profiles can provide an estimate of the total water flowing into deep percolation. These models are especially powerful when parameterized and calibrated with long-term soil water data. In this study we parameterize the HYDRUS soil water model using long-term soil water data collected in Jim Wells County in South Texas. Soil water was measured at 20 cm intervals down to a depth of 200 cm. The parameterized model will be used to simulate soil water dynamics under a variety of precipitation regimes ranging from well above normal to severe drought conditions. The results from the model will be compared with the changes in soil moisture profile observed in response to vegetation cover and treatments from a study in a similar setting. Comparative studies like this can be used to build new and strengthen existing hypotheses regarding deep percolation and the role of soil texture and vegetation in groundwater recharge.

  1. Submersible UV-Vis spectroscopy for quantifying streamwater organic carbon dynamics: implementation and challenges before and after forest harvest in a headwater stream.

    Science.gov (United States)

    Jollymore, Ashlee; Johnson, Mark S; Hawthorne, Iain

    2012-01-01

    Organic material, including total and dissolved organic carbon (DOC), is ubiquitous within aquatic ecosystems, playing a variety of important and diverse biogeochemical and ecological roles. Determining how land-use changes affect DOC concentrations and bioavailability within aquatic ecosystems is an important means of evaluating the effects on ecological productivity and biogeochemical cycling. This paper presents a methodology case study looking at the deployment of a submersible UV-Vis absorbance spectrophotometer (UV-Vis spectro::lyzer model, s::can, Vienna, Austria) to determine stream organic carbon dynamics within a headwater catchment located near Campbell River (British Columbia, Canada). Field-based absorbance measurements of DOC were made before and after forest harvest, highlighting the advantages of high temporal resolution compared to traditional grab sampling and laboratory measurements. Details of remote deployment are described. High-frequency DOC data is explored by resampling the 30 min time series with a range of resampling time intervals (from daily to weekly time steps). DOC export was calculated for three months from the post-harvest data and resampled time series, showing that sampling frequency has a profound effect on total DOC export. DOC exports derived from weekly measurements were found to underestimate export by as much as 30% compared to DOC export calculated from high-frequency data. Additionally, the importance of the ability to remotely monitor the system through a recently deployed wireless connection is emphasized by examining causes of prior data losses, and how such losses may be prevented through the ability to react when environmental or power disturbances cause system interruption and data loss.

  2. Submersible UV-Vis Spectroscopy for Quantifying Streamwater Organic Carbon Dynamics: Implementation and Challenges before and after Forest Harvest in a Headwater Stream

    Directory of Open Access Journals (Sweden)

    Iain Hawthorne

    2012-03-01

    Full Text Available Organic material, including total and dissolved organic carbon (DOC), is ubiquitous within aquatic ecosystems, playing a variety of important and diverse biogeochemical and ecological roles. Determining how land-use changes affect DOC concentrations and bioavailability within aquatic ecosystems is an important means of evaluating the effects on ecological productivity and biogeochemical cycling. This paper presents a methodology case study looking at the deployment of a submersible UV-Vis absorbance spectrophotometer (UV-Vis spectro::lyzer model, s::can, Vienna, Austria) to determine stream organic carbon dynamics within a headwater catchment located near Campbell River (British Columbia, Canada). Field-based absorbance measurements of DOC were made before and after forest harvest, highlighting the advantages of high temporal resolution compared to traditional grab sampling and laboratory measurements. Details of remote deployment are described. High-frequency DOC data is explored by resampling the 30 min time series with a range of resampling time intervals (from daily to weekly time steps). DOC export was calculated for three months from the post-harvest data and resampled time series, showing that sampling frequency has a profound effect on total DOC export. DOC exports derived from weekly measurements were found to underestimate export by as much as 30% compared to DOC export calculated from high-frequency data. Additionally, the importance of the ability to remotely monitor the system through a recently deployed wireless connection is emphasized by examining causes of prior data losses, and how such losses may be prevented through the ability to react when environmental or power disturbances cause system interruption and data loss.

  3. Vitis labrusca extract effects on cellular dynamics and redox modulations in a SH-SY5Y neuronal cell model: a similar role to lithium.

    Science.gov (United States)

    Scola, Gustavo; Laliberte, Victoria Louise Marina; Kim, Helena Kyunghee; Pinguelo, Arsene; Salvador, Mirian; Young, L Trevor; Andreazza, Ana Cristina

    2014-12-01

    Oxidative stress and calcium imbalance are consistently reported in bipolar disorder (BD). Polymorphism of voltage-dependent calcium channel, L type, alpha 1C subunit (CACNA1c), which is responsible for the regulation of calcium influx, was also shown to have a strong association with BD. These alterations can lead to a number of different consequences in the cell including production of reactive species causing oxidative damage to proteins, lipids and DNA. Lithium is the most frequent medication used for the treatment of BD. Despite lithium's effects, long-term use can result in many negative side effects. Therefore, there is an urgent need for the development of drugs that may have similar biological effects as lithium without the negative consequences. Moreover, polyphenols are secondary metabolites of plants that present multi-faceted molecular abilities, such as regulation of cellular responses. Vitis labrusca extract (VLE), a complex mixture of polyphenols obtained from seeds of winery wastes of V. labrusca, was previously characterized by our group. This extract presented powerful antioxidant and neuroprotective properties. Therefore, the ability of VLE to ameliorate the consequences of hydrogen peroxide (H2O2)-induced redox alterations to cell viability, intracellular calcium levels and the relative levels of the calcium channel CACNA1c in comparison to lithium's effects were evaluated using a neuroblastoma cell model. H2O2 treatment increased cell mortality through apoptotic and necrotic pathways leading to an increase in intracellular calcium levels and alterations to relative CACNA1c levels. VLE and lithium were found to similarly ameliorate cell mortality through regulation of the apoptotic/necrotic pathways, decreasing intracellular calcium levels and preventing alterations to the relative levels of CACNA1c. The findings of this study suggest that VLE exhibits protective properties against oxidative stress-induced alterations similar to that of lithium

  4. Quantifying synergistic mutual information

    CERN Document Server

    Griffith, Virgil

    2012-01-01

    Quantifying cooperation among random variables in predicting a single target random variable is an important problem in many biological systems with 10s to 1000s of co-dependent variables. We review the prior literature of information theoretical measures of synergy and introduce a novel synergy measure, entitled *synergistic mutual information* and compare it against the three existing measures of cooperation. We apply all four measures against a suite of binary circuits to demonstrate our measure alone quantifies the intuitive concept of synergy across all examples.

  5. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain start to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  6. The evolutionary dynamics of autonomous non-LTR retrotransposons in the lizard Anolis carolinensis shows more similarity to fish than mammals.

    Science.gov (United States)

    Novick, Peter A; Basta, Holly; Floumanhaft, Mark; McClure, Marcella A; Boissinot, Stéphane

    2009-08-01

    The genome of the lizard Anolis carolinensis (the green anole) is the first nonavian reptilian genome sequenced. It offers a unique opportunity to comparatively examine the evolution of amniote genomes. We analyzed the abundance and diversity of non-LTR (long terminal repeat) retrotransposons in the anole using the Genome Parsing Suite. We found that the anole genome contains an extraordinary diversity of elements. We identified 46 families of elements representing five clades (L1, L2, CR1, RTE, and R4). Within most families, elements are very similar to each other suggesting that they have been inserted recently. The rarity of old elements suggests a high rate of turnover, the insertion of new elements being offset by the loss of element-containing loci. Consequently, non-LTR retrotransposons accumulate in the anole at a low rate and are found in low copy number. This pattern of diversity shows some striking similarity with the genome of teleostean fish but contrasts greatly with the low diversity and high copy number of mammalian L1 elements, suggesting a fundamental difference in the way mammals and nonmammalian vertebrates interact with their genomic parasites. The scarcity of divergent elements in anoles suggests that insertions have a deleterious effect and are eliminated by natural selection. We propose that the low abundance of non-LTR retrotransposons in the anole is related directly or indirectly to a higher rate of ectopic recombination in the anole relative to mammals.

  7. A new method for quantifying and modeling large scale surface water inundation dynamics and key drivers using multiple time series of Earth observation and river flow data. A case study for Australia's Murray-Darling Basin

    Science.gov (United States)

    Heimhuber, Valentin; Tulbure, Mirela G.; Broich, Mark

    2017-04-01

    Periodically inundated surface water (SW) areas such as floodplains are hotspots of biodiversity and provide a broad range of ecosystem services but have suffered alarming declines in recent history. Large scale flooding events govern the dynamics of these areas and are a critical component of the terrestrial water cycle, but their propagation through river systems and the corresponding long term SW dynamics remain poorly quantified on continental or global scales. In this research, we used an unprecedented Landsat-based time series of SW maps (1986-2011) to develop statistical inundation models and quantify the role of driver variables across the Murray-Darling Basin (MDB) (1 million square-km), which is Australia's bread basket and subject to competing demands over limited water resources. We fitted generalized additive models (GAM) between SW extent as the dependent variable and river flow data from 68 gauges, spatial time series of rainfall (P; interpolated gauge data), evapotranspiration (ET; AWRA-L land surface model) and soil moisture (SM; active passive microwave satellite remote sensing) as predictor variables. We used a fully directed and connected river network (Australian Geofabric) in combination with ancillary data, to develop a spatial modeling framework consisting of 18,521 individual modeling units. We then fitted individual models for all modeling units, which were made up of 10x10 km grid cells split into floodplain, floodplain-lake and non-floodplain areas, depending on the type of water body and its hydrologic connectivity to a gauged river. We applied the framework to quantify flood propagation times for all major river and floodplain systems across the MDB, which were in good accordance with observed travel times. After incorporating these flow lag times into the models, average goodness of fit was high across floodplains and floodplain-lake modeling units (r-squared > 0.65), which were primarily driven by river flow, and lower for non-floodplain areas.
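
    The study above derives flow lag times between gauged river flow and downstream surface-water extent before fitting the GAMs. A simple, generic way to estimate such a lag, not the authors' implementation, is a lagged correlation scan; the variable names and the 30-step search window below are illustrative assumptions.

    ```python
    import numpy as np

    def flood_lag(flow, sw_extent, max_lag=30):
        """Return the lag (in time steps) at which upstream river flow correlates
        best with downstream surface-water extent, plus that correlation."""
        best_lag, best_r = 0, -np.inf
        for lag in range(max_lag + 1):
            a = flow[:len(flow) - lag] if lag else flow
            b = sw_extent[lag:]
            r = np.corrcoef(a, b)[0, 1]
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag, best_r
    ```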

  8. A family of interaction-adjusted indices of community similarity

    Science.gov (United States)

    Schmidt, Thomas Sebastian Benedikt; Matias Rodrigues, João Frederico; von Mering, Christian

    2017-01-01

    Interactions between taxa are essential drivers of ecological community structure and dynamics, but they are not taken into account by traditional indices of β diversity. In this study, we propose a novel family of indices that quantify community similarity in the context of taxa interaction networks. Using publicly available datasets, we assessed the performance of two specific indices that are Taxa INteraction-Adjusted (TINA, based on taxa co-occurrence networks), and Phylogenetic INteraction-Adjusted (PINA, based on phylogenetic similarities). TINA and PINA outperformed traditional indices when partitioning human-associated microbial communities according to habitat, even for extremely downsampled datasets, and when organising ocean micro-eukaryotic plankton diversity according to geographical and physicochemical gradients. We argue that interaction-adjusted indices capture novel aspects of diversity outside the scope of traditional approaches, highlighting the biological significance of ecological association networks in the interpretation of community similarity. PMID:27935587

  9. Wave operators, similarity and dynamics for a class of Schrödinger operators with generic non-mixed interface conditions in 1D

    Energy Technology Data Exchange (ETDEWEB)

    Mantile, Andrea [Laboratoire de Mathématiques, Université de Reims - FR3399 CNRS, Moulin de la Housse BP 1039, 51687 Reims (France)

    2013-08-15

    We consider a simple modification of the 1D-Laplacian where non-mixed interface conditions occur at the boundaries of a finite interval. It has recently been shown that Schrödinger operators having this form allow a new approach to the transverse quantum transport through resonant heterostructures. In this perspective, it is important to control the deformations effects introduced on the spectrum and on the time propagator by this class of non-selfadjoint perturbations. In order to obtain uniform-in-time estimates of the perturbed semigroup, our strategy consists in constructing stationary wave operators allowing to intertwine the modified non-selfadjoint Schrödinger operator with a “physical” Hamiltonian. For small values of a deformation parameter “θ,” this yields a dynamical comparison between the two models showing that the distance between the corresponding semigroups is dominated by ‖θ‖ uniformly in time in the L²-operator norm.

  10. Analysis of Similarity of DNA Sequences Based on Dynamic Time Warping Distance (基于DTW距离的DNA序列相似性分析)

    Institute of Scientific and Technical Information of China (English)

    李梅; 白凤兰

    2009-01-01

    In research on DNA sequence similarity, the commonly used dynamic programming algorithm relies on a gap penalty function that lacks a theoretical basis and is therefore subjective, so different choices can lead to different results. This paper proposes a similarity measure for DNA sequences based on the DTW (Dynamic Time Warping) distance to address this problem. A DNA sequence is first converted into a time series through a graphical representation of the sequence; the DTW distance between such series is then computed to measure sequence similarity and characterize the properties of the DNA sequences, yielding a practical method for comparing DNA sequence similarity. The method is used to compare and analyze the similarity of seven Buthus martensii Karsch neurotoxin gene sequences, verifying its effectiveness and accuracy.
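
    For readers unfamiliar with the DTW distance used above, the following is a minimal textbook implementation of dynamic time warping between two numeric series; the paper's graphical mapping from DNA bases to a time series is not reproduced here and would be applied first.

    ```python
    import numpy as np

    def dtw_distance(s, t):
        """Classic O(len(s) * len(t)) dynamic time warping distance between
        two numeric sequences, with absolute difference as the local cost."""
        n, m = len(s), len(t)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(s[i - 1] - t[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # dtw_distance([1, 2, 3, 3], [1, 2, 2, 3]) == 0.0: repeated values are absorbed by warping
    ```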

  11. Assessing protein kinase target similarity

    DEFF Research Database (Denmark)

    Gani, Osman A; Thakkar, Balmukund; Narayanan, Dilip

    2015-01-01

    ... focussed chemical libraries, drug repurposing, polypharmacological design, to name a few. Protein kinase target similarity is easily quantified by sequence, and its relevance to ligand design includes broad classification by key binding sites, evaluation of resistance mutations, and the use of surrogate... of sequence and crystal structure information, with statistical methods able to identify key correlates to activity but also here, "the devil is in the details." Examples from specific repurposing and polypharmacology applications illustrate these points. This article is part of a Special Issue entitled...

  12. Community Detection by Neighborhood Similarity

    Institute of Scientific and Technical Information of China (English)

    LIU Xu; XIE Zheng; YI Dong-Yun

    2012-01-01

    Detection of the community structure in a network is important for understanding the structure and dynamics of the network. By exploring the neighborhood of vertices, a local similarity metric is proposed, which can be quickly computed. The resulting similarity matrix retains the same support as the adjacency matrix. Based on local similarity, an agglomerative hierarchical clustering algorithm is proposed for community detection. The algorithm is implemented by an efficient max-heap data structure and runs in nearly linear time, thus is capable of dealing with large sparse networks with tens of thousands of nodes. Experiments on synthesized and real-world networks demonstrate that our method is efficient to detect community structures, and the proposed metric is the most suitable one among all the tested similarity indices.

  13. Application Research of Similarity of Mechanics to Electricity Theory in Robotic Dynamics Analysis (“机电相似”在机器人动力学分析中的应用)

    Institute of Scientific and Technical Information of China (English)

    王丽; 周欣荣; 王金刚

    2001-01-01

    This paper is written to solve the problem of the analysis and simulation of robotic dynamics. Based on the similarity of mechanics to electricity theory, the robotic dynamic model is built as an electric network model, and the robotic system is converted into an electrical network system for study. A two-link manipulator is analyzed with the Runge-Kutta method. Simulation runs show that the calculation process is simplified considerably. The method presented in the paper is a new way of solving robotic dynamics problems with electrical circuit theory.

  14. Correlation of the quantifiable parameters of blood flow pattern derived with dynamic CT in solitary bronchogenic adenocarcinoma (孤立性肺腺癌血流模式定量CT参数相互关系)

    Institute of Scientific and Technical Information of China (English)

    Shenjiang Li; Xiangsheng Xiao; Shiyuan Liu; Huimin Li; Chengzhou Li; Chenshi Zhang

    2007-01-01

    Objective: To evaluate the correlation of the quantifiable parameters of blood flow pattern derived with dynamic CT in solitary bronchogenic adenocarcinoma (SBA). Methods: 46 patients with solitary bronchogenic adenocarcinomas (SBA) (diameter ≤ 4 cm) underwent multi-location dynamic contrast material-enhanced serial CT (90 mL of nonionic contrast material was administered via the antecubital vein at a rate of 4 mL/s using an autoinjector; a 4 × 5 mm or 4 × 2.5 mm scanning mode with a stationary table was used). Precontrast and postcontrast attenuation on every scan was recorded. Perfusion (PBA), peak height (PHBA), ratio of peak height of the SPN to that of the aorta (BA-to-A ratio) and mean transit time (MTT) were calculated. The correlation between peak height of the aorta (PHA) and parameters of the SBA (PHBA, BA-to-A ratio, PBA, and MTT) and those among parameters of the SBA were assessed by means of linear regression analysis. Regression equations among parameters of the SBA were obtained by means of stepwise regression. Results: The correlation between the SBA peak height (PHBA, 36.78 HU ± 12.02) and the aortic peak height (PHA) was significant (r = 0.506, P < 0.0001). No significant correlation was found between the BA-to-A peak height ratio (15.33% ± 4.55) and the aortic peak height (r = 0.130, P = 0.388 > 0.05), nor between the SBA perfusion (PBA, 31.86 mL/min/100 g ± 9.74) and the aortic peak height (r = 0.049, P = 0.749 > 0.05). The SBA perfusion correlated with the PHBA and the BA-to-A peak height ratio (r = 0.394, P = 0.007 < 0.05; r = 0.407, P = 0.005 < 0.05). The PHBA correlated positively with the BA-to-A peak height ratio (r = 0.781, P < 0.0001). Mean transit time was 14.84 s ± 5.52. PBA = 18.500 + 0.872 × BA-to-A ratio. BA-to-A ratio = 4.467 + 0.295 × PHBA. Conclusion: The linear correlation between the SBA perfusion and BA-to-A ratio and that between BA-to-A ratio and PHBA can be expressed by equations. It is possible to

  15. Quantifying economic fluctuations

    Science.gov (United States)

    Stanley, H. Eugene; Nunes Amaral, Luis A.; Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki

    2001-12-01

    This manuscript is a brief summary of a talk designed to address the question of whether two of the pillars of the field of phase transitions and critical phenomena-scale invariance and universality-can be useful in guiding research on interpreting empirical data on economic fluctuations. Using this conceptual framework as a guide, we empirically quantify the relation between trading activity-measured by the number of transactions N-and the price change G( t) for a given stock, over a time interval [ t, t+Δ t]. We relate the time-dependent standard deviation of price changes-volatility-to two microscopic quantities: the number of transactions N( t) in Δ t and the variance W2( t) of the price changes for all transactions in Δ t. We find that the long-ranged volatility correlations are largely due to those of N. We then argue that the tail-exponent of the distribution of N is insufficient to account for the tail-exponent of P{ G> x}. Since N and W display only weak inter-dependency, our results show that the fat tails of the distribution P{ G> x} arises from W. Finally, we review recent work on quantifying collective behavior among stocks by applying the conceptual framework of random matrix theory (RMT). RMT makes predictions for “universal” properties that do not depend on the interactions between the elements comprising the system, and deviations from RMT provide clues regarding system-specific properties. We compare the statistics of the cross-correlation matrix C-whose elements Cij are the correlation coefficients of price fluctuations of stock i and j-against a random matrix having the same symmetry properties. It is found that RMT methods can distinguish random and non-random parts of C. The non-random part of C which deviates from RMT results, provides information regarding genuine collective behavior among stocks. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behavior of the response function at
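
    As a generic illustration of the random-matrix comparison sketched above, not the authors' code, one can check which eigenvalues of the empirical cross-correlation matrix fall outside the Marchenko-Pastur band expected for purely random correlations; the matrix shape and variable names below are assumptions.

    ```python
    import numpy as np

    def rmt_outliers(returns):
        """returns: array of shape (T, N) with T return observations for N stocks (T > N).
        Eigenvalues of the correlation matrix outside [lam_min, lam_max] signal
        genuine collective behaviour rather than noise."""
        T, N = returns.shape
        z = (returns - returns.mean(axis=0)) / returns.std(axis=0)
        C = (z.T @ z) / T                        # empirical cross-correlation matrix
        eigvals = np.linalg.eigvalsh(C)
        q = N / T
        lam_min = (1 - np.sqrt(q)) ** 2          # Marchenko-Pastur band edges
        lam_max = (1 + np.sqrt(q)) ** 2
        outliers = eigvals[(eigvals < lam_min) | (eigvals > lam_max)]
        return outliers, (lam_min, lam_max)
    ```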

  16. Quantifying traffic exposure.

    Science.gov (United States)

    Pratt, Gregory C; Parson, Kris; Shinoda, Naomi; Lindgren, Paula; Dunlap, Sara; Yawn, Barbara; Wollan, Peter; Johnson, Jean

    2014-01-01

    Living near traffic adversely affects health outcomes. Traffic exposure metrics include distance to high-traffic roads, traffic volume on nearby roads, traffic within buffer distances, measured pollutant concentrations, land-use regression estimates of pollution concentrations, and others. We used Geographic Information System software to explore a new approach using traffic count data and a kernel density calculation to generate a traffic density surface with a resolution of 50 m. The density value in each cell reflects all the traffic on all the roads within the distance specified in the kernel density algorithm. The effect of a given roadway on the raster cell value depends on the amount of traffic on the road segment, its distance from the raster cell, and the form of the algorithm. We used a Gaussian algorithm in which traffic influence became insignificant beyond 300 m. This metric integrates the deleterious effects of traffic rather than focusing on one pollutant. The density surface can be used to impute exposure at any point, and it can be used to quantify integrated exposure along a global positioning system route. The traffic density calculation compares favorably with other metrics for assessing traffic exposure and can be used in a variety of applications.
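
    A stripped-down version of the kernel density idea described above might look like the sketch below. The real analysis was done in GIS software on road-network data; the point-wise discretisation of road segments, the 100 m bandwidth and the variable names here are assumptions chosen so that influence fades to insignificance near 300 m.

    ```python
    import numpy as np

    def traffic_density(grid_x, grid_y, seg_x, seg_y, seg_aadt, bandwidth=100.0, cutoff=300.0):
        """Gaussian kernel density of traffic on a raster grid.

        seg_x, seg_y, seg_aadt: coordinates (m) and traffic counts of road-segment points.
        Influence is truncated beyond `cutoff` metres."""
        xx, yy = np.meshgrid(grid_x, grid_y)
        density = np.zeros_like(xx, dtype=float)
        for x, y, w in zip(seg_x, seg_y, seg_aadt):
            d2 = (xx - x) ** 2 + (yy - y) ** 2
            kernel = w * np.exp(-d2 / (2.0 * bandwidth ** 2))
            kernel[d2 > cutoff ** 2] = 0.0       # ignore roads beyond the cutoff
            density += kernel
        return density

    # e.g. a 50 m raster: traffic_density(np.arange(0, 5000, 50), np.arange(0, 5000, 50), ...)
    ```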

  17. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

    Full Text Available Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight), and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  18. Uncertainty quantified trait predictions

    Science.gov (United States)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
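
    BHPMF itself adds a taxonomic hierarchy and Gibbs-sampled uncertainty estimates, but its core idea, filling gaps in a sparse trait matrix by low-rank factorization, can be sketched with plain probabilistic matrix factorization fitted by stochastic gradient descent. Everything below (rank, learning rate, regularisation) is an illustrative assumption, not the published model.

    ```python
    import numpy as np

    def pmf_impute(X, rank=3, lr=0.01, reg=0.1, epochs=200, seed=0):
        """Fill missing entries (NaN) of a species-by-trait matrix X with a
        simple probabilistic matrix factorization fitted by SGD."""
        rng = np.random.default_rng(seed)
        n, m = X.shape
        U = 0.1 * rng.standard_normal((n, rank))   # latent species factors
        V = 0.1 * rng.standard_normal((m, rank))   # latent trait factors
        obs = np.argwhere(~np.isnan(X))            # indices of observed entries
        for _ in range(epochs):
            rng.shuffle(obs)
            for i, j in obs:
                err = X[i, j] - U[i] @ V[j]
                U[i] += lr * (err * V[j] - reg * U[i])
                V[j] += lr * (err * U[i] - reg * V[j])
        return U @ V.T                              # dense reconstruction, gaps filled
    ```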

  19. Quantifying innovation in surgery.

    Science.gov (United States)

    Hughes-Hallett, Archie; Mayer, Erik K; Marcus, Hani J; Cundy, Thomas P; Pratt, Philip J; Parston, Greg; Vale, Justin A; Darzi, Ara W

    2014-08-01

    The objectives of this study were to assess the applicability of patents and publications as metrics of surgical technology and innovation; evaluate the historical relationship between patents and publications; develop a methodology that can be used to determine the rate of innovation growth in any given health care technology. The study of health care innovation represents an emerging academic field, yet it is limited by a lack of valid scientific methods for quantitative analysis. This article explores and cross-validates 2 innovation metrics using surgical technology as an exemplar. Electronic patenting databases and the MEDLINE database were searched between 1980 and 2010 for "surgeon" OR "surgical" OR "surgery." Resulting patent codes were grouped into technology clusters. Growth curves were plotted for these technology clusters to establish the rate and characteristics of growth. The initial search retrieved 52,046 patents and 1,801,075 publications. The top performing technology cluster of the last 30 years was minimally invasive surgery. Robotic surgery, surgical staplers, and image guidance were the most emergent technology clusters. When examining the growth curves for these clusters they were found to follow an S-shaped pattern of growth, with the emergent technologies lying on the exponential phases of their respective growth curves. In addition, publication and patent counts were closely correlated in areas of technology expansion. This article demonstrates the utility of publically available patent and publication data to quantify innovations within surgical technology and proposes a novel methodology for assessing and forecasting areas of technological innovation.
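
    The S-shaped growth described above is typically captured by a logistic curve; a hedged sketch of fitting one to a cumulative patent count with scipy is shown below. The synthetic counts are placeholders, not data from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """Logistic (S-shaped) growth: plateau K, growth rate r, inflection year t0."""
        return K / (1.0 + np.exp(-r * (t - t0)))

    years = np.arange(1980, 2011, dtype=float)
    counts = logistic(years, 5000, 0.35, 2000) + np.random.normal(0, 50, years.size)  # placeholder data

    params, _ = curve_fit(logistic, years, counts, p0=(counts.max(), 0.1, years.mean()))
    K_hat, r_hat, t0_hat = params   # fitted plateau, rate and inflection point
    ```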

  20. Similarity transformations of MAPs

    Directory of Open Access Journals (Sweden)

    Andersen Allan T.

    1999-01-01

    Full Text Available We introduce the notion of similar Markovian Arrival Processes (MAPs) and show that the event stationary point processes related to two similar MAPs are stochastically equivalent. This holds true for the time stationary point processes too. We show that several well-known stochastic equivalences, such as that between the H2 renewal process and the Interrupted Poisson Process (IPP), can be expressed by the similarity transformations of MAPs. In the appendix the valid region of similarity transformations for two-state MAPs is characterized.

  1. Clustering by Pattern Similarity

    Institute of Scientific and Technical Information of China (English)

    Hai-xun Wang; Jian Pei

    2008-01-01

    The task of clustering is to identify classes of similar objects among a set of objects. The definition of similarity varies from one clustering model to another. However, in most of these models the concept of similarity is often based on such metrics as Manhattan distance, Euclidean distance or other Lp distances. In other words, similar objects must have close values in at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. The new similarity concept models a wide range of applications. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, because it is able to capture not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. In addition to the novel similarity model, this paper also introduces an effective and efficient algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its performance.
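
    The pCluster idea of the abstract above, similarity as a coherent pattern rather than closeness of values, is usually formalised through a pScore on 2x2 submatrices; the tiny sketch below follows that formulation, and the toy gene-expression values are invented for illustration.

    ```python
    def pscore(d_xa, d_xb, d_ya, d_yb):
        """pScore of the 2x2 submatrix [[d_xa, d_xb], [d_ya, d_yb]]:
        0 means objects x and y shift by the same amount from attribute a to b."""
        return abs((d_xa - d_xb) - (d_ya - d_yb))

    # Two genes whose expression rises and falls together despite very different magnitudes:
    x = {"cond_a": 10, "cond_b": 14}
    y = {"cond_a": 100, "cond_b": 104}
    print(pscore(x["cond_a"], x["cond_b"], y["cond_a"], y["cond_b"]))  # 0 -> coherent pattern
    ```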

  2. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the

  3. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. This paper discusses similarity functions as fundamental elements in membership...

  4. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    1998-01-01

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the formatio

  5. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    In data science, there are important parameters that affect the accuracy of the algorithms used. Some of these parameters are: the type of data objects, the membership assignments, and distance or similarity functions. This paper discusses similarity functions as fundamental elements in membership...... assignments. The paper introduces Weighted Feature Distance (WFD), and Prioritized Weighted Feature Distance (PWFD), two new distance functions that take into account the diversity in feature spaces. WFD functions perform better in supervised and unsupervised methods by comparing data objects on their feature...... spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly as WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions are compared with other similarity functions on several data sets...

  6. Quantifying Periodicity in Omics Data

    Directory of Open Access Journals (Sweden)

    Cornelia Amariei

    2014-08-01

    Full Text Available Oscillations play a significant role in biological systems, with many examples in the fast, ultradian, circadian, circalunar and yearly time domains. However, determining periodicity in such data can be problematic. There are a number of computational methods to identify the periodic components in large datasets, such as signal-to-noise based Fourier decomposition, Fisher's g-test and autocorrelation. However, the available methods assume a sinusoidal model and do not attempt to quantify the waveform shape and the presence of multiple periodicities, which provide vital clues in determining the underlying dynamics. Here, we developed a Fourier based measure that generates a de-noised waveform from multiple significant frequencies. This waveform is then correlated with the raw data from the respiratory oscillation found in yeast, to provide oscillation statistics including waveform metrics and multi-periods. The method is compared and contrasted to commonly used statistics. Moreover we show the utility of the program in the analysis of noisy datasets and other high-throughput analyses, such as metabolomics and flow cytometry, respectively.
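
    The Fourier-based measure described above, building a de-noised waveform from significant frequencies and correlating it with the raw signal, can be approximated with a few lines of numpy; keeping the `keep` largest-amplitude components is a simplifying assumption standing in for the paper's significance test.

    ```python
    import numpy as np

    def denoised_waveform(x, dt=1.0, keep=3):
        """Reconstruct a waveform from the `keep` largest-amplitude Fourier
        components and report its correlation with the raw series."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(x.size, d=dt)
        top = np.argsort(np.abs(spec))[::-1][:keep]   # dominant frequency bins
        filtered = np.zeros_like(spec)
        filtered[top] = spec[top]
        wave = np.fft.irfft(filtered, n=x.size)
        r = np.corrcoef(wave, x)[0, 1]                # agreement with the raw data
        return wave, freqs[top], r
    ```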

  7. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
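
    In the spirit of the semantic similarity ensemble above, a minimal sketch of combining several similarity measures into one panel judgment could look like this; the simple unweighted or weighted averaging is an assumed combination rule, not necessarily the one used in the paper, and the measure names in the usage line are hypothetical.

    ```python
    import numpy as np

    def ensemble_similarity(term_a, term_b, measures, weights=None):
        """Combine several similarity measures (each a callable returning a
        score in [0, 1]) into a single ensemble judgment for two terms."""
        scores = np.array([m(term_a, term_b) for m in measures], dtype=float)
        if weights is None:
            return float(scores.mean())
        return float(np.average(scores, weights=np.asarray(weights, dtype=float)))

    # usage sketch with two hypothetical measures:
    # ensemble_similarity("lake", "lagoon", [wordnet_sim, cooccurrence_sim], weights=[2, 1])
    ```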

  8. Similar component analysis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hong; WANG Xin; LI Junwei; CAO Xianguang

    2006-01-01

    A new unsupervised feature extraction method called similar component analysis (SCA) is proposed in this paper. SCA method has a self-aggregation property that the data objects will move towards each other to form clusters through SCA theoretically,which can reveal the inherent pattern of similarity hidden in the dataset. The inputs of SCA are just the pairwise similarities of the dataset,which makes it easier for time series analysis due to the variable length of the time series. Our experimental results on many problems have verified the effectiveness of SCA on some engineering application.

  9. Gender similarities and differences.

    Science.gov (United States)

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.

  10. Self-similar signature of the active solar corona within the inertial range of solar-wind turbulence.

    Science.gov (United States)

    Kiyani, K; Chapman, S C; Hnat, B; Nicol, R M

    2007-05-25

    We quantify the scaling of magnetic energy density in the inertial range of solar-wind turbulence seen in situ at 1 AU with respect to solar activity. At solar maximum, when the coronal magnetic field is dynamic and topologically complex, we find self-similar scaling in the solar wind, whereas at solar minimum, when the coronal fields are more ordered, we find multifractality. This quantifies the solar-wind signature that is of direct coronal origin and distinguishes it from that of local MHD turbulence, with quantitative implications for coronal heating of the solar wind.

  11. Compression-based Similarity

    CERN Document Server

    Vitanyi, Paul M B

    2011-01-01

    First we consider pair-wise distances for literal objects consisting of finite binary files. These files are taken to contain all of their meaning, like genomes or books. The distances are based on compression of the objects concerned, normalized, and can be viewed as similarity distances. Second, we consider pair-wise distances between names of objects, like "red" or "christianity." In this case the distances are based on searches of the Internet. Such a search can be performed by any search engine that returns aggregate page counts. We can extract a code length from the numbers returned, use the same formula as before, and derive a similarity or relative semantics between names for objects. The theory is based on Kolmogorov complexity. We test both similarities extensively experimentally.
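
    The compression-based distance for literal objects mentioned above is usually computed as the normalized compression distance (NCD); a minimal sketch with zlib standing in for an ideal compressor is shown below, and the file names in the usage line are placeholders.

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: close to 0 for near-identical objects,
        close to 1 (or slightly above) for unrelated ones."""
        cx = len(zlib.compress(x))
        cy = len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # e.g. ncd(open("genome_a.fa", "rb").read(), open("genome_b.fa", "rb").read())
    ```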

  12. Analyzing complex networks evolution through Information Theory quantifiers

    Energy Technology Data Exchange (ETDEWEB)

    Carpi, Laura C., E-mail: Laura.Carpi@studentmail.newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Rosso, Osvaldo A., E-mail: rosso@fisica.ufmg.b [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina); Saco, Patricia M., E-mail: Patricia.Saco@newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Hidraulica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avenida Pellegrini 250, Rosario (Argentina); Ravetti, Martin Gomez, E-mail: martin.ravetti@dep.ufmg.b [Departamento de Engenharia de Producao, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Belo Horizonte (31270-901), MG (Brazil)

    2011-01-24

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
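    As a rough illustration of the first quantifier mentioned above, the square root of the Jensen-Shannon divergence, the sketch below compares two discrete probability distributions, for example the degree distributions of a network at two time steps. It is a minimal, hedged implementation; the MPR Statistical Complexity and the specific network distributions analyzed in the paper are not reproduced here.

```python
import numpy as np


def jensen_shannon_distance(p, q, base=2):
    """Square root of the Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        # Kullback-Leibler divergence, ignoring zero-probability entries of a
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)

    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return np.sqrt(jsd)


# toy comparison of two degree distributions (illustrative values only)
print(jensen_shannon_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
```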

  13. Similarity or difference?

    DEFF Research Database (Denmark)

    Villadsen, Anders Ryom

    2013-01-01

    While the organizational structures and strategies of public organizations have attracted substantial research attention among public management scholars, little research has explored how these organizational core dimensions are interconnected and influenced by pressures for similarity… In this paper I address this topic by exploring the relation between expenditure strategy isomorphism and structure isomorphism in Danish municipalities. Different literatures suggest that organizations exist in concurrent pressures for being similar to and different from other organizations in their field… …-shaped relation exists between expenditure strategy isomorphism and structure isomorphism in a longitudinal quantitative study of Danish municipalities…

  14. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria, economic indices characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with the account of economic environment resistance, two dimensionless criteria are obtained which allow to compare economies regardless of their sizes. With the theory of dimensions we show that the obtained indices are not accidental. We demonstrate the implementation of the obtained dimensionless criteria for the analysis of behavior of certain countries' economies. Scientific novelty: the dimensionless criteria are obtained, economic indices which allow to compare economies regardless of their sizes and to analyze the dynamic changes in the economies with time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  15. Incremental Similarity and Turbulence

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.; Hedevang, Emil; Schmiegel, Jürgen

    This paper discusses the mathematical representation of an empirically observed phenomenon, referred to as Incremental Similarity. We discuss this feature from the viewpoint of stochastic processes and present a variety of non-trivial examples, including those that are of relevance for turbulence...

  16. Common ecology quantifies human insurgency.

    Science.gov (United States)

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour.

  17. Nuclear markers reveal that inter-lake cichlids' similar morphologies do not reflect similar genealogy.

    Science.gov (United States)

    Kassam, Daud; Seki, Shingo; Horic, Michio; Yamaoka, Kosaku

    2006-08-01

    The apparent inter-lake morphological similarity among East African Great Lakes' cichlid species/genera has left evolutionary biologists asking whether such similarity is due to the sharing of a common ancestor or to mere convergent evolution. In order to answer this question, we first used Geometric Morphometrics, GM, to quantify morphological similarity and then used Amplified Fragment Length Polymorphism, AFLP, to determine whether similar morphologies imply shared ancestry or convergent evolution. GM revealed that not all presumed morphologically similar pairs were indeed similar, and the dendrogram generated from AFLP data indicated distinct clusters corresponding to each lake and not to inter-lake morphologically similar pairs. Such results imply that the morphological similarity is due to convergent evolution and not shared ancestry. The congruency of the GM- and AFLP-generated dendrograms implies that GM is capable of picking up phylogenetic signal, and thus GM can be a potential tool in phylogenetic systematics.

  18. More Similar Than Different

    DEFF Research Database (Denmark)

    Pedersen, Mogens Jin

    2015-01-01

    What role do employee features play in the success of different personnel management practices for serving high performance? Using data from a randomized survey experiment among 5,982 individuals of all ages, this article examines how gender conditions the compliance effects of different… incentive treatments, each relating to the basic content of distinct types of personnel management practices. The findings show that males and females are more similar than different in terms of the incentive treatments' effects: significant average effects are found for three out of five incentive…

  19. Similar dissection of sets

    CERN Document Server

    Akiyama, Shigeki; Okazaki, Ryotaro; Steiner, Wolfgang; Thuswaldner, Jörg

    2010-01-01

    In 1994, Martin Gardner stated a set of questions concerning the dissection of a square or an equilateral triangle in three similar parts. Meanwhile, Gardner's questions have been generalized and some of them are already solved. In the present paper, we solve more of his questions and treat them in a much more general context. Let $D\\subset \\mathbb{R}^d$ be a given set and let $f_1,...,f_k$ be injective continuous mappings. Does there exist a set $X$ such that $D = X \\cup f_1(X) \\cup ... \\cup f_k(X)$ is satisfied with a non-overlapping union? We prove that such a set $X$ exists for certain choices of $D$ and $\\{f_1,...,f_k\\}$. The solutions $X$ often turn out to be attractors of iterated function systems with condensation in the sense of Barnsley. Coming back to Gardner's setting, we use our theory to prove that an equilateral triangle can be dissected in three similar copies whose areas have ratio $1:1:a$ for $a \\ge (3+\\sqrt{5})/2$.

  20. Interneurons targeting similar layers receive synaptic inputs with similar kinetics.

    Science.gov (United States)

    Cossart, Rosa; Petanjek, Zdravko; Dumitriu, Dani; Hirsch, June C; Ben-Ari, Yehezkel; Esclapez, Monique; Bernard, Christophe

    2006-01-01

    GABAergic interneurons play diverse and important roles in controlling neuronal network dynamics. They are characterized by an extreme heterogeneity morphologically, neurochemically, and physiologically, but a functionally relevant classification is still lacking. Present taxonomy is essentially based on their postsynaptic targets, but a physiological counterpart to this classification has not yet been determined. Using a quantitative analysis based on multidimensional clustering of morphological and physiological variables, we now demonstrate a strong correlation between the kinetics of glutamate and GABA miniature synaptic currents received by CA1 hippocampal interneurons and the laminar distribution of their axons: neurons that project to the same layer(s) receive synaptic inputs with similar kinetics distributions. In contrast, the kinetics distributions of GABAergic and glutamatergic synaptic events received by a given interneuron do not depend upon its somatic location or dendritic arborization. Although the mechanisms responsible for this unexpected observation are still unclear, our results suggest that interneurons may be programmed to receive synaptic currents with specific temporal dynamics depending on their targets and the local networks in which they operate.

  1. Quantifying self-organization in fusion plasmas

    Science.gov (United States)

    Rajković, M.; Milovanović, M.; Škorić, M. M.

    2017-05-01

    A multifaceted framework for understanding self-organization in fusion plasma dynamics is presented which concurrently manages several important issues related to the nonlinear and multiscale phenomena involved, namely, (1) it chooses the optimal template wavelet for the analysis of temporal or spatio-temporal plasma dynamics, (2) it detects parameter values at which bifurcations occur, (3) it quantifies complexity and self-organization, (4) it enables short-term prediction of nonlinear dynamics, and (5) it extracts coherent structures in turbulence by separating them from the incoherent component. The first two aspects, including the detection of changes in the dynamics of a nonlinear system, are illustrated by analyzing Stimulated Raman Scattering in a bounded, weakly dissipative plasma. Self-organization in the fusion plasma is quantitatively analyzed based on the numerical simulations of the Gyrokinetic-Vlasov (GKV) model of plasma dynamics. The parameters for the standard and inward shifted magnetic configurations, relevant for the Large Helical Device, were used in order to quantitatively compare self-organization and complexity in the two configurations. Finally, self-organization is analyzed for three different confinement regimes of the MAST device.

  2. Distances and similarities in intuitionistic fuzzy sets

    CERN Document Server

    Szmidt, Eulalia

    2014-01-01

    This book presents the state-of-the-art in theory and practice regarding similarity and distance measures for intuitionistic fuzzy sets. Quantifying similarity and distances is crucial for many applications, e.g. data mining, machine learning, decision making, and control. The work provides readers with a comprehensive set of theoretical concepts and practical tools for both defining and determining similarity between intuitionistic fuzzy sets. It describes an automatic algorithm for deriving intuitionistic fuzzy sets from data, which can aid in the analysis of information in large databases. The book also discusses other important applications, e.g. the use of similarity measures to evaluate the extent of agreement between experts in the context of decision making.

  3. Stability of similarity measurements for bipartite networks

    CERN Document Server

    Liu, Jian-Guo; Pan, Xue; Guo, Qiang; Zhou, Tao

    2015-01-01

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrices of two data samples which are randomly divided from the original data sets. Results show that the fifteen measurements can be well classified into three clusters according to their stabilities, and measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-$n$-stability method for personalized recommendation, and find that the unstable similarities would recommend false information to users, and that the performance of recommendation is largely improved by using stable similarity measurements. This work provides a novel dimension to analyze and eval...
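    A hedged sketch of the splitting idea described above: randomly divide the links of a bipartite (user-item) matrix into two half-samples, compute an item-item similarity matrix for each half, and correlate the two matrices; a higher correlation indicates a more stable measurement. The cosine similarity, the 50/50 split, and the toy data below are illustrative assumptions, not the paper's exact protocol or its fifteen measurements.

```python
import numpy as np

rng = np.random.default_rng(0)


def cosine_similarity_matrix(A):
    """Item-item cosine similarity for a binary user-item matrix A (users x items)."""
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0] = 1.0
    return (A.T @ A) / np.outer(norms, norms)


def split_links(A):
    """Randomly assign each link of A to one of two half-samples."""
    mask = rng.random(A.shape) < 0.5
    return A * mask, A * (~mask)


def stability(A, sim=cosine_similarity_matrix):
    """Pearson correlation between similarity matrices of two random half-samples."""
    H1, H2 = split_links(A)
    S1, S2 = sim(H1), sim(H2)
    iu = np.triu_indices_from(S1, k=1)  # compare upper triangles only
    return np.corrcoef(S1[iu], S2[iu])[0, 1]


A = (rng.random((200, 50)) < 0.1).astype(float)  # toy bipartite network
print(stability(A))
```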

  4. Quantifying forest LAI succession in sub-tropical forests using time-series of Landsat data, 1987 -2015

    Science.gov (United States)

    Wu, Q.; Song, J.; Wang, J.; Chen, S.; Yu, B.; Liao, L.

    2016-12-01

    Monitoring the dynamics of leaf area index (LAI) throughout the life-cycle of forests (from seeding to maturity) is vital for simulating forest growth and quantifying carbon sequestration. However, all current global LAI products show extremely low accuracy in forests, and their coarse spatial resolution (nearly 1 km) mismatches the spatial scale of forest inventory plots (nearly 26 m × 26 m). To date, several studies have explored the possibility of using satellite data to classify forest succession or predict stand age, and a few studies have explored the potential of using long-term Landsat data to monitor the growing trend of forests, but no studies have quantified the inter-annual and intra-annual LAI dynamics along with forest succession. Vegetation indices are not perfect variables for quantifying forest foliage dynamics. Hallet (1995) suggested remote sensing of biophysical characteristics should shift away from direct inference from vegetation indices toward more physically based algorithms. This work intends to be a pioneering example of improving the accuracy of forest LAI and providing temporally and spatially matching LAI datasets for monitoring forest processes. We integrate the Geometric-Optical and Radiative Transfer (GORT) model with the Physiological Principles Predicting Growth (3-PG) model to improve the estimation of forest canopy LAI dynamics. Reflectance time-series data from 1987 to 2015 were collected and preprocessed for forests in southern China, using all available Landsat data, and such methods may be applied in other similar forests.

  5. Differences and similarities between scalar inferences and scalar modifiers: the case of quantifiers

    NARCIS (Netherlands)

    McNabb, Y.

    2015-01-01

    We explore a distinction between ‘high’ and ‘low’ readings in counterfactual donkey sentences and observe three open issues in the current literature on these sentences: (i) van Rooij (2006) and Wang (2009) make different empirical predictions with respect to the availability of ‘high’ donkey readin

  6. Differences and similarities between scalar inferences and scalar modifiers: the case of quantifiers

    NARCIS (Netherlands)

    McNabb, Y.

    2015-01-01

    We explore a distinction between ‘high’ and ‘low’ readings in counterfactual donkey sentences and observe three open issues in the current literature on these sentences: (i) van Rooij (2006) and Wang (2009) make different empirical predictions with respect to the availability of ‘high’ donkey

  7. Dynamic Logics of Dynamical Systems

    CERN Document Server

    Platzer, André

    2012-01-01

    We survey dynamic logics for specifying and verifying properties of dynamical systems, including hybrid systems, distributed hybrid systems, and stochastic hybrid systems. A dynamic logic is a first-order modal logic with a pair of parametrized modal operators for each dynamical system to express necessary or possible properties of their transition behavior. Due to their full basis of first-order modal logic operators, dynamic logics can express a rich variety of system properties, including safety, controllability, reactivity, liveness, and quantified parametrized properties, even about relations between multiple dynamical systems. In this survey, we focus on some of the representatives of the family of differential dynamic logics, which share the ability to express properties of dynamical systems having continuous dynamics described by various forms of differential equations. We explain the dynamical system models, dynamic logics of dynamical systems, their semantics, their axiomatizations, and proof calcul...

  8. Dynamics

    CERN Document Server

    Goodman, Lawrence E

    2001-01-01

    Beginning text presents complete theoretical treatment of mechanical model systems and deals with technological applications. Topics include introduction to calculus of vectors, particle motion, dynamics of particle systems and plane rigid bodies, technical applications in plane motions, theory of mechanical vibrations, and more. Exercises and answers appear in each chapter.

  9. Exploiting Data Similarity to Reduce Memory Footprints

    Science.gov (United States)

    2011-01-01

    Benchmark applications referenced include leslie3d (a Fortran computational fluid dynamics application), 122.tachyon (a parallel ray-tracing application written in C), and 128.GAPgeofem (a C and Fortran simulation application). Of these, some benefit most from SBLLmalloc; LAMMPS shows moderate similarity, arising primarily from zero pages; and 122.tachyon, an image-rendering application, exhibits similarity across MPI tasks, primarily in zero pages, although a small fraction (≈10%) of the pages are non-zero.

  10. Quantifying resource use in computations

    NARCIS (Netherlands)

    van Son, R.J.J.H.

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance, in

  11. Quantifying resource use in computations

    NARCIS (Netherlands)

    van Son, R.J.J.H.

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance, in

  12. Mining Object Similarity for Predicting Next Locations

    Institute of Scientific and Technical Information of China (English)

    Meng Chen; Xiaohui Yu; Yang Liu

    2016-01-01

    Next location prediction is of great importance for many location-based applications. With the virtue of solid theoretical foundations, Markov-based approaches have gained success along this direction. In this paper, we seek to enhance the prediction performance by understanding the similarity between objects. In particular, we propose a novel method, called weighted Markov model (weighted-MM), which exploits both the sequence of just-passed locations and the object similarity in mining the mobility patterns. To this end, we first train a Markov model for each object with its own trajectory records, and then quantify the similarities between different objects from two aspects: spatial locality similarity and trajectory similarity. Finally, we incorporate the object similarity into the Markov model by considering the similarity as the weight of the probability of reaching each possible next location, and return the top-rankings as results. We have conducted extensive experiments on a real dataset, and the results demonstrate significant improvements in prediction accuracy over existing solutions.
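    The sketch below illustrates the general idea of a similarity-weighted Markov predictor under simplifying assumptions: each object gets first-order transition counts from its own trajectory, and candidate next locations are scored by mixing every object's transition probabilities, weighted by a precomputed object similarity. How that similarity is derived from spatial locality and trajectory overlap in the paper is not reproduced here; the weighting scheme, helper names, and toy data are assumptions.

```python
from collections import Counter, defaultdict


def train_markov(trajectory):
    """First-order transition counts for one object's location sequence."""
    model = defaultdict(Counter)
    for prev, nxt in zip(trajectory, trajectory[1:]):
        model[prev][nxt] += 1
    return model


def predict_next(models, similarity, obj, current, top_n=3):
    """Rank candidate next locations for `obj`, weighting each object's
    transition probabilities by its similarity to `obj` (self-weight = 1)."""
    scores = Counter()
    for other, model in models.items():
        w = 1.0 if other == obj else similarity.get((obj, other), 0.0)
        counts = model.get(current)
        if w == 0.0 or not counts:
            continue
        total = sum(counts.values())
        for loc, c in counts.items():
            scores[loc] += w * c / total
    return [loc for loc, _ in scores.most_common(top_n)]


# toy usage with two objects and a hypothetical similarity score
models = {
    "a": train_markov(["home", "work", "gym", "home", "work", "cafe"]),
    "b": train_markov(["home", "work", "cafe", "home", "work", "gym"]),
}
similarity = {("a", "b"): 0.8, ("b", "a"): 0.8}
print(predict_next(models, similarity, "a", "work"))
```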

  13. Groundwater similarity across a watershed derived from time-warped and flow-corrected time series

    Science.gov (United States)

    Rinderer, M.; McGlynn, B. L.; van Meerveld, H. J.

    2017-05-01

    Information about catchment-scale groundwater dynamics is necessary to understand how catchments store and release water and why water quantity and quality varies in streams. However, groundwater level monitoring is often restricted to a limited number of sites. Knowledge of the factors that determine similarity between monitoring sites can be used to predict catchment-scale groundwater storage and connectivity of different runoff source areas. We used distance-based and correlation-based similarity measures to quantify the spatial and temporal differences in shallow groundwater similarity for 51 monitoring sites in a Swiss prealpine catchment. The 41 months long time series were preprocessed using Dynamic Time-Warping and a Flow-corrected Time Transformation to account for small timing differences and bias toward low-flow periods. The mean distance-based groundwater similarity was correlated to topographic indices, such as upslope contributing area, topographic wetness index, and local slope. Correlation-based similarity was less related to landscape position but instead revealed differences between seasons. Analysis of variance and partial Mantel tests showed that landscape position, represented by the topographic wetness index, explained 52% of the variability in mean distance-based groundwater similarity, while spatial distance, represented by the Euclidean distance, explained only 5%. The variability in distance-based similarity and correlation-based similarity between groundwater and streamflow time series was significantly larger for midslope locations than for other landscape positions. This suggests that groundwater dynamics at these midslope sites, which are important to understand runoff source areas and hydrological connectivity at the catchment scale, are most difficult to predict.
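    For readers unfamiliar with Dynamic Time Warping, the following minimal sketch computes the classic DTW distance between two univariate series, the kind of distance-based comparison the study builds on. The Flow-corrected Time Transformation and the study's exact similarity measures are not reproduced, and the toy series are assumptions.

```python
import numpy as np


def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic time warping distance between two series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]


# toy water-level-like series with the same shape but a small timing offset
a = np.sin(np.linspace(0, 6, 120))
b = np.sin(np.linspace(0, 6, 120) - 0.3)
# DTW should be noticeably smaller than the point-wise L1 distance
print(dtw_distance(a, b), np.abs(a - b).sum())
```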

  14. Efficient Similarity Retrieval in Music Databases

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Jensen, Christian Søndergaard

    2006-01-01

    object is modeled as a time sequence of high-dimensional feature vectors, and dynamic time warping (DTW) is used as the similarity measure. To accomplish this, the paper extends techniques for time-series-length reduction and lower bounding of DTW distance to the multi-dimensional case. Further...

  15. Meditations on Quantified Constraint Satisfaction

    CERN Document Server

    Chen, Hubie

    2012-01-01

    The quantified constraint satisfaction problem (QCSP) is the problem of deciding, given a structure and a first-order prenex sentence whose quantifier-free part is the conjunction of atoms, whether or not the sentence holds on the structure. One obtains a family of problems by defining, for each structure B, the problem QCSP(B) to be the QCSP where the structure is fixed to be B. In this article, we offer a viewpoint on the research program of understanding the complexity of the problems QCSP(B) on finite structures. In particular, we propose and discuss a group of conjectures; throughout, we attempt to place the conjectures in relation to existing results and to emphasize open issues and potential research directions.

  16. Quantifier Elimination by Dependency Sequents

    CERN Document Server

    Goldberg, Eugene

    2012-01-01

    We consider the problem of existential quantifier elimination for Boolean formulas in Conjunctive Normal Form (CNF). We present a new method for solving this problem called Derivation of Dependency-Sequents (DDS). A Dependency-sequent (D-sequent) is used to record that a set of quantified variables is redundant under a partial assignment. We show that D-sequents can be resolved to obtain new, non-trivial D-sequents. We also show that DDS is compositional, i.e. if our input formula is a conjunction of independent formulas, DDS automatically recognizes and exploits this information. We introduce an algorithm based on DDS and present experimental results demonstrating its potential.

  17. Estimating similarity of XML Schemas using path similarity measure

    Directory of Open Access Journals (Sweden)

    Veena Trivedi

    2012-07-01

    Full Text Available In this paper, an attempt has been made to develop an algorithm which estimates the similarity of XML Schemas using multiple similarity measures. To perform the task, the XML Schema element information is represented in the form of strings, and four different similarity measure approaches are employed. To further improve the similarity measure, an overall similarity measure is also calculated. The approach used in this paper is distinctive in that it calculates the similarity between two XML schemas using four approaches and gives an integrated value for the similarity measure.
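    A hedged sketch in the spirit of the approach above: element paths are treated as strings, several string similarity measures are computed, and their mean is reported as an integrated value. The three measures shown and the combination by simple averaging are illustrative assumptions; the paper uses its own four measures and combination.

```python
from difflib import SequenceMatcher


def token_jaccard(p1, p2):
    """Jaccard similarity of the element names along two paths."""
    a, b = set(p1.split("/")), set(p2.split("/"))
    return len(a & b) / len(a | b) if a | b else 1.0


def edit_ratio(p1, p2):
    """Character-level similarity ratio (difflib's gestalt matching)."""
    return SequenceMatcher(None, p1, p2).ratio()


def prefix_overlap(p1, p2):
    """Fraction of leading path steps the two paths share."""
    a, b = p1.split("/"), p2.split("/")
    k = 0
    for x, y in zip(a, b):
        if x != y:
            break
        k += 1
    return k / max(len(a), len(b))


def path_similarity(p1, p2, measures=(token_jaccard, edit_ratio, prefix_overlap)):
    """Overall similarity as the mean of the individual measures."""
    return sum(m(p1, p2) for m in measures) / len(measures)


print(path_similarity("order/customer/name", "purchaseOrder/client/name"))
```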

  18. Quantifying and measuring cyber resiliency

    Science.gov (United States)

    Cybenko, George

    2016-05-01

    Cyber resiliency has become an increasingly attractive research and operational concept in cyber security. While several metrics have been proposed for quantifying cyber resiliency, a considerable gap remains between those metrics and operationally measurable and meaningful concepts that can be empirically determined in a scientific manner. This paper describes a concrete notion of cyber resiliency that can be tailored to meet the specific needs of organizations that seek to introduce resiliency into their assessment of their cyber security posture.

  19. Something for Everyone: Quantifying Evolving (Glacial) Landscapes with Your Camera

    Science.gov (United States)

    Welty, E.; Pfeffer, W. T.; Ahn, Y.

    2010-12-01

    At the Columbia, a tidewater glacier in Alaska's Prince William Sound, aerial stereophotography has been flown at least twice each year since 1976. After costly capture and careful processing, the images yield snapshots of the elevations and velocities at the ice surface, recounting the story of the glacier's dynamic retreat in high resolution 3D. Now, recent advances in computer vision may allow us to acquire similar data for much less - we present a technique for quantifying evolving landscapes with nothing more than your camera, with little to no need for ground surveying or existing geospatial data. While well studied, the Columbia Glacier is not alone - the collective response of the countless small glaciers and ice caps has far-reaching impacts on global sea-level rise and the distribution and timing of water resources. As we speak, the millions of camera-bearers among us are undertaking an unprecedented photographic survey of the physical world. Developing scientific applications for digital photography could open the doors for public participation in research on a huge scale, and enable global, detailed and low-budget monitoring of the cryosphere.

  20. Statistical energy analysis of similarly coupled systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian

    2002-01-01

    Based on the principle of Statistical Energy Analysis (SEA) for non-conservatively coupled dynamical systems under non-correlative or correlative excitations, an energy relationship between two similar SEA systems is established in this paper. The energy relationship is verified theoretically and experimentally on two similar SEA systems, i.e., the structure of a coupled panel-beam and that of a coupled panel-sideframe, in the cases of conservative coupling and non-conservative coupling respectively. As an application of the method, the relationship between the noise power radiated from two similar cutting systems is studied. Results show that there is good agreement between the theory and the experiments, and that the method is valuable for analyzing dynamical problems of a complicated system by reference to a similar but simpler one.

  1. Quantifying Emergent Behavior of Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Georg Martius

    2015-10-01

    Full Text Available Quantifying behaviors of robots which were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states one has to deal with the fact that their values will depend on the resolution with which the system's states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into resolution-dependent and resolution-independent parts and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information based on nearest-neighbor statistics, because the latter allows less control of the scale dependencies. Using our algorithm we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
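    As a rough, hedged illustration of the block-entropy view of predictive information (not the correlation-integral estimator proposed in the paper), the sketch below discretizes a series at a chosen resolution and estimates I(past; future) as 2H(k) - H(2k) from empirical block frequencies. The bin count, block length, and toy signals are assumptions; small block lengths and few bins are used to keep the finite-sample bias of the plug-in estimate manageable.

```python
import numpy as np
from collections import Counter


def block_entropy(symbols, k):
    """Shannon entropy (bits) of length-k blocks of a symbol sequence."""
    blocks = [tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))


def predictive_information(x, k=2, n_bins=4):
    """I(past; future) for length-k past/future blocks of a quantile-discretized series."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    s = np.digitize(x, edges)
    # I(past; future) = H(past) + H(future) - H(past, future) = 2*H(k) - H(2k)
    return 2 * block_entropy(s, k) - block_entropy(s, 2 * k)


rng = np.random.default_rng(1)
noise = rng.normal(size=5000)
signal = np.sin(np.linspace(0, 200, 5000)) + 0.1 * rng.normal(size=5000)
# the structured signal should typically yield the larger value
print(predictive_information(noise), predictive_information(signal))
```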

  2. Towards Quantifying a Wider Reality: Shannon Exonerata

    Directory of Open Access Journals (Sweden)

    Robert E. Ulanowicz

    2011-10-01

    Full Text Available In 1872 Ludwig von Boltzmann derived a statistical formula to represent the entropy (an apophasis of a highly simplistic system. In 1948 Claude Shannon independently formulated the same expression to capture the positivist essence of information. Such contradictory thrusts engendered decades of ambiguity concerning exactly what is conveyed by the expression. Resolution of widespread confusion is possible by invoking the third law of thermodynamics, which requires that entropy be treated in a relativistic fashion. Doing so parses the Boltzmann expression into separate terms that segregate apophatic entropy from positivist information. Possibly more importantly, the decomposition itself portrays a dialectic-like agonism between constraint and disorder that may provide a more appropriate description of the behavior of living systems than is possible using conventional dynamics. By quantifying the apophatic side of evolution, the Shannon approach to information achieves what no other treatment of the subject affords: It opens the window on a more encompassing perception of reality.

  3. Removal of Quantifiers by Elimination of Boundary Points

    CERN Document Server

    Goldberg, Eugene

    2012-01-01

    We consider the problem of elimination of existential quantifiers from a Boolean CNF formula. Our approach is based on the following observation: one can get rid of the dependency on a set of variables of a quantified CNF formula F by adding resolvent clauses of F that eliminate boundary points. This approach is similar to the method of quantifier elimination described in [9]. The difference of the method described in the present paper is twofold: branching is performed only on quantified variables, and an explicit search for boundary points is performed by calls to a SAT solver. Although we published the paper [9] before this one, chronologically the method of the present report was developed first. Preliminary presentations of this method were made in [10], [11]. We postponed a publication of this method due to preparation of a patent application [8].

  4. Another analytic view about quantifying social forces

    CERN Document Server

    Ausloos, Marcel

    2012-01-01

    Montroll had considered a Verhulst evolution approach for introducing a notion he called "social force", to describe a jump in some economic output when a new technology or product outcompetes a previous one. In fact, Montroll's adaptation of Verhulst equation is more like an economic field description than a "social force". The empirical Verhulst logistic function and the Gompertz double exponential law are used here in order to present an alternative view, within a similar mechanistic physics framework. As an example, a "social force" modifying the rate in the number of temples constructed by a religious movement, the Antoinist community, between 1910 and 1940 in Belgium is found and quantified. Practically, two temple inauguration regimes are seen to exist over different time spans, separated by a gap attributed to a specific "constraint", a taxation system, but allowing for a different, smooth, evolution rather than a jump. The impulse force duration is also emphasized as being better taken into account w...

  5. Quantifying the Cognitive Extent of Science

    CERN Document Server

    Milojević, Staša

    2015-01-01

    While modern science is characterized by an exponential growth in scientific literature, the increase in publication volume clearly does not reflect the expansion of the cognitive boundaries of science. Nevertheless, most of the metrics for assessing the vitality of science or for making funding and policy decisions are based on productivity. Similarly, the increasing level of knowledge production by large science teams, whose results often enjoy greater visibility, does not necessarily mean that "big science" leads to cognitive expansion. Here we present a novel, big-data method to quantify the extents of cognitive domains of different bodies of scientific literature independently from publication volume, and apply it to 20 million articles published over 60-130 years in physics, astronomy, and biomedicine. The method is based on the lexical diversity of titles of fixed quotas of research articles. Owing to the large size of the quotas, the method overcomes the inherent stochasticity of article titles to achieve...

  6. Quantifying mixing using equilibrium reactions

    Science.gov (United States)

    Wheat, Philip M.; Posner, Jonathan D.

    2009-03-01

    A method of quantifying equilibrium reactions in a microchannel using a fluorometric reaction of Fluo-4 and Ca2+ ions is presented. Under the proper conditions, equilibrium reactions can be used to quantify fluid mixing without the challenges associated with constituent mixing measures such as limited imaging spatial resolution and viewing angle coupled with three-dimensional structure. Quantitative measurements of CaCl and calcium-indicating fluorescent dye Fluo-4 mixing are measured in Y-shaped microchannels. Reactant and product concentration distributions are modeled using Green's function solutions and a numerical solution to the advection-diffusion equation. Equilibrium reactions provide for an unambiguous, quantitative measure of mixing when the reactant concentrations are greater than 100 times their dissociation constant and the diffusivities are equal. At lower concentrations and for dissimilar diffusivities, the area averaged fluorescence signal reaches a maximum before the species have interdiffused, suggesting that reactant concentrations and diffusivities must be carefully selected to provide unambiguous, quantitative mixing measures. Fluorometric equilibrium reactions work over a wide range of pH and background concentrations such that they can be used for a wide variety of fluid mixing measures including industrial or microscale flows.

  7. Quantifying acoustic damping using flame chemiluminescence

    Science.gov (United States)

    Boujo, E.; Denisov, A.; Schuermans, B.; Noiray, N.

    2016-12-01

    Thermoacoustic instabilities in gas turbines and aeroengine combustors falls within the category of complex systems. They can be described phenomenologically using nonlinear stochastic differential equations, which constitute the grounds for output-only model-based system identification. It has been shown recently that one can extract the governing parameters of the instabilities, namely the linear growth rate and the nonlinear component of the thermoacoustic feedback, using dynamic pressure time series only. This is highly relevant for practical systems, which cannot be actively controlled due to a lack of cost-effective actuators. The thermoacoustic stability is given by the linear growth rate, which results from the combination of the acoustic damping and the coherent feedback from the flame. In this paper, it is shown that it is possible to quantify the acoustic damping of the system, and thus to separate its contribution to the linear growth rate from the one of the flame. This is achieved by post-processing in a simple way simultaneously acquired chemiluminescence and acoustic pressure data. It provides an additional approach to further unravel from observed time series the key mechanisms governing the system dynamics. This straightforward method is illustrated here using experimental data from a combustion chamber operated at several linearly stable and unstable operating conditions.

  8. Geographical variation in mutualistic networks: similarity, turnover and partner fidelity.

    Science.gov (United States)

    Trøjelsgaard, Kristian; Jordano, Pedro; Carstensen, Daniel W; Olesen, Jens M

    2015-03-07

    Although species and their interactions in unison represent biodiversity and all the ecological and evolutionary processes associated with life, biotic interactions have, contrary to species, rarely been integrated into the concepts of spatial β-diversity. Here, we examine β-diversity of ecological networks by using pollination networks sampled across the Canary Islands. We show that adjacent and distant communities are more and less similar, respectively, in their composition of plants, pollinators and interactions than expected from random distributions. We further show that replacement of species is the major driver of interaction turnover and that this contribution increases with distance. Finally, we quantify that species-specific partner compositions (here called partner fidelity) deviate from random partner use, but vary as a result of ecological and geographical variables. In particular, breakdown of partner fidelity was facilitated by increasing geographical distance, changing abundances and changing linkage levels, but was not related to the geographical distribution of the species. This highlights the importance of space when comparing communities of interacting species and may stimulate a rethinking of the spatial interpretation of interaction networks. Moreover, geographical interaction dynamics and its causes are important in our efforts to anticipate effects of large-scale changes, such as anthropogenic disturbances.

  9. Functional Similarity and Interpersonal Attraction.

    Science.gov (United States)

    Neimeyer, Greg J.; Neimeyer, Robert A.

    1981-01-01

    Students participated in dyadic disclosure exercises over a five-week period. Results indicated members of high functional similarity dyads evidenced greater attraction to one another than did members of low functional similarity dyads. "Friendship" pairs of male undergraduates displayed greater functional similarity than did…

  10. Functional Similarity and Interpersonal Attraction.

    Science.gov (United States)

    Neimeyer, Greg J.; Neimeyer, Robert A.

    1981-01-01

    Students participated in dyadic disclosure exercises over a five-week period. Results indicated members of high functional similarity dyads evidenced greater attraction to one another than did members of low functional similarity dyads. "Friendship" pairs of male undergraduates displayed greater functional similarity than did "nominal" pairs from…

  11. Lexical NP and VP quantifiers in Bulgarian

    Directory of Open Access Journals (Sweden)

    Kristina Kalpakchieva

    2015-11-01

    Full Text Available Lexical NP and VP quantifiers in Bulgarian. The paper focuses on uniqueness, existential and universal quantification within the Bulgarian noun and verb phrase. Quantifier scope is considered with respect to whether the quantifiers are used alone or in a group with other expressions. Another factor that affects the strength of quantifiers is whether the expression contains additional specifying functions or sets some circumstance or condition. Quantifiers within the verb phrase are particularly strongly affected by other conditions, while quantifiers within the subject NP have a broad scope and are not affected by the additional conditions of the situation described.

  12. Quantifying Resource Use in Computations

    CERN Document Server

    van Son, R J J H

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance, in cryptanalysis, and in neuroscience, for instance, comparative neuro-anatomy. A System versus Environment game formalism is proposed, based on Computability Logic, that allows one to define a computational work function describing the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, e.g., the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a na...

  13. Quantifying and simulating human sensation

    DEFF Research Database (Denmark)

    Quantifying and simulating human sensation – relating science and technology of indoor climate research Abstract In his doctoral thesis from 1970 civil engineer Povl Ole Fanger proposed that the understanding of indoor climate should focus on the comfort of the individual rather than averaged...... archival material related to Lund Madsen’s efforts are preserved at the Technical University of Denmark and I have used these artefacts as the point of departure for my investigation. In this paper I will examine which factors the researchers perceived as important for human indoor comfort and how...... this understanding of human sensation was adjusted to technology. I will look into the construction of the equipment, what it measures and the relationship between theory, equipment and tradition....

  14. Multidimensional Scaling Visualization using Parametric Similarity Indices

    OpenAIRE

    Tenreiro Machado, J. A.; António M. Lopes; Alexandra M. Galhano

    2015-01-01

    In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, an...

  15. Multidimensional Scaling Visualization using Parametric Similarity Indices

    OpenAIRE

    Tenreiro Machado, J. A.; Lopes, António M.; Alexandra M. Galhano

    2015-01-01

    In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, an...
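    A minimal sketch of the windowing-plus-MDS pipeline described above, with a Minkowski-q distance standing in for a parametric similarity index (the actual PSI family used in the papers is not reproduced); the window length, step, parameter value, and toy series are assumptions.

```python
import numpy as np
from sklearn.manifold import MDS


def window_distance_matrix(series, window, step, q=2.0):
    """Pairwise Minkowski-q distances between sliding windows of a series;
    q plays the role of the tunable parameter of the similarity index."""
    windows = np.array([series[i:i + window]
                        for i in range(0, len(series) - window + 1, step)])
    diff = windows[:, None, :] - windows[None, :, :]
    return (np.abs(diff) ** q).sum(axis=-1) ** (1.0 / q)


series = np.sin(np.linspace(0, 50, 2000)) + 0.05 * np.random.default_rng(2).normal(size=2000)
D = window_distance_matrix(series, window=100, step=50)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(coords.shape)  # one 2-D point per time window; trajectories of points can reveal regime changes
```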

  16. Quantifying Evaporation in a Permeable Pavement System

    Science.gov (United States)

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  17. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Full Text Available The semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we make a comparison of four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are introduced and compared firstly. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high for a possible reason that all these models are designed to simulate the similarity judgement of human mind.

  18. a Comparison of Semantic Similarity Models in Evaluating Concept Similarity

    Science.gov (United States)

    Xu, Q. X.; Shi, W. Z.

    2012-08-01

    The semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we make a comparison of four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are introduced and compared firstly. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high for a possible reason that all these models are designed to simulate the similarity judgement of human mind.

  19. Learning Multi-modal Similarity

    CERN Document Server

    McFee, Brian

    2010-01-01

    In many applications involving multi-media data, the definition of similarity between items is integral to several key tasks, e.g., nearest-neighbor retrieval, classification, and recommendation. Data in such regimes typically exhibits multiple modalities, such as acoustic and visual content of video. Integrating such heterogeneous data to form a holistic similarity space is therefore a key challenge to be overcome in many real-world applications. We present a novel multiple kernel learning technique for integrating heterogeneous data into a single, unified similarity space. Our algorithm learns an optimal ensemble of kernel transformations which conform to measurements of human perceptual similarity, as expressed by relative comparisons. To cope with the ubiquitous problems of subjectivity and inconsistency in multi-media similarity, we develop graph-based techniques to filter similarity measurements, resulting in a simplified and robust training procedure.

  20. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction, critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
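    As one concrete example of "hashing that respects local similarity", the sketch below implements random-hyperplane (SimHash-style) locality-sensitive hashing for cosine similarity. It is an illustrative member of that family, not necessarily any specific algorithm surveyed in the article; the vector dimension, bit count, and toy vectors are assumptions.

```python
import numpy as np


class RandomHyperplaneLSH:
    """Similarity-preserving hashing: vectors that are close in cosine similarity
    tend to agree on most hash bits, so they land in the same or nearby buckets."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))  # one random hyperplane per bit

    def hash(self, v):
        bits = (self.planes @ v) > 0
        return int("".join("1" if b else "0" for b in bits), 2)


rng = np.random.default_rng(3)
lsh = RandomHyperplaneLSH(dim=64)
x = rng.normal(size=64)
near = x + 0.05 * rng.normal(size=64)  # small perturbation of x
far = rng.normal(size=64)              # unrelated vector
hamming = lambda a, b: bin(a ^ b).count("1")
# the (x, near) pair should differ in far fewer bits than (x, far)
print(hamming(lsh.hash(x), lsh.hash(near)), hamming(lsh.hash(x), lsh.hash(far)))
```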

  1. Similarity Learning of Manifold Data.

    Science.gov (United States)

    Chen, Si-Bao; Ding, Chris H Q; Luo, Bin

    2015-09-01

    Without constructing adjacency graph for neighborhood, we propose a method to learn similarity among sample points of manifold in Laplacian embedding (LE) based on adding constraints of linear reconstruction and least absolute shrinkage and selection operator type minimization. Two algorithms and corresponding analyses are presented to learn similarity for mix-signed and nonnegative data respectively. The similarity learning method is further extended to kernel spaces. The experiments on both synthetic and real world benchmark data sets demonstrate that the proposed LE with new similarity has better visualization and achieves higher accuracy in classification.

  2. Quantifier Scope in Categorical Compositional Distributional Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2016-08-01

    Full Text Available In previous work with J. Hedges, we formalised a generalised quantifiers theory of natural language in categorical compositional distributional semantics with the help of bialgebras. In this paper, we show how quantifier scope ambiguity can be represented in that setting and how this representation can be generalised to branching quantifiers.

  3. Systems chemistry : using thermodynamically controlled networks to assess molecular similarity

    NARCIS (Netherlands)

    Saggiomo, Vittorio; Hristova, Yana R.; Ludlow, R. Frederick; Otto, Sijbren

    2013-01-01

    Background: The assessment of molecular similarity is a key step in the drug discovery process that has thus far relied almost exclusively on computational approaches. We now report an experimental method for similarity assessment based on dynamic combinatorial chemistry. Results: In order to assess molecular similarity

  4. Applications of Quantified Constraint Solving over the Reals - Bibliography

    CERN Document Server

    Ratschan, Stefan

    2012-01-01

    Quantified constraints over the reals appear in numerous contexts. Usually existential quantification occurs when some parameter can be chosen by the user of a system, and universal quantification when the exact value of a parameter is either unknown, or when it occurs in infinitely many, similar versions. The following is a list of application areas and publications that contain applications for solving quantified constraints over the reals. The list is certainly not complete, but grows as the author encounters new items. Contributions are very welcome!

  5. Visual Similarity Based Document Layout Analysis

    Institute of Scientific and Technical Information of China (English)

    Di Wen; Xiao-Qing Ding

    2006-01-01

    In this paper, a visual similarity based document layout analysis (DLA) scheme is proposed, which by using a clustering strategy can adaptively deal with documents in different languages, with different layout structures and skew angles. Aiming at a robust and adaptive DLA approach, the authors first manage to find a set of representative filters and statistics to characterize typical texture patterns in document images, through a visual similarity testing process. Texture features are then extracted from these filters and passed into a dynamic clustering procedure, which is called visual similarity clustering. Finally, text contents are located from the clustered results. Benefiting from this scheme, the algorithm demonstrates strong robustness and adaptability on a wide variety of documents, which previous traditional DLA approaches do not possess.

  6. Similarity searching in large combinatorial chemistry spaces

    Science.gov (United States)

    Rarey, Matthias; Stahl, Martin

    2001-06-01

    We present a novel algorithm, called Ftrees-FS, for similarity searching in large chemistry spaces based on dynamic programming. Given a query compound, the algorithm generates sets of compounds from a given chemistry space that are similar to the query. The similarity search is based on the feature tree similarity measure representing molecules by tree structures. This descriptor allows handling combinatorial chemistry spaces as a whole instead of looking at subsets of enumerated compounds. Within a few minutes of computing time, the algorithm is able to find the most similar compound in very large spaces as well as sets of compounds at an arbitrary similarity level. In addition, the diversity among the generated compounds can be controlled. A set of 17 000 fragments of known drugs, generated by the RECAP procedure from the World Drug Index, was used as the search chemistry space. These fragments can be combined into more than 10^18 compounds of reasonable size. For validation, known antagonists/inhibitors of several targets including dopamine D4, histamine H1, and COX2 are used as queries. Comparison of the compounds created by Ftrees-FS to other known actives demonstrates the ability of the method to jump between structurally unrelated molecule classes.

  7. A Signal Processing Method to Explore Similarity in Protein Flexibility

    Directory of Open Access Journals (Sweden)

    Simina Vasilache

    2010-01-01

    Full Text Available Understanding mechanisms of protein flexibility is of great importance to structural biology. The ability to detect similarities between proteins and their patterns is vital in discovering new information about unknown protein functions. A Distance Constraint Model (DCM provides a means to generate a variety of flexibility measures based on a given protein structure. Although information about mechanical properties of flexibility is critical for understanding protein function for a given protein, the question of whether certain characteristics are shared across homologous proteins is difficult to assess. For a proper assessment, a quantified measure of similarity is necessary. This paper begins to explore image processing techniques to quantify similarities in signals and images that characterize protein flexibility. The dataset considered here consists of three different families of proteins, with three proteins in each family. The similarities and differences found within flexibility measures across homologous proteins do not align with sequence-based evolutionary methods.

  8. Quantifying Cricket Fast Bowling Skill.

    Science.gov (United States)

    Feros, Simon A; Young, Warren B; O'Brien, Brendan J

    2017-09-27

    To evaluate the current evidence regarding the quantification of cricket fast bowling skill. Studies that assessed fast bowling skill (bowling speed and accuracy) were identified from searches in SPORTDiscus (EBSCO) in June 2017. The reference lists of identified papers were also examined for relevant investigations. Sixteen papers matched the inclusion criteria, and discrepancies in assessment procedures were evident. Differences in: test environment, pitch and cricket ball characteristics, the warm-up prior to test, test familiarisation procedures, permitted run-up lengths, bowling spell length, delivery sequence, test instructions, collection of bowling speed data, collection and reportage of bowling accuracy data were apparent throughout the literature. The reliability and sensitivity of fast bowling skill measures has rarely been reported across the literature. Only one study has attempted to assess the construct validity of their skill measures. There are several discrepancies in how fast bowling skill has been assessed and subsequently quantified in the literature to date. This is a problem, as comparisons between studies are often difficult. Therefore, a strong rationale exists for the creation of match-specific standardised fast bowling assessments that offer greater ecological validity while maintaining acceptable reliability and sensitivity of the skill measures. If prospective research can act on the proposed recommendations from this review, then coaches will be able to make more informed decisions surrounding player selection, talent identification, return to skill following injury, and the efficacy of short- and long-term training interventions for fast bowlers.

  9. Quantifying the vitamin D economy.

    Science.gov (United States)

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy.

  10. Similarity-model simulation of dynamic water-table behaviour during mining of shallow-buried coal seams in Northern Shaanxi Province

    Institute of Scientific and Technical Information of China (English)

    李涛; 李文平; 常金源; 都平平; 高颖

    2011-01-01

    Taking a working face in the water-table fluctuation area of Northern Shaanxi Province as the background, a purpose-designed fluid-solid coupling physical model was used for similarity simulation. Bed separation development, the height of the water-flowing fractured zone, subsidence of the key water-resisting clay layer, dynamic changes of the water table during mining, and water-table recovery after mining stopped were observed, and the mechanism of dynamic water-table change in the fluctuation area was studied. The results show that elastic deformation of the bedrock and plastic deformation of the water-resisting clay caused by mining are the key to simulating the additional stress-strain field, while the hydrological properties of the water-bearing sand layer are the key to simulating the mining-induced seepage field. As mining proceeds, successive bending of the bedrock sub-key strata produces bed separation and eventual fracture, whereas the water-resisting clay layer remains stable during the breakage and caving of the overburden and gradually forms a subsidence basin; the water table drops sharply and cyclically as the subsidence basin forms. Because of the strong mobility and recharge capacity of the water-bearing sand, the separated beds gradually close with time, lowering the water table further. After mining stops, lateral recharge of the groundwater causes the water level to recover quickly at first and more slowly later.

  11. Wavelet transform in similarity paradigm

    NARCIS (Netherlands)

    Z.R. Struzik; A.P.J.M. Siebes (Arno)

    1998-01-01

    [INS-R9802] Searching for similarity in time series is finding ever broader applications in data mining. However, due to the very broad spectrum of data involved, there is no possibility of defining one single notion of similarity suitable to serve all applications. We present a powerful

  12. Quantifying climate changes of the Common Era for Finland

    Science.gov (United States)

    Luoto, Tomi P.; Nevalainen, Liisa

    2016-11-01

    In this study, we aim to quantify summer air temperatures from sediment records from Southern, Central and Northern Finland over the past 2000 years. We use lake sediment archives to estimate paleotemperatures applying fossil Chironomidae assemblages and the transfer function approach. The enhanced Chironomidae-based temperature calibration set was validated against instrumentally measured temperatures in a 70-year high-resolution sediment record. Since the inferred and observed temperatures showed close correlation, we deduced that the new calibration model is reliable for reconstructions beyond the monitoring records. The 700-year temperature reconstructions from three sites at multi-decadal temporal resolution showed similar trends, although they differed in the timing of the cold Little Ice Age (LIA) and the initiation of recent warming. The 2000-year multi-centennial reconstructions from three different sites resembled one another, with clear signals of the Medieval Climate Anomaly (MCA) and LIA, but with differences in their timing. The influence of external forcing on the climate of the southern and central sites appeared to be complex at the decadal scale, but the North Atlantic Oscillation (NAO) was closely linked to the temperature development of the northern site. Solar activity appears to be synchronous with the temperature fluctuations at the multi-centennial scale at all the sites. The present study provides new insights into centennial and decadal variability in air temperature dynamics in Northern Europe and into the external forcing behind these trends. These results are particularly useful in comparing regional responses and lags of temperature trends between different parts of Scandinavia.

  13. Quantifying uncertainty from material inhomogeneity.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femtosecond laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the

  14. Similarity of samples and trimming

    CERN Document Server

    Álvarez-Esteban, Pedro C; Cuesta-Albertos, Juan A; Matrán, Carlos; 10.3150/11-BEJ351

    2012-01-01

    We say that two probabilities are similar at level $\alpha$ if they are contaminated versions (up to an $\alpha$ fraction) of the same common probability. We show how this model is related to minimal distances between sets of trimmed probabilities. Empirical versions turn out to present an overfitting effect in the sense that trimming beyond the similarity level results in trimmed samples that are closer than expected to each other. We show how this can be combined with a bootstrap approach to assess similarity from two data samples.

  15. Contextual Bandits with Similarity Information

    CERN Document Server

    Slivkins, Aleksandrs

    2009-01-01

    In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence of choices. In each round it chooses from a time-invariant set of alternatives and receives the payoff associated with this alternative. While the case of small strategy sets is by now well-understood, a lot of recent work has focused on MAB problems with exponentially or infinitely large strategy sets, where one needs to assume extra structure in order to make the problem tractable. In particular, recent literature considered information on similarity between arms. We consider similarity information in the setting of "contextual bandits", a natural extension of the basic MAB problem where before each round an algorithm is given the "context" -- a hint about the payoffs in this round. Contextual bandits are directly motivated by placing advertisements on webpages, one of the crucial problems in sponsored search. A particularly simple way to represent similarity information in the contextual bandit setting is via a "similarity distance...

  16. Self-similar aftershock rates

    Science.gov (United States)

    Davidsen, Jörn; Baiesi, Marco

    2016-08-01

    In many important systems exhibiting crackling noise—an intermittent avalanchelike relaxation response with power-law and, thus, self-similar distributed event sizes—the "laws" for the rate of activity after large events are not consistent with the overall self-similar behavior expected on theoretical grounds. This is particularly true for the case of seismicity, and a satisfying solution to this paradox has remained outstanding. Here, we propose a generalized description of the aftershock rates which is both self-similar and consistent with all other known self-similar features. Comparing our theoretical predictions with high-resolution earthquake data from Southern California we find excellent agreement, providing particularly clear evidence for a unified description of aftershocks and foreshocks. This may offer an improved framework for time-dependent seismic hazard assessment and earthquake forecasting.

  17. Unmixing of spectrally similar minerals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-01-01

    Full Text Available Presentation slides (CSIR, MERAKA 2009) addressing whether the abundances of oxide/hydroxide/sulfate minerals in complex mixtures can be obtained using hyperspectral data. The talk reviews methods of spectral unmixing, noting problems with the older Linear Spectral Mixture Analysis (LSMA) approach when applied to spectrally similar minerals.

  18. Self-similar aftershock rates

    CERN Document Server

    Davidsen, Jörn

    2016-01-01

    In many important systems exhibiting crackling noise --- intermittent avalanche-like relaxation response with power-law and, thus, self-similar distributed event sizes --- the "laws" for the rate of activity after large events are not consistent with the overall self-similar behavior expected on theoretical grounds. This is in particular true for the case of seismicity and a satisfying solution to this paradox has remained outstanding. Here, we propose a generalized description of the aftershock rates which is both self-similar and consistent with all other known self-similar features. Comparing our theoretical predictions with high resolution earthquake data from Southern California we find excellent agreement, providing in particular clear evidence for a unified description of aftershocks and foreshocks. This may offer an improved way of time-dependent seismic hazard assessment and earthquake forecasting.

  19. Similarity and singularity in adhesive elastohydrodynamic touchdown

    CERN Document Server

    Carlson, Andreas

    2015-01-01

    We consider the touchdown of an elastic sheet as it adheres to a wall, which has a dynamics that is limited by the viscous resistance provided by the squeeze flow of the intervening liquid trapped between the two solid surfaces. The dynamics of the sheet is described mathematically by elastohydrodynamic lubrication theory, coupling the elastic deformation of the sheet, the microscopic van der Waals adhesion and the viscous thin film flow. We use a combination of numerical simulations of the governing partial differential equation and a scaling analysis to describe the self-similar solution of the touchdown of the sheet as it approaches the wall. An analysis of the equation satisfied by the similarity variables in the vicinity of the touchdown event shows that an entire sequence of solutions are allowed. However, a comparison of these shows that only the fundamental similarity solution is observed in the time-dependent numerical simulations, consistent with the fact that it alone is stable. Our analysis genera...

  20. Cross-linguistic patterns in the acquisition of quantifiers.

    Science.gov (United States)

    Katsos, Napoleon; Cummins, Chris; Ezeizabarrena, Maria-José; Gavarró, Anna; Kuvač Kraljević, Jelena; Hrzica, Gordana; Grohmann, Kleanthes K; Skordi, Athina; Jensen de López, Kristine; Sundahl, Lone; van Hout, Angeliek; Hollebrandse, Bart; Overweg, Jessica; Faber, Myrthe; van Koert, Margreet; Smith, Nafsika; Vija, Maigi; Zupping, Sirli; Kunnari, Sari; Morisseau, Tiffany; Rusieshvili, Manana; Yatsushiro, Kazuko; Fengler, Anja; Varlokosta, Spyridoula; Konstantzou, Katerina; Farby, Shira; Guasti, Maria Teresa; Vernice, Mirta; Okabe, Reiko; Isobe, Miwa; Crosthwaite, Peter; Hong, Yoonjee; Balčiūnienė, Ingrida; Ahmad Nizar, Yanti Marina; Grech, Helen; Gatt, Daniela; Cheong, Win Nee; Asbjørnsen, Arve; Torkildsen, Janne von Koss; Haman, Ewa; Miękisz, Aneta; Gagarina, Natalia; Puzanova, Julia; Anđelković, Darinka; Savić, Maja; Jošić, Smiljana; Slančová, Daniela; Kapalková, Svetlana; Barberán, Tania; Özge, Duygu; Hassan, Saima; Chan, Cecilia Yuet Hung; Okubo, Tomoya; van der Lely, Heather; Sauerland, Uli; Noveck, Ira

    2016-08-16

    Learners of most languages are faced with the task of acquiring words to talk about number and quantity. Much is known about the order of acquisition of number words as well as the cognitive and perceptual systems and cultural practices that shape it. Substantially less is known about the acquisition of quantifiers. Here, we consider the extent to which systems and practices that support number word acquisition can be applied to quantifier acquisition and conclude that the two domains are largely distinct in this respect. Consequently, we hypothesize that the acquisition of quantifiers is constrained by a set of factors related to each quantifier's specific meaning. We investigate competence with the expressions for "all," "none," "some," "some…not," and "most" in 31 languages, representing 11 language types, by testing 768 5-y-old children and 536 adults. We found a cross-linguistically similar order of acquisition of quantifiers, explicable in terms of four factors relating to their meaning and use. In addition, exploratory analyses reveal that language- and learner-specific factors, such as negative concord and gender, are significant predictors of variation.

  1. Identifying mechanistic similarities in drug responses

    KAUST Repository

    Zhao, C.

    2012-05-15

    Motivation: In early drug development, it would be beneficial to be able to identify those dynamic patterns of gene response that indicate whether drugs targeting a particular gene are likely to elicit the desired response. One approach would be to quantitate the degree of similarity between the responses that cells show when exposed to drugs, so that consistencies in the regulation of cellular response processes that produce success or failure can be more readily identified. Results: We track drug response using fluorescent proteins as transcription activity reporters. Our basic assumption is that drugs inducing very similar alterations in transcriptional regulation will produce similar temporal trajectories on many of the reporter proteins and hence be identified as having similarities in their mechanisms of action (MOA). The main body of this work is devoted to characterizing similarity in temporal trajectories/signals. To do so, we must first identify the key points that determine mechanistic similarity between two drug responses. Directly comparing points on the two signals is unrealistic, as it cannot handle delays and speed variations on the time axis. Hence, to capture the similarities between reporter responses, we develop an alignment algorithm that is robust to noise and time delays and is able to find all the contiguous parts of signals centered about a core alignment (reflecting a core mechanism in drug response). Applying the proposed algorithm to a range of real drug experiments shows that the results agree well with prior drug MOA knowledge. © The Author 2012. Published by Oxford University Press. All rights reserved.
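    A minimal sketch of the underlying idea, not the authors' alignment algorithm: two reporter trajectories are compared by searching over time lags for the best normalized cross-correlation on the overlapping segment, which tolerates a simple delay between responses. The synthetic trajectories and the lag range are illustrative assumptions.

```python
# Minimal sketch: align two reporter trajectories by the lag with maximal normalized
# cross-correlation, then report the similarity on the overlapping segment.
import numpy as np

def best_lag_similarity(x, y, max_lag=20):
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    best = (-np.inf, 0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        n = min(len(a), len(b))
        if n < 5:
            continue
        r = float(np.corrcoef(a[:n], b[:n])[0, 1])
        if r > best[0]:
            best = (r, lag)
    return best  # (correlation on overlap, lag in samples)

t = np.linspace(0, 10, 200)
resp_a = np.exp(-0.5 * t) * np.sin(2 * t)                        # toy reporter trajectory
resp_b = np.roll(resp_a, 7) + 0.05 * np.random.randn(t.size)     # delayed, noisy copy
print(best_lag_similarity(resp_a, resp_b))
```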

  2. Domain similarity based orthology detection.

    Science.gov (United States)

    Bitard-Feildel, Tristan; Kemena, Carsten; Greenwood, Jenny M; Bornberg-Bauer, Erich

    2015-05-13

    Orthologous protein detection software mostly uses pairwise comparisons of amino-acid sequences to assert whether two proteins are orthologous or not. Accordingly, when the number of sequences for comparison increases, the number of comparisons to compute grows in a quadratic order. A current challenge of bioinformatic research, especially when taking into account the increasing number of sequenced organisms available, is to make this ever-growing number of comparisons computationally feasible in a reasonable amount of time. We propose to speed up the detection of orthologous proteins by using strings of domains to characterize the proteins. We present two new protein similarity measures, a cosine and a maximal weight matching score based on domain content similarity, and new software, named porthoDom. The qualities of the cosine and the maximal weight matching similarity measures are compared against curated datasets. The measures show that domain content similarities are able to correctly group proteins into their families. Accordingly, the cosine similarity measure is used inside porthoDom, the wrapper developed for proteinortho. porthoDom makes use of domain content similarity measures to group proteins together before searching for orthologs. By using domains instead of amino acid sequences, the reduction of the search space decreases the computational complexity of an all-against-all sequence comparison. We demonstrate that representing and comparing proteins as strings of discrete domains, i.e. as a concatenation of their unique identifiers, allows a drastic simplification of search space. porthoDom has the advantage of speeding up orthology detection while maintaining a degree of accuracy similar to proteinortho. The implementation of porthoDom is released using python and C++ languages and is available under the GNU GPL licence 3 at http://www.bornberglab.org/pages/porthoda .
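    A small sketch of the domain-content cosine similarity described above, assuming proteins are already represented as strings (multisets) of domain identifiers; the identifiers below are made-up Pfam-style labels, and this is not the porthoDom implementation.

```python
# Cosine similarity between two proteins represented as multisets of domain identifiers.
from collections import Counter
import math

def domain_cosine(domains_a, domains_b):
    ca, cb = Counter(domains_a), Counter(domains_b)
    shared = set(ca) & set(cb)
    dot = sum(ca[d] * cb[d] for d in shared)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

protein_1 = ["PF00069", "PF07714", "PF00069"]   # hypothetical Pfam-style domain strings
protein_2 = ["PF00069", "PF00017"]
print(f"cosine similarity = {domain_cosine(protein_1, protein_2):.3f}")
```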

  3. Quantifying Heartbeat Dynamics by Magnitude and Sign Correlations

    Science.gov (United States)

    Ivanov, Plamen Ch.; Ashkenazy, Yosef; Kantelhardt, Jan W.; Stanley, H. Eugene

    2003-05-01

    We review a recently developed approach for analyzing time series with long-range correlations by decomposing the signal increment series into magnitude and sign series and analyzing their scaling properties. We show that time series with identical long-range correlations can exhibit different time organization for the magnitude and sign. We apply our approach to series of time intervals between consecutive heartbeats. Using the detrended fluctuation analysis method we find that the magnitude series is long-range correlated, while the sign series is anticorrelated and that both magnitude and sign series may have clinical applications. Further, we study the heartbeat magnitude and sign series during different sleep stages — light sleep, deep sleep, and REM sleep. For the heartbeat sign time series we find short-range anticorrelations, which are strong during deep sleep, weaker during light sleep and even weaker during REM sleep. In contrast, for the heartbeat magnitude time series we find long-range positive correlations, which are strong during REM sleep and weaker during light sleep. Thus, the sign and the magnitude series provide information which is also useful for distinguishing between different sleep stages.
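    The magnitude/sign decomposition itself is simple to reproduce; the sketch below splits the increments of a surrogate inter-beat-interval series into magnitude and sign series and estimates a scaling exponent with a very small DFA implementation (first-order detrending, a handful of scales). The surrogate data and the scale choices are assumptions for illustration.

```python
# Decompose an inter-beat-interval series into magnitude and sign of its increments, then
# estimate a DFA scaling exponent for each series (compact first-order DFA; illustrative only).
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128)):
    """Return the DFA-1 scaling exponent alpha of series x."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    return float(np.polyfit(np.log(scales), np.log(flucts), 1)[0])

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.cumsum(rng.standard_normal(4096)) / 50   # surrogate RR intervals (seconds)
increments = np.diff(rr)
magnitude = np.abs(increments)   # reported to be long-range correlated in healthy subjects
sign = np.sign(increments)       # reported to be anticorrelated in healthy subjects

print("alpha(magnitude) =", round(dfa_exponent(magnitude), 2))
print("alpha(sign)      =", round(dfa_exponent(sign), 2))
```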

  4. Methodology Aspects of Quantifying Stochastic Climate Variability with Dynamic Models

    Science.gov (United States)

    Nuterman, Roman; Jochum, Markus; Solgaard, Anna

    2015-04-01

    The paleoclimatic records show that climate has changed dramatically through time. For the past few million years it has been oscillating between ice ages, with large parts of the continents covered with ice, and warm interglacial periods like the present one. It is commonly assumed that these glacial cycles are related to changes in insolation due to periodic changes in Earth's orbit around the Sun (Milankovitch theory). However, this relationship is far from understood. The insolation changes are so small that enhancing feedbacks must be at play. It might even be that the external perturbation only plays a minor role in comparison to internal stochastic variations or internal oscillations. This claim is based on several shortcomings in the Milankovitch theory: Prior to one million years ago, the duration of the glacial cycles was indeed 41,000 years, in line with the obliquity cycle of Earth's orbit. This duration changed at the so-called Mid-Pleistocene transition to approximately 100,000 years. Moreover, according to Milankovitch's theory the interglacial of 400,000 years ago should not have happened. Thus, while prior to one million years ago the pacing of these glacial cycles may be tied to changes in Earth's orbit, we do not understand the current magnitude and phasing of the glacial cycles. In principle it is possible that the glacial/interglacial cycles are not due to variations in Earth's orbit, but due to stochastic forcing or internal modes of variability. We present a new method and preliminary results for a unified framework using a fully coupled Earth System Model (ESM), in which the leading three ice age hypotheses will be investigated together. Was the waxing and waning of ice sheets due to an internal mode of variability, due to variations in Earth's orbit, or simply due to a low-order auto-regressive process (i.e., noise integrated by a system with memory)? The central idea is to use Generalized Linear Models (GLM), which can handle both continuous and discrete weather/climate variables and stochastic processes. The GLM permits the inclusion of annual cycles and allows the model to be conditioned on large-scale atmospheric or oceanic circulation. Such a modeling framework will be built from large sets of ESM model integrations and paleoclimatic records and will represent synthetic climate time series of any length. It will be further used to force an ice sheet model and to create a synthetic time series of ice volume.

  5. Quantifying and Assuring Information Transfer in Dynamic Heterogeneous Wireless Networks

    Science.gov (United States)

    2012-07-31

    Report excerpt (project publication list): Kumar, "Estimating the state of a Markov chain over a noisy communication channel: A bound and an encoder," to appear in Proceedings of 49th IEEE...; ...Transactions on Information Theory; 4. I-Hong Hou and P. R. Kumar, "Queueing Systems with Hard Delay Constraints: A Framework and Solutions for Real-Time..."; J. Garcia-Haro, Z. J. Haas, "A stochastic model for chain collisions of vehicles equipped with vehicular communications," accepted for publication in

  6. QUANTIFYING LIFE STYLE IMPACT ON LIFESPAN

    Directory of Open Access Journals (Sweden)

    Antonello Lorenzini

    2012-12-01

    Full Text Available A healthy diet, physical activity and avoiding dangerous habits such as smoking are effective ways of increasing health and lifespan. Although a significant portion of the world's population still suffers from malnutrition, especially children, the most common causes of death in the world today are non-communicable diseases. Overweight and obesity significantly increase the relative risk for the most relevant non-communicable diseases: cardiovascular disease, type II diabetes and some cancers. Childhood overweight also seems to increase the likelihood of disease in adulthood through epigenetic mechanisms. This worrisome trend, now termed "globesity", will deeply impact society unless preventive strategies are put into effect. Researchers of the basic biology of aging have clearly established that animals with short lifespans live longer when their diet is calorie restricted. Although similar experiments carried out on rhesus monkeys, a longer-lived species more closely related to humans, yielded mixed results, overall the available scientific data suggest that keeping the body mass index in the "normal" range increases the chances of living a longer, healthier life. This can be successfully achieved both by maintaining a healthy diet and by engaging in physical activity. In this review we try to quantify the relative impact of lifestyle choices on lifespan.

  7. Similarity of atoms in molecules

    Energy Technology Data Exchange (ETDEWEB)

    Cioslowski, J.; Nanayakkara, A. (Florida State Univ., Tallahassee, FL (United States))

    1993-12-01

    Similarity of atoms in molecules is quantitatively assessed with a measure that employs electron densities within respective atomic basins. This atomic similarity measure does not rely on arbitrary assumptions concerning basis functions or 'atomic orbitals', is relatively inexpensive to compute, and has straightforward interpretation. Inspection of similarities between pairs of carbon, hydrogen, and fluorine atoms in the CH₄, CH₃F, CH₂F₂, CHF₃, CF₄, C₂H₂, C₂H₄, and C₂H₆ molecules, calculated at the MP2/6-311G** level of theory, reveals that the atomic similarity is greatly reduced by a change in the number or the character of ligands (i.e. the atoms with nuclei linked through bond paths to the nucleus of the atom in question). On the other hand, atoms with formally identical ligands (i.e. having the same nuclei and numbers of ligands) resemble each other to a large degree, with the similarity indices greater than 0.95 for hydrogens and 0.99 for non-hydrogens. 19 refs., 6 tabs.

  8. Predicting the performance of fingerprint similarity searching.

    Science.gov (United States)

    Vogt, Martin; Bajorath, Jürgen

    2011-01-01

    Fingerprints are bit string representations of molecular structure that typically encode structural fragments, topological features, or pharmacophore patterns. Various fingerprint designs are utilized in virtual screening and their search performance essentially depends on three parameters: the nature of the fingerprint, the active compounds serving as reference molecules, and the composition of the screening database. It is of considerable interest and practical relevance to predict the performance of fingerprint similarity searching. A quantitative assessment of the potential that a fingerprint search might successfully retrieve active compounds, if available in the screening database, would substantially help to select the type of fingerprint most suitable for a given search problem. The method presented herein utilizes concepts from information theory to relate the fingerprint feature distributions of reference compounds to screening libraries. If these feature distributions do not sufficiently differ, active database compounds that are similar to reference molecules cannot be retrieved because they disappear in the "background." By quantifying the difference in feature distribution using the Kullback-Leibler divergence and relating the divergence to compound recovery rates obtained for different benchmark classes, fingerprint search performance can be quantitatively predicted.
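    The core idea can be sketched with a bit-wise Kullback-Leibler divergence between the per-bit frequencies of a reference set and a screening library; the random "fingerprints", bit length, and smoothing constant below are assumptions made for illustration, not a specific published protocol.

```python
# Sketch: compare per-bit frequency distributions of a reference set and a screening
# database with a smoothed, bit-wise Kullback-Leibler divergence. Fingerprints here
# are random bit vectors purely for illustration.
import numpy as np

def bit_frequencies(fps, eps=1e-6):
    p = fps.mean(axis=0)
    return np.clip(p, eps, 1 - eps)

def bitwise_kl(p, q):
    """Sum over bits of the KL divergence between the Bernoulli distributions of each bit."""
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

rng = np.random.default_rng(0)
reference = (rng.random((50, 1024)) < 0.12).astype(float)    # 50 actives, 1024-bit fingerprints
database  = (rng.random((5000, 1024)) < 0.10).astype(float)  # screening library

p, q = bit_frequencies(reference), bit_frequencies(database)
print(f"KL(reference || database) = {bitwise_kl(p, q):.2f}")
# Larger divergence -> reference compounds stand out from the background,
# which the paper relates to higher expected recovery rates.
```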

  9. Nuclear multifragmentation and fission: similarity and differences

    CERN Document Server

    Karnaukhov, V; Avdeyev, S; Rodionov, V; Kirakosyan, V; Simonenko, A; Rukoyatkin, P; Budzanowski, A; Karcz, W; Skwirczynska, I; Czech, B; Chulkov, L; Kuzmin, E; Norbeck, E; Botvina, A

    2006-01-01

    Thermal multifragmentation of hot nuclei is interpreted as the nuclear liquid-fog phase transition deep inside the spinodal region. The experimental data for p (8.1 GeV) + Au collisions are analyzed. It is concluded that the decay process of hot nuclei is characterized by two size parameters: transition-state and freeze-out volumes. The similarity between the dynamics of fragmentation and ordinary fission is discussed. The IMF emission time is related to the mean rupture time at the multi-scission point, which corresponds to the kinetic freeze-out configuration.

  10. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures, including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.

  11. Gibbs Paradox and Similarity Principle

    CERN Document Server

    Lin, Shu-Kun

    2008-01-01

    Adding a -ln N! term to the accepted entropy formula will immediately make the entropy function nonadditive and sometimes negative. As no heat effect and no mechanical work are observed, we have a simple experimental resolution of the Gibbs paradox: the thermodynamic entropy of mixing is always zero and the Gibbs free energy change is also always zero during the formation of any ideal mixture of gases, liquids, solids or solutions, whether their components are different or identical. However, information loss is observed and must be the exclusive driving force of these spontaneous processes. Information is defined and calculated as the amount of the compressed data. Information losses due to dynamic motion and static symmetric structure formation are defined as two kinds of entropy: dynamic entropy and static entropy, respectively. Entropy is defined and calculated as the logarithm of the symmetry number. There are three laws of information theory, where the first and the second laws are analogs of the two thermod...

  12. Quantifying coordination among the rearfoot, midfoot, and forefoot segments during running.

    Science.gov (United States)

    Takabayashi, Tomoya; Edama, Mutsuaki; Yokoyama, Erika; Kanaya, Chiaki; Kubo, Masayoshi

    2017-02-28

    Because previous studies have suggested that there is a relationship between injury risk and inter-segment coordination, quantifying coordination between the segments is essential. Even though the midfoot and forefoot segments play important roles in dynamic tasks, previous studies have mostly focused on coordination between the shank and rearfoot segments. This study aimed to quantify coordination among rearfoot, midfoot, and forefoot segments during running. Eleven healthy young men ran on a treadmill. The coupling angle, representing inter-segment coordination, was calculated using a modified vector coding technique. The coupling angle was categorised into four coordination patterns. During the absorption phase, rearfoot-midfoot coordination in the frontal plane was mostly in-phase (rearfoot and midfoot eversion with similar amplitudes). The present study found that the eversion of the midfoot with respect to the rearfoot was comparable in magnitude to the eversion of the rearfoot with respect to the shank. A previous study has suggested that disruption of the coordination between the internal rotation of the shank and eversion of the rearfoot leads to running injuries such as anterior knee pain. Thus, these data might be used in the future to compare to individuals with foot deformities or running injuries.
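    A compact sketch of the modified vector coding calculation: coupling angles are computed from consecutive changes of a proximal and a distal segment angle, then binned into the four coordination patterns. The surrogate angle curves and the particular four-bin convention below are assumptions made for illustration, not the study's exact processing.

```python
# Modified vector coding coupling angle between two segment angle series
# (e.g. rearfoot as proximal, midfoot as distal), with a common four-bin classification.
import numpy as np

def coupling_angles(proximal, distal):
    gamma = np.degrees(np.arctan2(np.diff(distal), np.diff(proximal)))
    return np.mod(gamma, 360.0)

def classify(gamma):
    bins = {"in_phase": 0, "anti_phase": 0, "proximal_phase": 0, "distal_phase": 0}
    for g in gamma:
        if 22.5 <= g < 67.5 or 202.5 <= g < 247.5:
            bins["in_phase"] += 1
        elif 112.5 <= g < 157.5 or 292.5 <= g < 337.5:
            bins["anti_phase"] += 1
        elif g < 22.5 or g >= 337.5 or 157.5 <= g < 202.5:
            bins["proximal_phase"] += 1
        else:
            bins["distal_phase"] += 1
    return bins

t = np.linspace(0, 1, 101)
rearfoot = 5 * np.sin(2 * np.pi * t)           # surrogate frontal-plane angles (degrees)
midfoot = 4 * np.sin(2 * np.pi * t - 0.2)
print(classify(coupling_angles(rearfoot, midfoot)))
```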

  13. Distance learning for similarity estimation

    NARCIS (Netherlands)

    Yu, J.; Amores, J.; Sebe, N.; Radeva, P.; Tian, Q.

    2008-01-01

    In this paper, we present a general guideline to find a better distance measure for similarity estimation based on statistical analysis of distribution models and distance functions. A new set of distance measures are derived from the harmonic distance, the geometric distance, and their generalized

  14. Distance learning for similarity estimation.

    Science.gov (United States)

    Yu, Jie; Amores, Jaume; Sebe, Nicu; Radeva, Petia; Tian, Qi

    2008-03-01

    In this paper, we present a general guideline to find a better distance measure for similarity estimation based on statistical analysis of distribution models and distance functions. A new set of distance measures are derived from the harmonic distance, the geometric distance, and their generalized variants according to the Maximum Likelihood theory. These measures can provide a more accurate feature model than the classical Euclidean and Manhattan distances. We also find that the feature elements are often from heterogeneous sources that may have different influence on similarity estimation. Therefore, the assumption of single isotropic distribution model is often inappropriate. To alleviate this problem, we use a boosted distance measure framework that finds multiple distance measures which fit the distribution of selected feature elements best for accurate similarity estimation. The new distance measures for similarity estimation are tested on two applications: stereo matching and motion tracking in video sequences. The performance of boosted distance measure is further evaluated on several benchmark data sets from the UCI repository and two image retrieval applications. In all the experiments, robust results are obtained based on the proposed methods.

  15. Revisiting Inter-Genre Similarity

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Gouyon, Fabien

    2013-01-01

    We revisit the idea of ``inter-genre similarity'' (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does...

  16. Comparison of hydrological similarity measures

    Science.gov (United States)

    Rianna, Maura; Ridolfi, Elena; Manciola, Piergiorgio; Napolitano, Francesco; Russo, Fabio

    2016-04-01

    The traditional at-site approach to the statistical characterization and simulation of spatio-temporal precipitation fields has a major recognized drawback: because of the limited length of records, the estimation of rare events is affected by the uncertainty of at-site sample statistical inference. In order to overcome the lack of at-site observations, the regional frequency approach uses the idea of substituting space for time to estimate design floods. Conventional regional frequency analysis estimates quantile values at a specific site from a multi-site analysis. The main idea is that homogeneous sites, once pooled together, have similar probability distribution curves of extremes, except for a scaling factor. The method for pooling groups of sites can be based on geographical or climatological considerations. In this work the region of influence (ROI) pooling method is compared with an entropy-based one. The ROI is a flexible pooling-group approach which defines for each site its own "region" formed by a unique set of similar stations. The similarity is found through the Euclidean distance metric in the attribute space. Here an alternative approach based on entropy is introduced to cluster homogeneous sites. The core idea is that homogeneous sites share a redundant (i.e. similar) amount of information. Homogeneous sites are pooled through a hierarchical selection based on the mutual information index (i.e. a measure of redundancy). The method is tested on precipitation data in the Central Italy area.
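    The entropy-based grouping can be illustrated with a histogram estimate of mutual information between annual-maximum series at pairs of sites; sites sharing more (redundant) information are candidates for the same pooling group. The toy Gumbel-distributed data and the bin count are assumptions for illustration.

```python
# Histogram-based mutual information between annual-maximum precipitation series,
# used here as a stand-in for the entropy-based pooling criterion described above.
import numpy as np

def mutual_information(x, y, bins=8):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
site_a = rng.gumbel(30, 8, size=60)          # 60 years of annual maxima (mm), toy values
site_b = site_a + rng.normal(0, 3, size=60)  # nearby, hydrologically similar site
site_c = rng.gumbel(55, 15, size=60)         # distant, dissimilar site

print("MI(a, b) =", round(mutual_information(site_a, site_b), 3))
print("MI(a, c) =", round(mutual_information(site_a, site_c), 3))
# Sites with high mutual (redundant) information are candidates for the same pooling group.
```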

  17. HOW DISSIMILARLY SIMILAR ARE BIOSIMILARS?

    Directory of Open Access Journals (Sweden)

    Ramshankar Vijayalakshmi

    2012-05-01

    Full Text Available Biopharmaceutical follow-on agents are referred to as "biosimilars" or "follow-on protein products" by the European Medicines Agency (EMA) and the American regulatory agency (the Food and Drug Administration), respectively. Biosimilars are extremely similar to the reference molecule but, however close the resemblance, not identical. A regulatory framework is therefore in place to assess applications for marketing authorisation of biosimilars: a biosimilar can be registered when it is shown to be similar to the reference biopharmaceutical in terms of safety, quality, and efficacy, and it is important to document clinical trial data demonstrating similar safety and efficacy. Whereas the development time for a generic medicine is around 3 years, a biosimilar takes about 6-9 years; generic medicines need to demonstrate only bioequivalence, whereas biosimilars require phase I and phase III clinical trials. This review discusses biosimilars already being used successfully in the field of oncology, their similarities and differences, and the guidelines to be followed before a clinically informed decision is taken. More importantly, it discusses the regulatory guidelines operational in India, together with a workflow for making a biosimilar and the relevant dos and don'ts. For a large, populous country like India, where the ageing population is increasing as treatments improve in all sectors including oncology, more new, cheaper and effective biosimilars are needed in the market. It is therefore important to understand the regulatory guidelines and the steps needed to bring more biosimilars to the existing population, and more information is essential for practising clinicians to translate these effectively into clinical practice.

  18. Quantifying convergence in the sciences

    Directory of Open Access Journals (Sweden)

    Sara Lumbreras

    2016-02-01

    Full Text Available Traditional epistemological models classify knowledge into separate disciplines with different objects of study and specific techniques, with some frameworks even proposing hierarchies (such as Comte's). According to thinkers such as John Holland or Teilhard de Chardin, the advancement of science involves the convergence of disciplines. This proposed convergence can be studied in a number of ways, such as how works impact research outside a specific area (citation networks) or how authors collaborate with other researchers in different fields (collaboration networks). While these studies are delivering significant new insights, they cannot easily show the convergence of different topics within a body of knowledge. This paper attempts to address this question in a quantitative manner, searching for evidence that supports the idea of convergence in the content of the sciences themselves (that is, whether the sciences are dealing with increasingly the same topics). We use Latent Dirichlet Allocation (LDA), a technique that is able to analyze texts and estimate the relative contributions of the topics that were used to generate them. We apply this tool to the corpus of the Santa Fe Institute (SFI) working papers, which spans research on Complexity Science from 1989 to 2015. We then analyze the relatedness of the different research areas, the rise and demise of these sub-disciplines over time and, more broadly, the convergence of the research body as a whole. Combining the topic structure obtained from the collected publication history of the SFI community with techniques to infer hierarchy and clustering, we reconstruct a picture of a dynamic community which experiences trends, periodically recurring topics, and shifts in the closeness of scholarship over time. We find that there is support for convergence, and that the application of quantitative methods such as LDA to the study of knowledge can provide valuable insights that can help
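    As a hedged sketch of the topic-modelling step (using scikit-learn rather than whatever implementation the authors used), the snippet below fits a two-topic LDA model to a toy corpus and prints per-document topic mixtures; the documents, topic count, and preprocessing are purely illustrative stand-ins for the SFI working-paper corpus.

```python
# Minimal LDA sketch: fit topics to a small corpus and inspect per-document topic mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "network dynamics of adaptive agents",
    "scaling laws in urban growth and innovation",
    "evolutionary dynamics on complex networks",
    "information theory of computation and inference",
    "urban scaling and infrastructure networks",
    "inference and learning in agent based models",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)          # per-document topic mixtures
print(doc_topics.round(2))
# Tracking these mixtures year by year (and clustering the topic vectors) is the kind of
# evidence the authors use to argue for or against convergence of research topics.
```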

  19. Quantifying synergistic information remains an unsolved problem

    CERN Document Server

    Griffith, Virgil

    2011-01-01

    We review the prior literature of information theoretical measures of synergy or synergistic information. We draw the hereto unnamed conceptual distinction between synergistic and holistic information and analyze six prior measures based on whether they aim to quantify synergy or holism. We apply all measures against a suite of examples to demonstrate no existing measure correctly quantifies synergy under all circumstances.

  20. Self-similarity Driven Demosaicking

    Directory of Open Access Journals (Sweden)

    Antoni Buades

    2011-06-01

    Full Text Available Digital cameras record only one color component per pixel, red, green or blue. Demosaicking is the process by which one can infer a whole color matrix from such a matrix of values, thus interpolating the two missing color values per pixel. In this article we propose a demosaicking method based on the property of non-local self-similarity of images.

  1. Sparse Similarity-Based Fisherfaces

    DEFF Research Database (Denmark)

    Fagertun, Jens; Gomez, David Delgado; Hansen, Mads Fogtmann;

    2011-01-01

    In this work, the effect of introducing Sparse Principal Component Analysis within the Similarity-based Fisherfaces algorithm is examined. The technique aims at mimicking the human ability to discriminate faces by projecting the faces in a highly discriminative and easily interpretable way. Pixel...... obtain the same recognition results as the technique in a dense version using only a fraction of the input data. Furthermore, the presented results suggest that using SPCA in the technique offers robustness to occlusions....

  2. Roget's Thesaurus and Semantic Similarity

    CERN Document Server

    Jarmasz, Mario

    2012-01-01

    We have implemented a system that measures semantic similarity using a computerized 1987 Roget's Thesaurus, and evaluated it by performing a few typical tests. We compare the results of these tests with those produced by WordNet-based similarity measures. One of the benchmarks is Miller and Charles' list of 30 noun pairs to which human judges had assigned similarity measures. We correlate these measures with those computed by several NLP systems. The 30 pairs can be traced back to Rubenstein and Goodenough's 65 pairs, which we have also studied. Our Roget's-based system gets correlations of .878 for the smaller and .818 for the larger list of noun pairs; this is quite close to the .885 that Resnik obtained when he employed humans to replicate the Miller and Charles experiment. We further evaluate our measure by using Roget's and WordNet to answer 80 TOEFL, 50 ESL and 300 Reader's Digest questions: the correct synonym must be selected amongst a group of four words. Our system gets 78.75%, 82.00% and 74.33% of ...

  3. Active browsing using similarity pyramids

    Science.gov (United States)

    Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.

    1998-12-01

    In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.

  4. Self-Similar Collisionless Shocks

    CERN Document Server

    Katz, B; Waxman, E; Katz, Boaz; Keshet, Uri; Waxman, Eli

    2006-01-01

    Observations of gamma-ray burst afterglows suggest that the correlation length of magnetic field fluctuations downstream of relativistic non-magnetized collisionless shocks grows with distance from the shock to scales much larger than the plasma skin depth. We argue that this indicates that the plasma properties are described by a self-similar solution, and derive constraints on the scaling properties of the solution. For example, we find that the scaling of the characteristic magnetic field amplitude with distance D from the shock is B ∝ D^{s_B} with -1 < s_B ≤ 0, and that the magnetic fluctuation power on a scale x scales as ∝ x^{2s_B} (for x >> D). We show that the plasma may be approximated as a combination of two self-similar components: a kinetic component of energetic particles and an MHD-like component representing "thermal" particles. We argue that the latter may be considered as infinitely conducting, in which case s_B = 0 and the scalings are completely determined (e.g. dn/dE ∝ E^{-2} and B ∝ D^0). Similar claims apply to non-relativistic shocks such a...

  5. A Photometric Method for Quantifying Asymmetries in Disk Galaxies

    CERN Document Server

    Kornreich, D A; Lovelace, R V E; Kornreich, David A.; Haynes, Martha P.; Lovelace, Richard V.E.

    1998-01-01

    A photometric method for quantifying deviations from axisymmetry in optical images of disk galaxies is applied to a sample of 32 face-on and nearly face-on spirals. The method involves comparing the relative fluxes contained within trapezoidal sectors arranged symmetrically about the galaxy center of light, excluding the bulge and/or barred regions. Such a method has several advantages over others, especially when quantifying asymmetry in flocculent galaxies. Specifically, the averaging of large regions improves the signal-to-noise in the measurements; the method is not strongly affected by the presence of spiral arms; and it identifies the kinds of asymmetry that are likely to be dynamically important. Application of this "method of sectors" to R-band images of 32 disk galaxies indicates that about 30% of spirals show deviations from axisymmetry at the 5-sigma level.
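    A simplified numerical sketch of a sector-based asymmetry measure: flux is summed in angular sectors about the galaxy centre (excluding an inner region) and compared with the diametrically opposite sectors. Wedge-shaped sectors, the chosen radii, and the toy image are stand-ins for the trapezoidal sectors and R-band data used in the paper.

```python
# Sector-based asymmetry: compare flux in angular sectors with the opposite sectors,
# excluding an inner (bulge/bar) region. Illustrative simplification of the "method of sectors".
import numpy as np

def sector_asymmetry(image, center, n_sectors=8, r_in=5, r_out=50):
    y, x = np.indices(image.shape)
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    fluxes = []
    for k in range(n_sectors):
        lo, hi = 2 * np.pi * k / n_sectors, 2 * np.pi * (k + 1) / n_sectors
        mask = (r >= r_in) & (r < r_out) & (theta >= lo) & (theta < hi)
        fluxes.append(image[mask].sum())
    fluxes = np.array(fluxes)
    opposite = np.roll(fluxes, n_sectors // 2)
    return float(np.mean(np.abs(fluxes - opposite) / (fluxes + opposite + 1e-12)))

rng = np.random.default_rng(2)
img = rng.poisson(5.0, size=(101, 101)).astype(float)    # toy sky background image
print(f"asymmetry = {sector_asymmetry(img, center=(50, 50)):.3f}")
```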

  6. Self-Similar Fluid Dynamic Limits for the Broadwell System

    Science.gov (United States)

    1992-10-01

    (OCR-degraded report excerpt) ...functions of bounded variation such that f^ε → f pointwise on the reals. The function f is a local Maxwellian, that is f₃ = f₁f₂ for a.e. ξ, and satisfies the balance... bounded variation. The functions f^ε admit constant values f± outside [-1, 1]; therefore, first f^ε → f pointwise on (-∞, ∞), and second f(ξ) = f± for... (2.4). Thus (2.1) holds in the sense of distributions and, because f₁, f₂ are of bounded variation, it also holds in the sense of measures. Passing

  7. Quantifying human vitamin kinetics using AMS

    Energy Technology Data Exchange (ETDEWEB)

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population are available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  8. Quantifying antimicrobial resistance at veal calf farms.

    Directory of Open Access Journals (Sweden)

    Angela B Bosman

    Full Text Available This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From each individual sample and one pooled faecal sample per farm, 90 selected Escherichia coli isolates were tested for their resistance against 25 mg/L amoxicillin, 25 mg/L tetracycline, 0.5 mg/L cefotaxime, 0.125 mg/L ciprofloxacin and 8/152 mg/L trimethoprim/sulfamethoxazole (tmp/s) by replica plating. From each faecal sample another 10 selected E. coli isolates were tested for their resistance by broth microdilution as a reference. Logistic regression analysis was performed to compare the odds of testing an isolate resistant between both test methods (replica plating vs. broth microdilution) and to evaluate the effect of pooling faecal samples. Bootstrap analysis was used to investigate the precision of the estimated prevalence of resistance to each antimicrobial obtained by several simulated sampling strategies. Replica plating showed similar odds of E. coli isolates testing resistant compared to broth microdilution, except for ciprofloxacin (OR 0.29, p ≤ 0.05). Pooled samples showed in general lower odds of an isolate being resistant compared to individual samples, although these differences were not significant. Bootstrap analysis showed that within each antimicrobial the various compositions of a pooled sample provided consistent estimates for the mean proportion of resistant isolates. Sampling strategies should be based on the variation in resistance among isolates within faecal samples and between faecal samples, which may vary by antimicrobial. In our study, the optimal sampling strategy from the perspective of precision of the estimated levels of resistance and practicality consists of a pooled faecal sample from 20 individual animals, of which
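    The bootstrap step can be sketched as follows: calves are resampled with replacement and the farm-level proportion of resistant isolates recomputed, giving a precision interval for a candidate sampling strategy. The sample sizes, true prevalence, and between-calf variability below are illustrative assumptions, not the study's values.

```python
# Bootstrap precision of an estimated resistance prevalence under a simulated sampling strategy.
import numpy as np

rng = np.random.default_rng(3)
true_prevalence = 0.35
n_calves, isolates_per_calf, n_boot = 20, 10, 2000

# One simulated farm: per-calf resistance proportions vary around the farm-level mean.
calf_p = np.clip(rng.normal(true_prevalence, 0.10, n_calves), 0, 1)
isolates = rng.binomial(1, calf_p[:, None], size=(n_calves, isolates_per_calf))

estimates = []
for _ in range(n_boot):
    chosen = rng.integers(0, n_calves, n_calves)       # resample calves with replacement
    estimates.append(isolates[chosen].mean())
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"estimated prevalence {np.mean(estimates):.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```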

  9. A New Trajectory Similarity Measure for GPS Data

    KAUST Repository

    Ismail, Anas

    2016-08-08

    We present a new algorithm for measuring the similarity between trajectories, and in particular between GPS traces. We call this new similarity measure the Merge Distance (MD). Our approach is robust against subsampling and supersampling. We perform experiments to compare this new similarity measure with the two main approaches that have been used so far: Dynamic Time Warping (DTW) and the Euclidean distance. © 2015 ACM.
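    For context, the two baselines mentioned above are easy to state in code: a lock-step Euclidean distance and a textbook O(nm) dynamic time warping. The Merge Distance itself is not reproduced here, and the toy traces are illustrative.

```python
# Baseline trajectory comparisons: lock-step Euclidean distance and a plain DTW.
import numpy as np

def euclidean_lockstep(a, b):
    n = min(len(a), len(b))
    return float(np.sum(np.linalg.norm(a[:n] - b[:n], axis=1)))

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

t = np.linspace(0, 1, 60)
trace_a = np.c_[t, np.sin(2 * np.pi * t)]               # toy GPS-like trace
trace_b = np.c_[t[::2], np.sin(2 * np.pi * t[::2])]     # same path, subsampled
print("Euclidean:", round(euclidean_lockstep(trace_a, trace_b), 3))
print("DTW      :", round(dtw(trace_a, trace_b), 3))
```

    The subsampled trace illustrates why lock-step comparison is fragile and why warping-based (or merge-based) measures are preferred for GPS data.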

  10. Quantifying hypoxia in human cancers using static PET imaging

    Science.gov (United States)

    Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G.; Milosevic, Michael; Hedley, David W.; Jaffray, David A.

    2016-11-01

    Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties—well-perfused without substantial necrosis or partitioning—for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in ‘inter-corporal’ transport properties—blood volume and clearance rate—as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K 3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.

  11. Quantifying hypoxia in human cancers using static PET imaging.

    Science.gov (United States)

    Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G; Milosevic, Michael; Hedley, David W; Jaffray, David A

    2016-11-21

    Compared to FDG, the signal of (18)F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties-well-perfused without substantial necrosis or partitioning-for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in 'inter-corporal' transport properties-blood volume and clearance rate-as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K 3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.

  12. Mechanisms for similarity based cooperation

    Science.gov (United States)

    Traulsen, A.

    2008-06-01

    Cooperation based on similarity has been discussed since Richard Dawkins introduced the term “green beard” effect. In these models, individuals cooperate based on an arbitrary signal (or tag) such as the famous green beard. Here, two different models for such tag based cooperation are analysed. As neutral drift is important in both models, a finite population framework is applied. The first model, which we term “cooperative tags”, considers a situation in which groups of cooperators are formed by some joint signal. Defectors adopting the signal and exploiting the group can lead to a breakdown of cooperation. In this case, conditions are derived under which the average abundance of the more cooperative strategy exceeds 50%. The second model considers a situation in which individuals start defecting towards others that are not similar to them. This situation is termed “defective tags”. It is shown that in this case, individuals using tags to cooperate exclusively with their own kind dominate over unconditional cooperators.

  13. Identifying Cover Songs Using Information-Theoretic Measures of Similarity

    OpenAIRE

    Foster, Peter; Dixon, Simon; Klapuri, Anssi

    2014-01-01

    This paper investigates methods for quantifying similarity between audio signals, specifically for the task of cover song detection. We consider an information-theoretic approach, where we compute pairwise measures of predictability between time series. We compare discrete-valued approaches operating on quantised audio features to continuous-valued approaches. In the discrete case, we propose a method for computing the normalised compression distance, where we account for correlation betw...

  14. Spousal similarity in coping and depressive symptoms over 10 years.

    Science.gov (United States)

    Holahan, Charles J; Moos, Rudolf H; Moerkbak, Marie L; Cronkite, Ruth C; Holahan, Carole K; Kenney, Brent A

    2007-12-01

    Following a baseline sample of 184 married couples over 10 years, the present study develops a broadened conceptualization of linkages in spouses' functioning by examining similarity in coping as well as in depressive symptoms. Consistent with hypotheses, results demonstrated (a) similarity in depressive symptoms within couples across 10 years, (b) similarity in coping within couples over 10 years, and (c) the role of coping similarity in strengthening depressive similarity between spouses. Spousal similarity in coping was evident for a composite measure of percent approach coping as well as for component measures of approach and avoidance coping. The role of coping similarity in strengthening depressive symptom similarity was observed for percent approach coping and for avoidance coping. These findings support social contextual models of psychological adjustment that emphasize the importance of dynamic interdependencies between individuals in close relationships.

  15. Semantically enabled image similarity search

    Science.gov (United States)

    Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason

    2015-05-01

    Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimension vector or "embedding" space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally-smoothed but information-limited content, while overhead imagery provides an information-rich but temporally-limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.

  16. SIMILARITIES AND DIFFERENCES BETWEEN COMPANIES

    Directory of Open Access Journals (Sweden)

    NAGY CRISTINA MIHAELA

    2015-05-01

    Full Text Available Similarities between the accounting of companies and the accounting of territorial administrative units are the following: organizing double-entry accounting; the accounting method, both in terms of fundamental theoretical principles and specific practical tools. The differences between the accounting of companies and that of territorial administrative units refer to: the accounting of territorial administrative units includes, besides general (financial) accounting, also budgetary accounting, and the accounts system of the budgetary accounting is completely different from that of companies; financial statements of territorial administrative units whose leaders are not main authorizing officers are submitted to the hierarchically superior body (not to the MPF); the accounts of territorial administrative units are opened at the treasury and financial institutions, accounts at commercial banks being prohibited; equity accounts in territorial administrative units are structured into groups of funds; long-term debts have a specific structure in territorial administrative units (internal local public debt and external local public debt).

  17. Performance Indexes: Similarities and Differences

    Directory of Open Access Journals (Sweden)

    André Machado Caldeira

    2013-06-01

    Full Text Available The investor of today is more rigorous in monitoring a financial assets portfolio. He no longer thinks only in terms of the expected return (one dimension), but in terms of risk-return (two dimensions). This new perception is more complex, since the risk measure can vary according to one's perception: some use the standard deviation, while others disagree with this measure and propose alternatives. In addition to this difficulty, there is the problem of how to consider these two dimensions. The objective of this essay is to study the main performance indexes through an empirical study in order to verify the differences and similarities for some selected assets. One performance index proposed in Caldeira (2005) shall be included in this analysis.

  18. Features Based Text Similarity Detection

    CERN Document Server

    Kent, Chow Kok

    2010-01-01

    As the Internet helps us cross cultural borders by providing different information, plagiarism issues are bound to arise. As a result, plagiarism detection becomes more demanding in overcoming this issue. Different plagiarism detection tools have been developed based on various detection techniques. Nowadays, the fingerprint matching technique plays an important role in those detection tools. However, in handling large articles, the fingerprint matching technique has some weaknesses, especially regarding space and time consumption. In this paper, we propose a new approach to detect plagiarism which integrates the fingerprint matching technique with four key features to assist in the detection process. These proposed features are capable of choosing the main point or key sentence in the articles to be compared. The selected sentences then undergo the fingerprint matching process in order to detect the similarity between the sentences. Hence, time and space usage for the comparison process is r...

  19. Similarity of Symbol Frequency Distributions with Heavy Tails

    Science.gov (United States)

    Gerlach, Martin; Font-Clos, Francesc; Altmann, Eduardo G.

    2016-04-01

    Quantifying the similarity between symbolic sequences is a traditional problem in information theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA over music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf's law). The large number of low-frequency symbols in these distributions poses major difficulties to the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here, we show analytically how the systematic (bias) and statistical (fluctuations) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α = 1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small α's, including α = 1, the errors decay more slowly than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α* = 1 + 1/γ ≤ 2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α spectrum of measures. We find that frequent words change more slowly than less frequent words and that α = 2 provides the most robust measure to quantify language change.
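
    The Jensen-Shannon divergence named above is easy to compute with a plug-in estimator; a minimal Python sketch is given below for two hypothetical token lists. Exactly such plug-in estimates carry the finite-size bias that the abstract analyses for heavy-tailed vocabularies.

        # Plug-in estimate of the Jensen-Shannon divergence between two symbol
        # (word) frequency distributions. Token lists are hypothetical stand-ins.
        from collections import Counter
        import numpy as np

        def jensen_shannon(tokens_a, tokens_b):
            counts_a, counts_b = Counter(tokens_a), Counter(tokens_b)
            vocab = sorted(set(counts_a) | set(counts_b))
            p = np.array([counts_a[w] for w in vocab], dtype=float)
            q = np.array([counts_b[w] for w in vocab], dtype=float)
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)

            def entropy(x):
                x = x[x > 0]
                return -np.sum(x * np.log2(x))

            return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

        print(jensen_shannon("the cat sat on the mat".split(),
                             "the dog sat on the log".split()))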

  20. Quantifying drug-protein binding in vivo.

    Energy Technology Data Exchange (ETDEWEB)

    Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

    2004-02-17

    Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ("micro-") dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

  1. QUANTIFIED COST-BALANCED ROUTING SCHEME FOR OVERLAY MULTICAST

    Institute of Scientific and Technical Information of China (English)

    Lu Jun; Ruan Qiuqi

    2006-01-01

    This paper focuses on the quantitative analysis of the routing metrics tradeoff problem, and presents a Quantified Cost-Balanced overlay multicast routing scheme (QCost-Balanced) for the tradeoff between overlay path delay and access bandwidth at Multicast Server Nodes (MSN) for real-time applications over the Internet. Besides assigning a dynamic priority to MSNs by weighting the size of their service clients for better efficiency, QCost-Balanced trades off these two metrics using a unified tradeoff metric based on quantitative analysis. Simulation experiments demonstrate that the scheme achieves a better tradeoff gain in both metrics, and effective performance in quantitative metric control.

  2. Quantifying Time Dependent Moisture Storage and Transport Properties

    DEFF Research Database (Denmark)

    Peuhkuri, Ruut H

    2003-01-01

    This paper describes an experimental and numerical approach to quantify the time dependence of sorption mechanisms for some hygroscopic building - mostly insulation - materials. Some investigations of retarded sorption and non-Fickian phenomena, mostly on wood, have given inspiration to the present analysis on these other materials. The true moisture capacity of a material cannot be described by the slope of the sorption isotherms alone, when the material is exposed to dynamic changes in the moisture conditions. Still, the assumption of an immediate equilibrium is well accepted in the simulation

  3. Stability of Self-Similar Spherical Accretion

    CERN Document Server

    Gaite, J

    2006-01-01

    Spherical accretion flows are simple enough for analytical study, by solution of the corresponding fluid dynamic equations. The solutions of stationary spherical flow are due to Bondi. The questions of the choice of a physical solution and of stability have been widely discussed. The answer to these questions is very dependent on the problem of boundary conditions, which vary according to whether the accretor is a compact object or a black hole. We introduce a particular, simple form of stationary spherical flow, namely, self-similar Bondi flow, as a case with physical interest in which analytic solutions for perturbations can be found. With suitable boundary conditions that impose no matter-flux perturbation, we show that acoustic modes are stable in time and have no spatial instability at r=0. Furthermore, their evolution eventually becomes ergodic-like and shows no trace of instability or of acquiring any remarkable pattern.

  4. A stochastic approach for quantifying immigrant integration: the Spanish test case

    CERN Document Server

    Agliari, Elena; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia

    2014-01-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of "time" and the quantifier the role of "space", it becomes possible to analyze the behavior of the quantifiers by means of continuous time random walks. Two classes of results are obtained. First, we show that social integration quantifiers evolve following a pure diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best and worst case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. F...

  5. Quantifying the step following level of an operator in proceduralized scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yoc Han; Jung, Wondea [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    In nuclear power plants, operators in the main control rooms cope with abnormal or emergency situations using the relevant procedures. However, several features challenge step-by-step following of the procedures: ambiguous descriptions in procedures, dynamic plant situations, operational tendencies of operators, and so on. To identify and manage these features, it is useful to quantify how compliantly an operator follows the steps of the procedures. There has been little research on measuring the step-following level of an operator. Kim et al. presented a measure, VPP (variability of procedure progression), to estimate the variability of the ways a given procedure is followed. However, VPP has some characteristics that need to be considered. First, the VPP score is related to the number of steps in each procedure progression. Second, the VPP score is also affected by the number of progressions. In addition, the VPP score reflects the variability of procedure progressions, not the compliance of a progression with the given procedure. If operators do not follow the sequence of steps in a procedure but all show similar procedure progressions, the VPP score of the progressions will be low. Therefore, it is necessary to develop another measure that captures the step-following level more directly. In this light, we propose a new measure, PCL (procedure compliance level), to estimate how similar a procedure progression is to the standard progressions. This paper introduces the algorithm for this measure and shows its applicability by presenting results from an application to a digitalized control room.
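
    The paper's PCL algorithm is not reproduced in this record. As a rough illustration of scoring a progression against standard progressions, the sketch below uses a normalized edit-distance similarity between step sequences; the step labels and standard progressions are hypothetical, and this is a stand-in for, not a reconstruction of, PCL.

        def edit_distance(a, b):
            # Dynamic-programming Levenshtein distance between two step sequences.
            dp = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                prev, dp[0] = dp[0], i
                for j, y in enumerate(b, 1):
                    prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
            return dp[-1]

        def compliance_score(progression, standards):
            # Best normalized similarity (1.0 = identical) against any standard progression.
            return max(1 - edit_distance(progression, s) / max(len(progression), len(s))
                       for s in standards)

        standards = [["S1", "S2", "S3", "S4"], ["S1", "S3", "S2", "S4"]]
        print(compliance_score(["S1", "S2", "S4"], standards))   # 0.75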

  6. Study of Similarity Law for Bird Impact on Structure

    Institute of Scientific and Technical Information of China (English)

    Li Yulong; Zhang Yongkang; Xue Pu

    2008-01-01

    With dimensional analysis and similarity theory, the model similarity law of aircraft structures under bird impact load is investigated. Numerical calculations by means of the nonlinear dynamic software ANSYS/LS-DYNA are conducted on finite element models constructed with different scaling factors. The influence of strain rate on the model similarity law is found to be dependent on the strain rate sensitivity of materials and the scale factors. Specifically, materials that are not sensitive to strain rate obey the model similarity law in the bird impact process. The conclusions obtained are expected to provide a theoretical basis for the experimental work of bird impact on aircraft structures.

  7. Exploiting Data Similarity to Reduce Memory Footprints

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, S; de Supinski, B R; Schulz, M; Franklin, D; Sherwood, T; Chong, F T

    2011-01-28

    Memory size has long limited large-scale applications on high-performance computing (HPC) systems. Since compute nodes frequently do not have swap space, physical memory often limits problem sizes. Increasing core counts per chip and power density constraints, which limit the number of DIMMs per node, have exacerbated this problem. Further, DRAM constitutes a significant portion of overall HPC system cost. Therefore, instead of adding more DRAM to the nodes, mechanisms to manage memory usage more efficiently - preferably transparently - could increase effective DRAM capacity and thus the benefit of multicore nodes for HPC systems. MPI application processes often exhibit significant data similarity. These data regions occupy multiple physical locations across the individual rank processes within a multicore node and thus offer a potential savings in memory capacity. These regions, primarily residing in heap, are dynamic, which makes them difficult to manage statically. Our novel memory allocation library, SBLLmalloc, automatically identifies identical memory blocks and merges them into a single copy. SBLLmalloc does not require application or OS changes since we implement it as a user-level library. Overall, we demonstrate that SBLLmalloc reduces the memory footprint of a range of MPI applications by 32.03% on average and up to 60.87%. Further, SBLLmalloc supports problem sizes for IRS over 21.36% larger than using standard memory management techniques, thus significantly increasing effective system size. Similarly, SBLLmalloc requires 43.75% fewer nodes than standard memory management techniques to solve an AMG problem.

  8. Gait Recognition Using Image Self-Similarity

    Directory of Open Access Journals (Sweden)

    Cutler Ross G

    2004-01-01

    Full Text Available Gait is one of the few biometrics that can be measured at a distance, and is hence useful for passive surveillance as well as biometric applications. Gait recognition research is still in its infancy, however, and we have yet to solve the fundamental issue of finding gait features which at once have sufficient discrimination power and can be extracted robustly and accurately from low-resolution video. This paper describes a novel gait recognition technique based on the image self-similarity of a walking person. We contend that the similarity plot encodes a projection of gait dynamics. It is also correspondence-free, robust to segmentation noise, and works well with low-resolution video. The method is tested on multiple data sets of varying sizes and degrees of difficulty. Performance is best for fronto-parallel viewpoints, whereby a recognition rate of 98% is achieved for a data set of 6 people, and 70% for a data set of 54 people.
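
    A minimal sketch of the central construct, the image self-similarity plot: pairwise mean absolute differences between aligned silhouette frames. The frame array below is a random stand-in for a real walking sequence.

        import numpy as np

        def self_similarity_plot(frames):
            # Mean absolute difference between every pair of frames; a periodic gait
            # produces a characteristic lattice of minima in this matrix.
            n = frames.shape[0]
            flat = frames.reshape(n, -1).astype(float)
            return np.abs(flat[:, None, :] - flat[None, :, :]).mean(axis=2)

        frames = np.random.rand(60, 64, 44)        # stand-in for aligned silhouette frames
        print(self_similarity_plot(frames).shape)  # (60, 60)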

  9. On the similarity of variable viscosity flows

    Science.gov (United States)

    Voivenel, L.; Danaila, L.; Varea, E.; Renou, B.; Cazalens, M.

    2016-08-01

    Turbulent mixing is ubiquitous in both nature and industrial applications. Most applications involve different fluids, and therefore variable physical properties (density and/or viscosity). The focus here is on variable viscosity flows and mixing, involving density-matched fluids. The issue is whether or not these flows may be self-similar, or self-preserving. The importance of this question lies in the predictability of these flows; self-similar dynamical systems are more easily tractable from an analytical viewpoint. More specifically, self-similar analysis is applied to the scale-by-scale energy transport equations, which represent the transport of energy at each scale and each point of the flow. Scale-by-scale energy budget equations are developed for inhomogeneous and anisotropic flows, in which the viscosity varies as a result of heterogeneous mixture or temperature variations. Additional terms are highlighted, accounting for the viscosity gradients, or fluctuations. These terms are present at both small and large scales, thus rectifying the common belief that viscosity is a small-scale quantity. Scale-by-scale energy budget equations are then adapted for the particular case of a round jet evolving in a more viscous host fluid. It is further shown that the condition of self-preservation is not necessarily satisfied in variable-viscosity jets. Indeed, the jet momentum conservation, as well as the constancy of the Reynolds number in the central region of the jet, cannot be satisfied simultaneously. This points to the necessity of considering less stringent conditions (with respect to classical, single-fluid jets) when analytically tackling these flows and reinforces the idea that viscosity variations must be accounted for when modelling these flows.

  10. Quantifying renewable groundwater stress with GRACE

    Science.gov (United States)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Reager, John T.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-01-01

    Abstract Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human‐dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE‐based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions. PMID:26900185

  11. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Full Text Available Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407), compared with the likely overestimated 319 years obtained with the commonly used exponential model.
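
    For the exponential model only, the fire-cycle estimate under right-censoring has a closed form: total observed exposure time divided by the number of uncensored fire intervals. The sketch below uses hypothetical data; the Weibull and Cox fits used in the study require a survival-analysis library and are not reproduced here.

        import numpy as np

        # Hypothetical dendroecological intervals (years) and censoring indicators:
        # True = a fire ended the interval, False = the record ends without a fire.
        intervals = np.array([180.0, 95.0, 310.0, 240.0, 150.0, 400.0])
        observed = np.array([True, True, False, True, False, False])

        # Exponential-model MLE under right-censoring: mean fire return interval
        # (the fire cycle) = total exposure time / number of observed fires.
        fire_cycle = intervals.sum() / observed.sum()
        print(f"Estimated fire cycle: {fire_cycle:.0f} years")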

  12. Quantifying robustness of biochemical network models

    Directory of Open Access Journals (Sweden)

    Iglesias Pablo A

    2002-12-01

    Full Text Available Abstract Background Robustness of mathematical models of biochemical networks is important for validation purposes and can be used as a means of selecting between different competing models. Tools for quantifying parametric robustness are needed. Results Two techniques for describing quantitatively the robustness of an oscillatory model were presented and contrasted. Single-parameter bifurcation analysis was used to evaluate the stability robustness of the limit cycle oscillation as well as the frequency and amplitude of oscillations. A tool from control engineering – the structural singular value (SSV) – was used to quantify robust stability of the limit cycle. Using SSV analysis, we find very poor robustness when the model's parameters are allowed to vary. Conclusion The results show the usefulness of incorporating SSV analysis to single parameter sensitivity analysis to quantify robustness.

  13. Scoring dynamics across professional team sports: tempo, balance and predictability

    CERN Document Server

    Merritt, Sears

    2013-01-01

    Despite growing interest in quantifying and modeling the scoring dynamics within professional sports games, relatively little is known about what patterns or principles, if any, cut across different sports. Using a comprehensive data set of scoring events in nearly a dozen consecutive seasons of college and professional (American) football, professional hockey, and professional basketball, we identify several common patterns in scoring dynamics. Across these sports, scoring tempo (when scoring events occur) closely follows a common Poisson process, with a sport-specific rate. Similarly, scoring balance (how often a team wins an event) follows a common Bernoulli process, with a parameter that effectively varies with the size of the lead. Combining these processes within a generative model of gameplay, we find they both reproduce the observed dynamics in all four sports and accurately predict game outcomes. These results demonstrate common dynamical patterns underlying within-game scoring dynamics across prof...
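
    A minimal simulation of the generative picture described above: a Poisson process for scoring tempo and a Bernoulli draw for which team wins each event. The rate, game length, point value and (here constant) balance parameter are hypothetical, and the lead-dependence of the balance parameter reported in the paper is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_game(rate_per_min=2.0, minutes=48, p_team_a=0.5, points=2):
            n_events = rng.poisson(rate_per_min * minutes)   # scoring tempo: Poisson
            wins_a = rng.binomial(n_events, p_team_a)        # scoring balance: Bernoulli per event
            return points * wins_a, points * (n_events - wins_a)

        leads = np.array([a - b for a, b in (simulate_game() for _ in range(10000))])
        print("mean final lead:", leads.mean(), "| share of games won by A:", (leads > 0).mean())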

  14. Quantifying brain microstructure with diffusion MRI

    DEFF Research Database (Denmark)

    Novikov, Dmitry S.; Jespersen, Sune N.; Kiselev, Valerij G.

    2016-01-01

    We review, systematize and discuss models of diffusion in neuronal tissue, by putting them into an overarching physical context of coarse-graining over an increasing diffusion length scale. From this perspective, we view research on quantifying brain microstructure as occurring along the three ma...

  15. Quantifying the Reuse of Learning Objects

    Science.gov (United States)

    Elliott, Kristine; Sweeney, Kevin

    2008-01-01

    This paper reports the findings of one case study from a larger project, which aims to quantify the claimed efficiencies of reusing learning objects to develop e-learning resources. The case study describes how an online inquiry project "Diabetes: A waste of energy" was developed by searching for, evaluating, modifying and then…

  16. QS Spiral: Visualizing Periodic Quantified Self Data

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Cuttone, Andrea; Jørgensen, Sune Lehmann

    2013-01-01

    In this paper we propose an interactive visualization technique QS Spiral that aims to capture the periodic properties of quantified self data and let the user explore those recurring patterns. The approach is based on time-series data visualized as a spiral structure. The interactivity includes ...

  17. Periodontal inflamed surface area : quantifying inflammatory burden

    NARCIS (Netherlands)

    Nesse, Willem; Abbas, Frank; van der Ploeg, Ids; Spijkervet, Frederik Karst Lucien; Dijkstra, Pieter Ubele; Vissink, Arjan

    2008-01-01

    Background: Currently, a large variety of classifications is used for periodontitis as a risk factor for other diseases. None of these classifications quantifies the amount of inflamed periodontal tissue, while this information is needed to assess the inflammatory burden posed by periodontitis. Aim:

  19. The Emergence of the Quantified Child

    Science.gov (United States)

    Smith, Rebecca

    2017-01-01

    Using document analysis, this paper examines the historical emergence of the quantified child, revealing how the collection and use of data has become normalized through legitimizing discourses. First, following in the traditions of Foucault's genealogy and studies examining the sociology of numbers, this paper traces the evolution of data…

  20. A simplified score to quantify comorbidity in COPD.

    Directory of Open Access Journals (Sweden)

    Nirupama Putcha

    Full Text Available Comorbidities are common in COPD, but quantifying their burden is difficult. Currently there is a COPD-specific comorbidity index to predict mortality and another to predict general quality of life. We sought to develop and validate a COPD-specific comorbidity score that reflects comorbidity burden on patient-centered outcomes. Using the COPDGene study (GOLD II-IV COPD), we developed comorbidity scores to describe patient-centered outcomes employing three techniques: 1) simple count, 2) weighted score, and 3) weighted score based upon a statistical selection procedure. We tested associations, area under the curve (AUC) and calibration statistics to validate scores internally with outcomes of respiratory disease-specific quality of life (St. George's Respiratory Questionnaire, SGRQ), six minute walk distance (6MWD), modified Medical Research Council (mMRC) dyspnea score and exacerbation risk, ultimately choosing one score for external validation in SPIROMICS. Associations between comorbidities and all outcomes were comparable across the three scores. All scores added predictive ability to models including age, gender, race, current smoking status, pack-years smoked and FEV1 (p<0.001 for all comparisons). Area under the curve (AUC) was similar between all three scores across outcomes: SGRQ (range 0.7624-0.7676), MMRC (0.7590-0.7644), 6MWD (0.7531-0.7560) and exacerbation risk (0.6831-0.6919). Because of similar performance, the comorbidity count was used for external validation. In the SPIROMICS cohort, the comorbidity count performed well to predict SGRQ (AUC 0.7891), MMRC (AUC 0.7611), 6MWD (AUC 0.7086), and exacerbation risk (AUC 0.7341). Quantifying comorbidity provides a more thorough understanding of the risk for patient-centered outcomes in COPD. A comorbidity count performs well to quantify comorbidity in a diverse population with COPD.

  1. Multidimensional Scaling Visualization Using Parametric Similarity Indices

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado

    2015-03-01

    Full Text Available In this paper, we apply multidimensional scaling (MDS) and parametric similarity indices (PSI) in the analysis of complex systems (CS). Each CS is viewed as a dynamical system, exhibiting an output time-series to be interpreted as a manifestation of its behavior. We start by adopting a sliding window to sample the original data into several consecutive time periods. Second, we define a given PSI for tracking pieces of data. We then compare the windows for different values of the parameter, and we generate the corresponding MDS maps of ‘points’. Third, we use Procrustes analysis to linearly transform the MDS charts for maximum superposition and to build a global MDS map of “shapes”. This final plot captures the time evolution of the phenomena and is sensitive to the PSI adopted. The generalized correlation, the Minkowski distance and four entropy-based indices are tested. The proposed approach is applied to the Dow Jones Industrial Average stock market index and the Europe Brent Spot Price FOB time-series.
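
    A sketch of the pipeline under simplifying assumptions: sliding windows over a stand-in time series, a Minkowski distance as the parametric similarity index (the generalized-correlation and entropy-based indices are omitted), MDS on the precomputed dissimilarities, and Procrustes alignment of the resulting maps.

        import numpy as np
        from scipy.spatial import procrustes
        from scipy.spatial.distance import pdist, squareform
        from sklearn.manifold import MDS

        series = np.cumsum(np.random.default_rng(1).normal(size=2000))   # stand-in time series
        win, step = 100, 50
        windows = np.array([series[i:i + win] for i in range(0, len(series) - win, step)])

        def mds_map(windows, p):
            # Minkowski distance of order p between windows, then a 2-D MDS map of 'points'.
            d = squareform(pdist(windows, metric="minkowski", p=p))
            return MDS(n_components=2, dissimilarity="precomputed",
                       random_state=0).fit_transform(d)

        maps = [mds_map(windows, p) for p in (1, 2, 4)]
        # Procrustes-align every map to the first before superposing the 'shapes'.
        aligned = [procrustes(maps[0], m)[1] for m in maps]
        print([a.shape for a in aligned])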

  2. Hierarchical Self-Similarity in Group and Crowd Behaviors

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    In this Chapter, a nonlinear, complex, Hamiltonian description of socio-cognio-physical dynamics at the macroscopic, classical, inter-personal crowd level and the microscopic, quantum, intra-personal agent level, is presented, uniquely, in the form of the open Liouville equation. At the microscopic level, this can be considered to be a nonlinear extension of the linear correlation and factor dynamics. This implies the arrow of time in both microscopic and macroscopic processes and shows the existence of the formal crowd-agent space-time self-similarity. This in itself shows the existence of a unique control law, which acts on different scales of agent functioning. This self-similar socio-cognio-physical control law enables us to use the crowd dynamics simulator (previously developed at Defence Science & Technology Organisation, Australia), for recursive simulation of individual agents' representation spaces on a cluster of computers.

  3. Modeling of Hysteresis in Piezoelectric Actuator Based on Segment Similarity

    Directory of Open Access Journals (Sweden)

    Rui Xiong

    2015-11-01

    Full Text Available To successfully exploit the full potential of piezoelectric actuators in micro/nano positioning systems, it is essential to model their hysteresis behavior accurately. A novel hysteresis model for piezoelectric actuators is proposed in this paper. Firstly, segment-similarity, which describes the similarity relationship between hysteresis curve segments with different turning points, is proposed. Time-scale similarity, which describes the similarity relationship between hysteresis curves with different rates, is used to solve the problem of dynamic effect. The proposed model is formulated using these similarities. Finally, experiments are performed on a micro/nanometer movement platform system. The effectiveness of the proposed model is verified by comparison with the Preisach model. The experimental results show that the proposed model is able to precisely predict the hysteresis trajectories of piezoelectric actuators and performs better than the Preisach model.

  4. Quantifying Stock Return Distributions in Financial Markets.

    Science.gov (United States)

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales.
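
    As an illustration of this kind of analysis, the sketch below builds logarithmic returns from a synthetic price series at several time scales and applies a Hill estimator to the tails. The Hill estimator is a common choice for the tail index, not necessarily the method used in the study, and all numbers are stand-ins.

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic second-by-second price series with heavy-tailed increments.
        prices = 100 * np.exp(np.cumsum(rng.standard_t(df=3, size=500_000) * 1e-4))

        def hill_tail_index(returns, tail_fraction=0.01):
            # Hill estimate of alpha in P(|r| > x) ~ x**(-alpha), from the k largest |returns|.
            tail = np.sort(np.abs(returns))[::-1]
            k = max(int(tail_fraction * len(tail)), 10)
            return k / np.sum(np.log(tail[:k] / tail[k]))

        for dt in (300, 900, 3600):                      # sampling interval in 'seconds'
            r = np.diff(np.log(prices[::dt]))
            print(dt, round(hill_tail_index(r), 2))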

  5. Power Curve Measurements, quantify the production increase

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    The purpose of this report is to quantify the production increase on a given turbine with respect to another given turbine. The used methodology is the “side by side” comparison method, provided by the client. This method involves the use of two neighboring turbines and is based on the assumption that the wind field in front of the tested turbines is statistically the same (i.e. has on average the same mean wind speed conditions in front of both turbines). The method is only used for the evaluation of a relative change in the AEP, not the AEP itself.

  6. Quantifying the robustness of metro networks

    CERN Document Server

    Wang, Xiangrong; Derrible, Sybil; Ahmad, Sk Nasir; Kooij, Robert E

    2015-01-01

    Metros (heavy rail transit systems) are integral parts of urban transportation systems. Failures in their operations can have serious impacts on urban mobility, and measuring their robustness is therefore critical. Moreover, as physical networks, metros can be viewed as network topological entities, and as such they possess measurable network properties. In this paper, by using network science and graph theoretical concepts, we investigate both theoretical and experimental robustness metrics (i.e., the robustness indicator, the effective graph conductance, and the critical thresholds) and their performance in quantifying the robustness of metro networks under random failures or targeted attacks. We find that the theoretical metrics quantify different aspects of the robustness of metro networks. In particular, the robustness indicator captures the number of alternative paths and the effective graph conductance focuses on the length of each path. Moreover, the high positive correlation between the theoretical m...

  7. Quantifying Shannon's Work Function for Cryptanalytic Attacks

    CERN Document Server

    van Son, R J J H

    2010-01-01

    Attacks on cryptographic systems are limited by the available computational resources. A theoretical understanding of these resource limitations is needed to evaluate the security of cryptographic primitives and procedures. This study uses an Attacker versus Environment game formalism based on computability logic to quantify Shannon's work function and evaluate resource use in cryptanalysis. A simple cost function is defined which allows one to quantify a wide range of theoretical and real computational resources. With this approach the use of custom hardware, e.g., FPGA boards, in cryptanalysis can be analyzed. Applied to real cryptanalytic problems, it raises, for instance, the expectation that the computer time needed to break some simple 90-bit-strong cryptographic primitives might theoretically be less than two years.

  8. Quantifying reliability uncertainty : a proof of concept.

    Energy Technology Data Exchange (ETDEWEB)

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.

  9. Quantifying energy condition violations in traversable wormholes

    Indian Academy of Sciences (India)

    Sayan Kar; Naresh Dadhich; Matt Visser

    2004-10-01

    The 'theoretical' existence of traversable Lorentzian wormholes in the classical, macroscopic world is plagued by the violation of the well-known energy conditions of general relativity. In this brief article we show: (i) how the extent of violation can be quantified using certain volume integrals and (ii) whether this 'amount of violation' can be minimised for some specific cut-and-paste geometric constructions. Examples and possibilities are also outlined.

  10. Quantifying sediment production in steepland environments

    OpenAIRE

    2009-01-01

    Five published contributions to our understanding of the impacts of erosion processes on sustainable land management are reviewed and discussed. These focus on rapid shallow landsliding and gully erosion which are among the most prevalent forms of environmental degradation in New Zealand's hill country. The over-arching goal of this research has been to quantify the on-site (e.g., soil erosion, land productivity) impacts of these processes. Rather than measure erosion rates over long periods ...

  11. Quantifying effects of land use change on soil organic matter at the landscape scale

    NARCIS (Netherlands)

    Sonneveld, M.P.W.; Apeldoorn, van D.F.; Pepers, K.H.; Hanegraaf, M.C.

    2012-01-01

    Geophysical Research Abstracts, Vol. 14, EGU2012-8153, EGU General Assembly 2012.

  12. Local observability of state variables and parameters in nonlinear modeling quantified by delay reconstruction

    CERN Document Server

    Parlitz, Ulrich; Luther, Stefan

    2015-01-01

    Features of the Jacobian matrix of the delay coordinates map are exploited for quantifying the robustness and reliability of state and parameter estimations for a given dynamical model using an observed time series. Relevant concepts of this approach are introduced and illustrated for discrete and continuous time systems employing a filtered Hénon map and a Rössler system.
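
    A sketch of the underlying computation for a concrete case: the Jacobian of the map from the Hénon state (x, y) to a short delay vector of x-observations, estimated by finite differences, with its singular values used as a local observability measure. The delay length, test state and use of finite differences are illustrative choices, not taken from the paper.

        import numpy as np

        A, B = 1.4, 0.3                      # standard Henon parameters

        def delay_vector(state, n_delays=4):
            # Map an initial state (x, y) to a delay vector of successive x-observations.
            x, y = state
            obs = []
            for _ in range(n_delays):
                obs.append(x)
                x, y = 1 - A * x * x + y, B * x
            return np.array(obs)

        def observability(state, eps=1e-6, n_delays=4):
            # Finite-difference Jacobian of the state -> delay-vector map.
            J = np.empty((n_delays, 2))
            for j in range(2):
                dp = np.array(state, dtype=float)
                dm = dp.copy()
                dp[j] += eps
                dm[j] -= eps
                J[:, j] = (delay_vector(dp, n_delays) - delay_vector(dm, n_delays)) / (2 * eps)
            s = np.linalg.svd(J, compute_uv=False)
            return s.min() / s.max()         # near 1: well observable; near 0: poorly observable

        print(round(observability((0.1, 0.2)), 3))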

  14. Quantifying the degradation of organic matter in marine sediments: A review and synthesis

    NARCIS (Netherlands)

    Arndt, S.; Jørgensen, B.B.; LaRowe, D.E.; Middelburg, J.J.; Pancost, R.D.; Regnier, P.

    2013-01-01

    Quantifying the rates of biogeochemical processes in marine sediments is essential for understanding global element cycles and climate change. Because organic matter degradation is the engine behind benthic dynamics, deciphering the impact that various forces have on this process is central to deter

  15. An Algorithm for Community Identification and Dynamical Addition Based on Web Pages Contents Similarity and Link Relation

    Institute of Scientific and Technical Information of China (English)

    云颖; 袁方; 刘宇; 王传豹

    2011-01-01

    An algorithm for community identification based on Web page content similarity and the link relations between Web pages is proposed. The algorithm not only considers the hyperlinks between Web pages but also focuses on their content similarity. This overcomes the limitation of traditional community discovery algorithms that ignore page content, so that the discovered communities are more relevant in content. In addition, new members are added to the original community dynamically: newly appearing Web pages that link to pages of the original community and are related to its theme are added to the community. Experiments show that the method can be applied effectively to community discovery on the Web and that the resulting communities are more relevant in content.
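
    A minimal sketch of the dynamic-addition step, under the assumption that a new page joins the community when it links to a community page and its content is sufficiently similar to the community's documents. TF-IDF cosine similarity stands in for the paper's content measure; the pages, identifiers and threshold are hypothetical.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        community_docs = ["solar power grid integration", "wind and solar energy storage"]
        community_ids = {"c0", "c1"}                     # pages already in the community
        candidate_pages = {                              # page id -> (text, pages it links to)
            "p1": ("cheap recipes for a quick dinner", {"p9"}),
            "p2": ("grid scale storage for solar energy", {"c1"}),
        }

        vec = TfidfVectorizer().fit(community_docs + [t for t, _ in candidate_pages.values()])
        centroid = np.asarray(vec.transform(community_docs).mean(axis=0))

        for pid, (text, links) in candidate_pages.items():
            sim = cosine_similarity(centroid, vec.transform([text]).toarray())[0, 0]
            if links & community_ids and sim > 0.2:      # link criterion AND content criterion
                print(f"add {pid} to the community (cosine similarity {sim:.2f})")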

  16. Quantifying wetland–aquifer interactions in a humid subtropical climate region: An integrated approach

    Science.gov (United States)

    Mendoza-Sanchez, Itza; Phanikumar, Mantha S.; Niu, Jie; Masoner, Jason R.; Cozzarelli, Isabelle M.; McGuire, Jennifer T.

    2013-01-01

    Wetlands are widely recognized as sentinels of global climate change. Long-term monitoring data combined with process-based modeling has the potential to shed light on key processes and how they change over time. This paper reports the development and application of a simple water balance model based on long-term climate, soil, vegetation and hydrological dynamics to quantify groundwater–surface water (GW–SW) interactions at the Norman landfill research site in Oklahoma, USA. Our integrated approach involved model evaluation by means of the following independent measurements: (a) groundwater inflow calculation using stable isotopes of oxygen and hydrogen (16O, 18O, 1H, 2H); (b) seepage flux measurements in the wetland hyporheic sediment; and (c) pan evaporation measurements on land and in the wetland. The integrated approach was useful for identifying the dominant hydrological processes at the site, including recharge and subsurface flows. Simulated recharge compared well with estimates obtained using isotope methods from previous studies and allowed us to identify specific annual signatures of this important process during the period of study (1997–2007). Similarly, observations of groundwater inflow and outflow rates to and from the wetland using seepage meters and isotope methods were found to be in good agreement with simulation results. Results indicate that subsurface flow components in the system are seasonal and readily respond to rainfall events. The wetland water balance is dominated by local groundwater inputs and regional groundwater flow contributes little to the overall water balance.

  17. Quantifying Contributions of Climate Feedbacks to Global Warming Pattern Formation

    Science.gov (United States)

    Song, X.; Zhang, G. J.; Cai, M.

    2013-12-01

    The "climate feedback-response analysis method" (CFRAM) was applied to the NCAR CCSM3.0 simulation to analyze the strength and spatial distribution of climate feedbacks and to quantify their contributions to global and regional surface temperature changes in response to a doubling of CO2. Instead of analyzing the climate sensitivity, the CFRAM directly attributes the temperature change to individual radiative and non-radiative feedbacks. The radiative feedback decomposition is based on hourly model output rather than monthly mean data that are commonly used in climate feedback analysis. This gives a more accurate quantification of the cloud and albedo feedbacks. The process-based decomposition of non-radiative feedback enables us to understand the roles of GCM physical and dynamic processes in climate change. The pattern correlation, the centered root-mean-square (RMS) difference and the ratio of variations (represented by standard deviations) between the partial surface temperature change due to each feedback process and the total surface temperature change in the CCSM3.0 simulation are examined to quantify the roles of each feedback process in the global warming pattern formation. The contributions of climate feedbacks to the regional warming are also discussed.

  18. Quantifying Urban Fragmentation under Economic Transition in Shanghai City, China

    Directory of Open Access Journals (Sweden)

    Heyuan You

    2015-12-01

    Full Text Available Urban fragmentation affects sustainability through multiple impacts on economic, social, and environmental cost. Characterizing the dynamics of urban fragmentation in relation to economic transition should provide implications for sustainability. However, rather few efforts have been made in this issue. Using the case of Shanghai (China, this paper quantifies urban fragmentation in relation to economic transition. In particular, urban fragmentation is quantified by a time-series of remotely sensed images and a set of landscape metrics; and economic transition is described by a set of indicators from three aspects (globalization, decentralization, and marketization. Results show that urban fragmentation presents an increasing linear trend. Multivariate regression identifies positive linear correlation between urban fragmentation and economic transition. More specifically, the relative influence is different for the three components of economic transition. The relative influence of decentralization is stronger than that of globalization and marketization. The joint influences of decentralization and globalization are the strongest for urban fragmentation. The demonstrated methodology can be applicable to other places after making suitable adjustment of the economic transition indicators and fragmentation metrics.

  19. Quantifying the Use of Gestures in Autism Spectrum Disorder

    DEFF Research Database (Denmark)

    Lambrechts, Anna; Yarrow, K.; Maras, Katie

    Background: Autism Spectrum Disorder (ASD) is characterized by difficulties in communication and social interaction. In the absence of a biomarker, a diagnosis of ASD is reached in settings such as the ADOS (Lord et al., 2000) by observing disturbances of social interaction such as abnormalities in the use of gestures or the flow of conversation. These observations rely exclusively on clinical judgement and are thus prone to error and inconsistency across contexts and clinicians. While studies in children show that co-speech gestures are fewer (e.g. Wetherby et al., 1998) ... that abnormal temporal processes contribute to impaired social skills in ASD (Allman, 2011). Objectives: - Quantify the production of gestures in ASD in naturally occurring language - Characterise the temporal dynamics of speech and gesture coordination in ASD using two acoustic indices; pitch and volume...

  20. Novel Material to Quantify Sharpness and Traction of Vitreous Cutters

    Directory of Open Access Journals (Sweden)

    Nazafarin Honarpisheh

    2011-01-01

    Full Text Available There is no available method for evaluating the cutting quality of vitreotomes. The available methods of assessment allow only indirect judgment of their quality and are difficult to apply in clinical practice. We propose using a collagen film with a maximum thickness of 1-2 microns to test the sharpness of instruments under conditions resembling clinical ones. The collagen film is fixed by a special device, placed in physiological saline, and then cut under the operating microscope. This shows whether the cutting edges are sharp enough and whether the collagen film is cut smoothly. We also use an Electroforce 3100 machine and Dynamic Mechanical Analysis software to quantify the vitreoretinal force applied to the retina during vitrectomy.

  1. Quantifying social vs. antisocial behavior in email networks

    CERN Document Server

    Gomes, L H; Almeida, V A F; Bettencourt, L M A; Castro, F D O; Almeida, Jussara M.; Almeida, Virgilio A. F.; Bettencourt, Luis M. A.; Castro, Fernando D. O.; Gomes, Luiz H.

    2006-01-01

    Email graphs have been used to illustrate general properties of social networks of communication and collaboration. However, increasingly, the majority of email traffic reflects opportunistic, rather than symbiotic social relations. Here we use e-mail data drawn from a large university to construct directed graphs of email exchange that quantify the differences between social and antisocial behaviors in networks of communication. We show that while structural characteristics typical of other social networks are shared to a large extent by the legitimate component they are not characteristic of antisocial traffic. Interestingly, opportunistic patterns of behavior do create nontrivial graphs with certain general characteristics that we identify. To complement the graph analysis, which suffers from incomplete knowledge of users external to the domain, we study temporal patterns of communication to show that the dynamical properties of email traffic can, in principle, distinguish different types of social relatio...

  2. Front tracking for characterizing and quantifying reactive mixing

    Science.gov (United States)

    Kelley, Douglas; Nevins, Thomas

    2016-11-01

    Mixing in industrial chemical reactors involves complicated interactions between advection, reaction, and diffusion that are difficult to simulate or measure in detail. However, in large-Damköhler-number systems which show sharp fronts between reacted and unreacted regions, reactor dynamics might be more simply and usefully characterized in terms of the reaction fronts themselves. In fact, prior work has already shown that the reaction rate and material diffusivity can be calculated directly if front speed and front thickness are known. We have developed methods to optically track reaction fronts, measuring their speed and thickness throughout space and time. We will present such measurements in both simulation and experiment, consider their statistics, and discuss future efforts to characterize and quantify mixing in chemical reactors.
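
    The claim that reaction rate and diffusivity follow from front speed and thickness can be made concrete for pulled, FKPP-type fronts, where v ≈ 2√(Dk) and the thickness scales as δ ≈ √(D/k); inverting gives D ≈ vδ/2 and k ≈ v/(2δ). The sketch below assumes those scalings (up to order-one prefactors) and uses hypothetical measured values.

        # Hypothetical measured front properties.
        v = 1.2e-3       # front speed, m/s
        delta = 2.0e-3   # front thickness, m

        # Inversion of the assumed FKPP-type scalings v ~ 2*sqrt(D*k), delta ~ sqrt(D/k).
        D = v * delta / 2          # effective diffusivity, m^2/s
        k = v / (2 * delta)        # effective reaction rate, 1/s
        print(f"D ~ {D:.2e} m^2/s, k ~ {k:.2e} 1/s")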

  3. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
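
    A minimal numerical sketch of the idea (not the published algorithm): build a symmetric distance matrix over each structure's secondary-structure element centroids, take its leading eigenvectors, and correlate them between structures. The centroid coordinates below are hypothetical, and both structures must have the same number of elements.

        import numpy as np

        def principal_components(coords, n=3):
            # Symmetric interaction (distance) matrix over secondary-structure centroids,
            # and its n leading eigenvectors.
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            w, v = np.linalg.eigh(d)
            return v[:, np.argsort(-np.abs(w))[:n]]

        def pcc_similarity(coords_a, coords_b, n=3):
            pa = principal_components(coords_a, n)
            pb = principal_components(coords_b, n)
            # Correlate matched components; eigenvector sign is arbitrary, hence abs().
            return float(np.mean([abs(np.corrcoef(pa[:, i], pb[:, i])[0, 1]) for i in range(n)]))

        rng = np.random.default_rng(3)
        a = rng.normal(size=(8, 3))                      # 8 hypothetical element centroids
        b = a + rng.normal(scale=0.1, size=a.shape)      # slightly perturbed 'homolog'
        print(round(pcc_similarity(a, b), 3))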

  4. Density-based similarity measures for content based search

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory

    2009-01-01

    We consider the query by multiple example problem where the goal is to identify database samples whose content is similar to a collection of query samples. To assess the similarity, we use a relative content density which quantifies the relative concentration of the query distribution to the database distribution. If the database distribution is a mixture of the query distribution and a background distribution, then it can be shown that database samples whose relative content density is greater than a particular threshold ρ are more likely to have been generated by the query distribution than the background distribution. We describe an algorithm for predicting samples with relative content density greater than ρ that is computationally efficient and possesses strong performance guarantees. We also show empirical results for applications in computer network monitoring and image segmentation.
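
    A minimal sketch of the scoring step, substituting ordinary Gaussian kernel density estimates for the authors' estimator (our assumption; the paper's algorithm and performance guarantees do not rely on KDE):

```python
import numpy as np
from scipy.stats import gaussian_kde

def relative_content_density(query, database):
    """Score database samples by an estimated query/database density ratio.

    `query` and `database` are (n_samples, n_features) arrays; samples whose
    score exceeds the chosen threshold rho are flagged as query-like."""
    f_q = gaussian_kde(query.T)        # gaussian_kde expects (n_features, n_samples)
    f_d = gaussian_kde(database.T)
    x = database.T
    return f_q(x) / np.maximum(f_d(x), 1e-12)
```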

  5. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    Science.gov (United States)

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows…

  6. Issues in the study of floating universal numeric quantifiers

    NARCIS (Netherlands)

    R. Cirillo

    2010-01-01

    In the Germanic and Romance languages (among others) a universal quantifier can combine with a numeral and form a floating quantifier. I refer to these quantifiers as universal numeric quantifiers or simply ∀NumQ. The following examples from Dutch and Romanian demonstrate this phenomenon: The aim of

  7. Quantifying the effects of land use and climate on Holocene vegetation in Europe

    Science.gov (United States)

    Marquer, Laurent; Gaillard, Marie-José; Sugita, Shinya; Poska, Anneli; Trondman, Anna-Kari; Mazier, Florence; Nielsen, Anne Birgitte; Fyfe, Ralph M.; Jönsson, Anna Maria; Smith, Benjamin; Kaplan, Jed O.; Alenius, Teija; Birks, H. John B.; Bjune, Anne E.; Christiansen, Jörg; Dodson, John; Edwards, Kevin J.; Giesecke, Thomas; Herzschuh, Ulrike; Kangur, Mihkel; Koff, Tiiu; Latałowa, Małgorzata; Lechterbeck, Jutta; Olofsson, Jörgen; Seppä, Heikki

    2017-09-01

    Early agriculture can be detected in palaeovegetation records, but quantification of the relative importance of climate and land use in influencing regional vegetation composition since the onset of agriculture is a topic that is rarely addressed. We present a novel approach that combines pollen-based REVEALS estimates of plant cover with climate, anthropogenic land-cover and dynamic vegetation modelling results. This is used to quantify the relative impacts of land use and climate on Holocene vegetation at a sub-continental scale, i.e. northern and western Europe north of the Alps. We use redundancy analysis and variation partitioning to quantify the percentage of variation in vegetation composition explained by the climate and land-use variables, and Monte Carlo permutation tests to assess the statistical significance of each variable. We further use a similarity index to combine pollen-based REVEALS estimates with climate-driven dynamic vegetation modelling results. The overall results indicate that climate is the major driver of vegetation when the Holocene is considered as a whole and at the sub-continental scale, although land use is important regionally. Four critical phases of land-use effects on vegetation are identified. The first phase (from 7000 to 6500 BP) corresponds to the early impacts on vegetation of farming and Neolithic forest clearance and to the dominance of climate as a driver of vegetation change. During the second phase (from 4500 to 4000 BP), land use becomes a major control of vegetation. Climate is still the principal driver, although its influence decreases gradually. The third phase (from 2000 to 1500 BP) is characterised by the continued role of climate on vegetation as a consequence of late-Holocene climate shifts and specific climate events that influence vegetation as well as land use. The last phase (from 500 to 350 BP) shows an acceleration of vegetation changes, in particular during the last century, caused by new farming

  8. Currents connecting communities: nearshore community similarity and ocean circulation.

    Science.gov (United States)

    Watson, J R; Hays, C G; Raimondi, P T; Mitarai, S; Dong, C; McWilliams, J C; Blanchette, C A; Caselle, J E; Siegel, D A

    2011-06-01

    Understanding the mechanisms that create spatial heterogeneity in species distributions is fundamental to ecology. For nearshore marine systems, most species have a pelagic larval stage where dispersal is strongly influenced by patterns of ocean circulation. Concomitantly, nearshore habitats and the local environment are also influenced by ocean circulation. Because of the shared dependence on the seascape, distinguishing the relative importance of the local environment from regional patterns of dispersal for community structure remains a challenge. Here, we quantify the "oceanographic distance" and "oceanographic asymmetry" between nearshore sites using ocean circulation modeling results. These novel metrics quantify spatial separation based on realistic patterns of ocean circulation, and we explore their explanatory power for intertidal and subtidal community similarity in the Southern California Bight. We find that these metrics show significant correspondence with patterns of community similarity and that their combined explanatory power exceeds that of the thermal structure of the domain. Our approach identifies the unique influence of ocean circulation on community structure and provides evidence for oceanographically mediated dispersal limitation in nearshore marine communities.

  9. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference) with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets which demonstrates some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show in east Africa that the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets datasets be used for specific investigations and also that where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1 Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Dhie ?diou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834 Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K. Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. DOI:10

  10. Quantifier spreading: children misled by ostensive cues

    Directory of Open Access Journals (Sweden)

    Katalin É. Kiss

    2017-04-01

    Full Text Available This paper calls attention to a methodological problem of acquisition experiments. It shows that the economy of the stimulus employed in child language experiments may lend an increased ostensive effect to the message communicated to the child. Thus, when the visual stimulus in a sentence-picture matching task is a minimal model abstracting away from the details of the situation, children often regard all the elements of the stimulus as ostensive clues to be represented in the corresponding sentence. The use of such minimal stimuli is mistaken when the experiment aims to test whether or not a certain element of the stimulus is relevant for the linguistic representation or interpretation. The paper illustrates this point by an experiment involving quantifier spreading. It is claimed that children find a universally quantified sentence like 'Every girl is riding a bicycle' to be a false description of a picture showing three girls riding bicycles and a solo bicycle because they are misled to believe that all the elements in the visual stimulus are relevant, hence all of them are to be represented by the corresponding linguistic description. When the iconic drawings were replaced by photos taken in a natural environment rich in accidental details, the occurrence of quantifier spreading was radically reduced. It is shown that an extra object in the visual stimulus can lead to the rejection of the sentence also in the case of sentences involving no quantification, which gives further support to the claim that the source of the problem is not (or not only) the grammatical or cognitive difficulty of quantification but the unintended ostensive effect of the extra object. This article is part of the special collection: Acquisition of Quantification

  11. Reconstructing propagation networks with temporal similarity metrics

    CERN Document Server

    Liao, Hao

    2014-01-01

    Node similarity is a significant property driving the growth of real networks. In this paper, based on the observed spreading results we apply the node similarity metrics to reconstruct propagation networks. We find that the reconstruction accuracy of the similarity metrics is strongly influenced by the infection rate of the spreading process. Moreover, there is a range of infection rate in which the reconstruction accuracy of some similarity metrics drops to nearly zero. In order to improve the similarity-based reconstruction method, we finally propose a temporal similarity metric to take into account the time information of the spreading. The reconstruction results are remarkably improved with the new method.
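
    For intuition only, the sketch below scores node pairs by how often they are infected close together in time across many observed cascades; the scoring rule and the `tau` window are illustrative assumptions of ours, not the paper's temporal similarity metric.

```python
import numpy as np

def temporal_cooccurrence_score(infection_times, tau=1.0):
    """Illustrative temporal similarity between nodes over many spreading cascades.

    `infection_times` is an (n_cascades, n_nodes) array with np.nan for nodes
    never infected in a cascade; a pair scores in a cascade when both nodes are
    infected within `tau` time units of each other.  Edges are then predicted
    by ranking node pairs by score."""
    n = infection_times.shape[1]
    score = np.zeros((n, n))
    for times in infection_times:
        seen = np.isfinite(times)
        t = np.where(seen, times, 0.0)                      # avoid NaN arithmetic
        diff = np.abs(t[:, None] - t[None, :])
        close = (diff <= tau) & seen[:, None] & seen[None, :]
        score += close
    np.fill_diagonal(score, 0.0)
    return score / len(infection_times)
```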

  12. Quantifying graininess of glossy food products

    DEFF Research Database (Denmark)

    Møller, Flemming; Carstensen, Jens Michael

    The sensory quality of yoghurt can be altered when changing the milk composition or processing conditions. Part of the sensory quality may be assessed visually. It is described how a non-contact method for quantifying surface gloss and grains in yoghurt can be made. It was found that the standard deviation of the entire image evaluated at different scales in a Gaussian Image Pyramid was a measure for graininess of yoghurt. This methodology is used to predict graininess (or grittiness) and to evaluate the effect of yoghurt composition and processing....
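
    A minimal sketch of that idea (blur-and-downsample pyramid, grey-level standard deviation per level); the sigma, the number of levels and the use of scipy are illustrative choices of ours, not the published pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def graininess_profile(image, levels=5, sigma=1.0):
    """Standard deviation of a grey-scale image at successive Gaussian pyramid levels.

    `image` is a 2-D array (an assumed input format); the returned list of
    per-level standard deviations serves as a graininess descriptor."""
    img = np.asarray(image, float)
    stds = []
    for _ in range(levels):
        stds.append(float(img.std()))
        img = gaussian_filter(img, sigma=sigma)[::2, ::2]   # next pyramid level
    return stds
```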

  13. Quantifying information leakage of randomized protocols

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Malacaria, Pasquale;

    2015-01-01

    The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach, we model refined attackers capable of observing the internal behavior of the system, and quantify the information leakage of such systems. We also use our method to obtain an algorithm for the computation of channel capacity from our Markovian models. Finally, we show how to use the method to analyze timed and non-timed attacks.

  14. Quantifying Heuristic Bias: Anchoring, Availability, and Representativeness.

    Science.gov (United States)

    Richie, Megan; Josephson, S Andrew

    2017-07-28

    Construct: Authors examined whether a new vignette-based instrument could isolate and quantify heuristic bias. Heuristics are cognitive shortcuts that may introduce bias and contribute to error. There is no standardized instrument available to quantify heuristic bias in clinical decision making, limiting future study of educational interventions designed to improve calibration of medical decisions. This study presents validity data to support a vignette-based instrument quantifying bias due to the anchoring, availability, and representativeness heuristics. Participants completed questionnaires requiring assignment of probabilities to potential outcomes of medical and nonmedical scenarios. The instrument randomly presented scenarios in one of two versions: Version A, encouraging heuristic bias, and Version B, worded neutrally. The primary outcome was the difference in probability judgments for Version A versus Version B scenario options. Of 167 participants recruited, 139 enrolled. Participants assigned significantly higher mean probability values to Version A scenario options (M = 9.56, SD = 3.75) than Version B (M = 8.98, SD = 3.76), t(1801) = 3.27, p = .001. This result remained significant analyzing medical scenarios alone (Version A, M = 9.41, SD = 3.92; Version B, M = 8.86, SD = 4.09), t(1204) = 2.36, p = .02. Analyzing medical scenarios by heuristic revealed a significant difference between Version A and B for availability (Version A, M = 6.52, SD = 3.32; Version B, M = 5.52, SD = 3.05), t(404) = 3.04, p = .003, and representativeness (Version A, M = 11.45, SD = 3.12; Version B, M = 10.67, SD = 3.71), t(396) = 2.28, p = .02, but not anchoring. Stratifying by training level, students maintained a significant difference between Version A and B medical scenarios (Version A, M = 9.83, SD = 3.75; Version B, M = 9.00, SD = 3.98), t(465) = 2.29, p = .02, but not residents or attendings. Stratifying by heuristic and training level, availability maintained

  15. Quantifying information leakage of randomized protocols

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Malacaria, Pasquale

    2015-01-01

    The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach, we model refined attackers capable of observing the internal behavior of the system, and quantify the information leakage of such systems. We also use our method to obtain an algorithm for the computation of channel capacity from our Markovian models. Finally, we show how to use the method to analyze timed and non-timed attacks.

  16. Visualizing and quantifying Fusarium oxysporum in the plant host.

    Science.gov (United States)

    Diener, Andrew

    2012-12-01

    Host-specific forms of Fusarium oxysporum infect the roots of numerous plant species. I present a novel application of familiar methodology to visualize and quantify F. oxysporum in roots. Infection in the roots of Arabidopsis thaliana, tomato, and cotton was detected with colorimetric reagents that are substrates for Fusarium spp.-derived arabinofuranosidase and N-acetyl-glucosaminidase activities and without the need for genetic modification of either plant host or fungal pathogen. Similar patterns of blue precipitation were produced by treatment with 5-bromo-4-chloro-3-indoxyl-α-l-arabinofuranoside and 5-bromo-4-chloro-3-indoxyl-2-acetamido-2-deoxy-β-d-glucopyranoside, and these patterns were consistent with prior histological descriptions of F. oxysporum in roots. Infection was quantified in roots of wild-type and mutant Arabidopsis using 4-nitrophenyl-α-l-arabinofuranoside. In keeping with an expectation that disease severity above ground is correlated with F. oxysporum infection below ground, elevated levels of arabinofuranosidase activity were measured in the roots of susceptible agb1 and rfo1 while a reduced level was detected in the resistant eir1. In contrast, disease severity and F. oxysporum infection were uncoupled in tir3. The distribution of staining patterns in roots suggests that AGB1 and RFO1 restrict colonization of the vascular cylinder by F. oxysporum whereas EIR1 promotes colonization of root apices.

  17. Quantifying population recovery rates for ecological risk assessment.

    Science.gov (United States)

    Barnthouse, Lawrence W

    2004-02-01

    Ecological effects of modern agrochemicals are typically limited to brief episodes of increased mortality or reduced growth that are qualitatively similar to natural disturbance regimes. The long-term ecological consequences of agrochemical exposures depend on the intensity and frequency of the exposures relative to the rates of recovery of the exposed populations. This paper explores the feasibility of using readily available life history information to quantify recovery rates of aquatic populations. A simple modeling framework based on the logistic population growth model is used to compare population recovery rates for different types of organisms and to evaluate the influence of life history, initial percent reduction, disturbance frequency, and immigration on the time required for populations to recover from simulated agrochemical exposures. Recovery models are developed for aquatic biota ranging in size and longevity from unicellular algae to fish and turtles. Population growth rates and recovery times derived from life history data are consistent with measured recovery times reported in mesocosm and enclosure experiments, thus supporting the use of the models for quantifying population recovery rates for ecological risk assessment.
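
    For a logistic population, the recovery time after a pulse reduction has a closed form, which is the kind of calculation such a framework performs. A small sketch (the parameterisation and thresholds are ours, not necessarily the author's):

```python
import math

def logistic_recovery_time(r, reduction, recovered=0.95):
    """Time for a logistic population to return to `recovered`*K after a pulse reduction.

    `r` is the intrinsic growth rate, `reduction` the fraction of the population
    removed by the exposure (0 < reduction < 1).  Follows from the closed-form
    logistic solution N(t)/K = 1 / (1 + A*exp(-r*t)) with A = (K - N0)/N0."""
    n0 = 1.0 - reduction          # fraction of carrying capacity left after exposure
    nt = recovered                # target fraction of carrying capacity
    return (1.0 / r) * math.log((nt / (1.0 - nt)) * ((1.0 - n0) / n0))

# Example: a population with r = 0.1 per day reduced by 90% needs
# about 51 days to reach 95% of carrying capacity.
print(logistic_recovery_time(0.1, 0.9))
```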

  18. Quantifying complexity in translational research: an integrated approach

    Science.gov (United States)

    Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.

    2014-01-01

    Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
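
    A hedged illustration of the AHP step mentioned above: priority weights are derived from a pairwise-comparison matrix and a consistency ratio is used as a guide to subjectivity. The random-index table and the example matrix are generic AHP conventions, not values from the paper.

```python
import numpy as np

# Saaty's random consistency index for matrices of size 1..9 (standard AHP table).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights (principal eigenvector) and consistency ratio of an AHP matrix."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = int(np.argmax(eigvals.real))            # dominant (Perron) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0       # consistency ratio; CR < 0.1 is usually acceptable
    return w, cr

# Illustrative 3x3 reciprocal comparison matrix (not from the case study).
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
weights, cr = ahp_weights(A)
```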

  19. Methods to Quantify Nickel in Soils and Plant Tissues

    Directory of Open Access Journals (Sweden)

    Bruna Wurr Rodak

    2015-06-01

    Full Text Available In comparison with other micronutrients, the levels of nickel (Ni) available in soils and plant tissues are very low, making quantification very difficult. The objective of this paper is to present optimized determination methods of Ni availability in soils by extractants and total content in plant tissues for routine commercial laboratory analyses. Samples of natural and agricultural soils were processed and analyzed by Mehlich-1 extraction and by DTPA. To quantify Ni in the plant tissues, samples were digested with nitric acid in a closed system in a microwave oven. The measurement was performed by inductively coupled plasma/optical emission spectrometry (ICP-OES). There was a positive and significant correlation between the levels of available Ni in the soils subjected to Mehlich-1 and DTPA extraction, while for plant tissue samples the Ni levels recovered were high and similar to the reference materials. The availability of Ni in some of the natural soil and plant tissue samples was lower than the limits of quantification. Concentrations of this micronutrient were higher in the soil samples in which Ni had been applied. Nickel concentration differed in the plant parts analyzed, with the highest levels in the grains of soybean. Grain concentrations, in comparison with shoot and leaf concentrations, were better correlated with the soil-available levels for both extractants. The methods described in this article were efficient in quantifying Ni and can be used for routine laboratory analysis of soils and plant tissues.

  20. Certain Verbs Are Syntactically Explicit Quantifiers

    Directory of Open Access Journals (Sweden)

    Anna Szabolcsi

    2010-12-01

    Full Text Available Quantification over individuals, times, and worlds can in principle be made explicit in the syntax of the object language, or left to the semantics and spelled out in the meta-language. The traditional view is that quantification over individuals is syntactically explicit, whereas quantification over times and worlds is not. But a growing body of literature proposes a uniform treatment. This paper examines the scopal interaction of aspectual raising verbs (begin, modals (can, and intensional raising verbs (threaten with quantificational subjects in Shupamem, Dutch, and English. It appears that aspectual raising verbs and at least modals may undergo the same kind of overt or covert scope-changing operations as nominal quantifiers; the case of intensional raising verbs is less clear. Scope interaction is thus shown to be a new potential diagnostic of object-linguistic quantification, and the similarity in the scope behavior of nominal and verbal quantifiers supports the grammatical plausibility of ontological symmetry, explored in Schlenker (2006.ReferencesBen-Shalom, D. 1996. Semantic Trees. Ph.D. thesis, UCLA.Bittner, M. 1993. Case, Scope, and Binding. Dordrecht: Reidel.Cresswell, M. 1990. Entities and Indices. Dordrecht: Kluwer.Cresti, D. 1995. ‘Extraction and reconstruction’. Natural Language Semantics 3: 79–122.http://dx.doi.org/10.1007/BF01252885Curry, B. H. & Feys, R. 1958. Combinatory Logic I. Dordrecht: North-Holland.Dowty, D. R. 1988. ‘Type raising, functional composition, and non-constituent conjunction’. In Richard T. Oehrle, Emmon W. Bach & Deirdre Wheeler (eds. ‘Categorial Grammars and Natural Language Structures’, 153–197. Dordrecht: Reidel.Fox, D. 2002. ‘TOn Logical Form’. In Randall Hendrick (ed. ‘Minimalist Syntax’, 82–124. Oxford: Blackwell.Gallin, D. 1975. Intensional and higher-order modal logic: with applications to Montague semantics. North Holland Pub. Co.; American Elsevier Pub. Co., Amsterdam

  1. Quantifying Variability of Manual Annotation in Cryo-Electron Tomograms.

    Science.gov (United States)

    Hecksel, Corey W; Darrow, Michele C; Dai, Wei; Galaz-Montoya, Jesús G; Chin, Jessica A; Mitchell, Patrick G; Chen, Shurui; Jakana, Jemba; Schmid, Michael F; Chiu, Wah

    2016-06-01

    Although acknowledged to be variable and subjective, manual annotation of cryo-electron tomography data is commonly used to answer structural questions and to create a "ground truth" for evaluation of automated segmentation algorithms. Validation of such annotation is lacking, but is critical for understanding the reproducibility of manual annotations. Here, we used voxel-based similarity scores for a variety of specimens, ranging in complexity and segmented by several annotators, to quantify the variation among their annotations. In addition, we have identified procedures for merging annotations to reduce variability, thereby increasing the reliability of manual annotation. Based on our analyses, we find that it is necessary to combine multiple manual annotations to increase the confidence level for answering structural questions. We also make recommendations to guide algorithm development for automated annotation of features of interest.

  2. Process Fairness and Dynamic Consistency

    NARCIS (Netherlands)

    S.T. Trautmann (Stefan); P.P. Wakker (Peter)

    2010-01-01

    textabstractAbstract: When process fairness deviates from outcome fairness, dynamic inconsistencies can arise as in nonexpected utility. Resolute choice (Machina) can restore dynamic consistency under nonexpected utility without using Strotz's precommitment. It can similarly justify dynamically

  3. Conditional Similarity Solutions of the Boussinesq Equation

    Institute of Scientific and Technical Information of China (English)

    TANG Xiao-Yan; LIN Ji; LOU Sen-Yue

    2001-01-01

    The direct method proposed by Clarkson and Kruskal is modified to obtain some conditional similarity solutions of a nonlinear physics model. Taking the (1+1)-dimensional Boussinesq equation as a simple example, six types of conditional similarity reductions are obtained.

  4. Similarity between chaos analysis and frequency analysis of pressure fluctuations in fluidized beds

    NARCIS (Netherlands)

    van der Schaaf, J; van Ommen, [No Value; Takens, F; Schouten, JC; van den Bleek, CM

    2004-01-01

    In the literature, the dynamic behavior of fluidized beds is frequently characterized by spectral analysis and chaos analysis of the pressure fluctuations that are caused by the gas-solids flow. In the case of spectral analysis, most often the power spectral density (PSD) function is quantified, for example,

  5. Shape Similarity Measures of Linear Entities

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The essence of feature matching technology lies in how to measure the similarity of spatial entities. Among all the possible similarity measures, the shape similarity measure is one of the most important because it is easy to collect the necessary parameters and it is also well matched with human intuition. In this paper a new shape similarity measure of linear entities, based on the differences of direction change along each line, is presented and its effectiveness is illustrated.
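
    A minimal sketch in the spirit of that measure: resample each polyline, compare the sequences of direction changes (turning angles), and map the mean difference to [0, 1]. The resampling density and the final scaling are our assumptions, not the paper's exact formula.

```python
import numpy as np

def turning_angles(line, n=64):
    """Resample a 2-D polyline to n points and return its turning angles (radians)."""
    line = np.asarray(line, float)
    s = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(line, axis=0), axis=1))]   # arc length
    t = np.linspace(0.0, s[-1], n)
    pts = np.c_[np.interp(t, s, line[:, 0]), np.interp(t, s, line[:, 1])]
    d = np.diff(pts, axis=0)
    headings = np.arctan2(d[:, 1], d[:, 0])
    turns = np.diff(headings)
    return (turns + np.pi) % (2.0 * np.pi) - np.pi      # wrap to (-pi, pi]

def shape_similarity(line_a, line_b):
    # 1 means identical direction-change sequences, 0 means maximally different.
    diff = turning_angles(line_a) - turning_angles(line_b)
    diff = (diff + np.pi) % (2.0 * np.pi) - np.pi
    return 1.0 - float(np.mean(np.abs(diff))) / np.pi
```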

  6. Quantifying meta-correlations in financial markets

    Science.gov (United States)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications to risk management, portfolio optimization, and to the increased stability of financial markets.
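
    A rough sketch of the meta-correlation computation (rolling mean pairwise correlation versus an equal-weighted proxy for the index return); the 22-day window and the proxy are illustrative choices of ours, not the paper's exact setup:

```python
import numpy as np
import pandas as pd

def meta_correlation(closes, window=22):
    """Correlation between rolling mean inter-stock correlation and mean return.

    `closes` is a DataFrame of daily closing prices, one column per component
    (e.g. the 30 DJIA stocks)."""
    rets = np.log(closes).diff().dropna()
    index_ret = rets.mean(axis=1)                  # equal-weighted proxy for the index return
    mean_corr, idx_ret = [], []
    for end in range(window, len(rets) + 1):
        c = rets.iloc[end - window:end].corr().values
        mean_corr.append(c[np.triu_indices_from(c, k=1)].mean())
        idx_ret.append(index_ret.iloc[end - window:end].mean())
    return float(np.corrcoef(mean_corr, idx_ret)[0, 1])
```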

  7. Quantifying the synchronizability of externally driven oscillators.

    Science.gov (United States)

    Stefański, Andrzej

    2008-03-01

    This paper is focused on the problem of complete synchronization in arrays of externally driven identical or slightly different oscillators. These oscillators are coupled by common driving which makes an occurrence of generalized synchronization between a driving signal and response oscillators possible. Therefore, the phenomenon of generalized synchronization is also analyzed here. The research is concentrated on the cases of an irregular (chaotic or stochastic) driving signal acting on continuous-time (Duffing systems) and discrete-time (Henon maps) response oscillators. As a tool for quantifying the robustness of the synchronized state, response (conditional) Lyapunov exponents are applied. The most significant result presented in this paper is a novel method of estimation of the largest response Lyapunov exponent. This approach is based on the complete synchronization of two twin response subsystems via additional master-slave coupling between them. Examples of the method application and its comparison with the classical algorithm for calculation of Lyapunov exponents are widely demonstrated. Finally, the idea of effective response Lyapunov exponents, which allows us to quantify the synchronizability in case of slightly different response oscillators, is introduced.

  8. An optimised method for quantifying glenoid orientation

    Directory of Open Access Journals (Sweden)

    Amadi Hippolite

    2008-01-01

    Full Text Available A robust quantification method is essential for inter-subject glenoid comparison and planning of total shoulder arthroplasty. This study compared various scapular and glenoid axes with each other in order to optimally define the most appropriate method of quantifying glenoid version and inclination. Six glenoid and eight scapular axes were defined and quantified from identifiable landmarks of twenty-one scapular image scans. Pathology independency and insensitivity of each axis to inter-subject morphological variation within its region was tested. Glenoid version and inclination were calculated using the best axes from the two regions. The best glenoid axis was the normal to a least-square plane fit on the glenoid rim, directed approximately medio-laterally. The best scapular axis was the normal to a plane formed by the spine root and lateral border ridge. Glenoid inclination was 15.7° ± 5.1° superiorly and version was 4.9° ± 6.1°, retroversion. The choice of axes in the present technique makes it insensitive to pathology and scapular morphological variabilities. Its application would effectively improve inter-subject glenoid version comparison, surgical planning and design of prostheses for shoulder arthroplasty.
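
    The glenoid axis described above is the normal of a least-squares plane through digitised rim points, which can be sketched with a standard SVD plane fit (a generic construction, not the authors' code):

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the least-squares plane through 3-D points (e.g. glenoid-rim landmarks).

    The normal is the right singular vector associated with the smallest
    singular value of the centred point cloud."""
    pts = np.asarray(points, float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]

# Version and inclination then follow from the angle between this normal and the
# chosen scapular reference axes.
```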

  9. Quantifying chemical reactions by using mixing analysis.

    Science.gov (United States)

    Jurado, Anna; Vázquez-Suñé, Enric; Carrera, Jesús; Tubau, Isabel; Pujades, Estanislao

    2015-01-01

    This work is motivated by the need for a sound understanding of the chemical processes that affect the organic pollutants in an urban aquifer. We propose an approach to quantify such processes using mixing calculations. The methodology consists of the following steps: (1) identification of the recharge sources (end-members) and selection of the species (conservative and non-conservative) to be used, (2) identification of the chemical processes and (3) evaluation of mixing ratios including the chemical processes. This methodology has been applied in the Besòs River Delta (NE Barcelona, Spain), where the River Besòs is the main aquifer recharge source. A total of 51 groundwater samples were collected from July 2007 to May 2010 during four field campaigns. Three river end-members were necessary to explain the temporal variability of the River Besòs: one river end-member is from the wet periods (W1) and two are from dry periods (D1 and D2). This methodology has proved to be useful not only to compute the mixing ratios but also to quantify processes such as calcite and magnesite dissolution, aerobic respiration and denitrification undergone at each observation point.
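
    A minimal sketch of the mixing-ratio step for conservative species, solving a least-squares system with the unit-sum constraint added as a heavily weighted equation; this shortcut (and the neglect of non-negativity and uncertainty handling) is our simplification, not the study's full method:

```python
import numpy as np

def mixing_ratios(endmembers, sample, weight=100.0):
    """Estimate mixing fractions of recharge end-members at one observation point.

    `endmembers` is an (n_species, n_endmembers) array of conservative-species
    concentrations; `sample` is the (n_species,) measured concentration vector.
    The constraint sum(x) = 1 is enforced as an extra, heavily weighted row."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(sample, weight)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```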

  10. Quantifying the efficiency of river regulation

    Directory of Open Access Journals (Sweden)

    R. Rödel

    2005-01-01

    Full Text Available Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff times series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using the model of linear single storage. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for regulated long-term series of the Luleälven in northern Sweden.

  11. Quantifying lateral tissue heterogeneities in hadron therapy.

    Science.gov (United States)

    Pflugfelder, D; Wilkens, J J; Szymanowski, H; Oelfke, U

    2007-04-01

    In radiotherapy with scanned particle beams, tissue heterogeneities lateral to the beam direction are problematic in two ways: they pose a challenge to dose calculation algorithms, and they lead to a high sensitivity to setup errors. In order to quantify and avoid these problems, a heterogeneity number H(i) as a method to quantify lateral tissue heterogeneities of single beam spot i is introduced. To evaluate this new concept, two kinds of potential errors were investigated for single beam spots: First, the dose calculation error has been obtained by comparing the dose distribution computed by a simple pencil beam algorithm to more accurate Monte Carlo simulations. The resulting error is clearly correlated with H(i). Second, the analysis of the sensitivity to setup errors of single beam spots also showed a dependence on H(i). From this data it is concluded that H(i) can be used as a criterion to assess the risks of a compromised delivered dose due to lateral tissue heterogeneities. Furthermore, a method how to incorporate this information into the inverse planning process for intensity modulated proton therapy is presented. By suppressing beam spots with a high value of H(i), the unfavorable impact of lateral tissue heterogeneities can be reduced, leading to treatment plans which are more robust to dose calculation errors of the pencil beam algorithm. Additional possibilities to use the information of H(i) are outlined in the discussion.

  12. Computed tomography to quantify tooth abrasion

    Science.gov (United States)

    Kofmehl, Lukas; Schulz, Georg; Deyhle, Hans; Filippi, Andreas; Hotz, Gerhard; Berndt-Dagassan, Dorothea; Kramis, Simon; Beckmann, Felix; Müller, Bert

    2010-09-01

    Cone-beam computed tomography, also termed digital volume tomography, has become a standard technique in dentistry, allowing for fast 3D jaw imaging including denture at moderate spatial resolution. More detailed X-ray images of restricted volumes for post-mortem studies in dental anthropology are obtained by means of micro computed tomography. The present study evaluates the impact of the pipe smoking wear on teeth morphology comparing the abraded tooth with its contra-lateral counterpart. A set of 60 teeth, loose or anchored in the jaw, from 12 dentitions have been analyzed. After the two contra-lateral teeth were scanned, one dataset has been mirrored before the two datasets were registered using affine and rigid registration algorithms. Rigid registration provides three translational and three rotational parameters to maximize the overlap of two rigid bodies. For the affine registration, three scaling factors are incorporated. Within the present investigation, affine and rigid registrations yield comparable values. The restriction to the six parameters of the rigid registration is not a limitation. The differences in size and shape between the tooth and its contra-lateral counterpart generally exhibit only a few percent in the non-abraded volume, validating that the contralateral tooth is a reasonable approximation to quantify, for example, the volume loss as the result of long-term clay pipe smoking. Therefore, this approach allows quantifying the impact of the pipe abrasion on the internal tooth morphology including root canal, dentin, and enamel volumes.
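
    The rigid step described above (three translations, three rotations) can be sketched for matched point sets with a Kabsch-style fit; the study registers full CT volumes, typically with intensity-based algorithms, so this is only an illustration of the transform being estimated:

```python
import numpy as np

def rigid_register(moving, fixed):
    """Kabsch-style rigid registration of matched 3-D point sets.

    Returns the rotation R and translation t minimising ||R @ p + t - q||
    over corresponding points p in `moving` and q in `fixed` (both (n, 3))."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mu_m).T @ (fixed - mu_f)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_f - R @ mu_m
    return R, t
```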

  13. Do behavioral foraging responses of prey to predators function similarly in restored and pristine foodwebs?

    Directory of Open Access Journals (Sweden)

    Elizabeth M P Madin

    Full Text Available Efforts to restore top predators in human-altered systems raise the question of whether rebounds in predator populations are sufficient to restore pristine foodweb dynamics. Ocean ecosystems provide an ideal system to test this question. Removal of fishing in marine reserves often reverses declines in predator densities and size. However, whether this leads to restoration of key functional characteristics of foodwebs, especially prey foraging behavior, is unclear. The question of whether restored and pristine foodwebs function similarly is nonetheless critically important for management and restoration efforts. We explored this question in light of one important determinant of ecosystem function and structure--herbivorous prey foraging behavior. We compared these responses for two functionally distinct herbivorous prey fishes (the damselfish Plectroglyphidodon dickii and the parrotfish Chlorurus sordidus within pairs of coral reefs in pristine and restored ecosystems in two regions of these species' biogeographic ranges, allowing us to quantify the magnitude and temporal scale of this key ecosystem variable's recovery. We demonstrate that restoration of top predator abundances also restored prey foraging excursion behaviors to a condition closely resembling those of a pristine ecosystem. Increased understanding of behavioral aspects of ecosystem change will greatly improve our ability to predict the cascading consequences of conservation tools aimed at ecological restoration, such as marine reserves.

  14. Void Coalescence Processes Quantified Through Atomistic and Multiscale Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rudd, R E; Seppala, E T; Dupuy, L M; Belak, J

    2007-01-12

    Simulation of ductile fracture at the atomic scale reveals many aspects of the fracture process including specific mechanisms associated with void nucleation and growth as a precursor to fracture and the plastic deformation of the material surrounding the voids and cracks. Recently we have studied void coalescence in ductile metals using large-scale atomistic and continuum simulations. Here we review that work and present some related investigations. The atomistic simulations involve three-dimensional strain-controlled multi-million atom molecular dynamics simulations of copper. The correlated growth of two voids during the coalescence process leading to fracture is investigated, both in terms of its onset and the ensuing dynamical interactions. Void interactions are quantified through the rate of reduction of the distance between the voids, through the correlated directional growth of the voids, and through correlated shape evolution of the voids. The critical inter-void ligament distance marking the onset of coalescence is shown to be approximately one void radius based on the quantification measurements used, independent of the initial separation distance between the voids and the strain-rate of the expansion of the system. No pronounced shear flow is found in the coalescence process. We also discuss a technique for optimizing the calculation of fine-scale information on the fly for use in a coarse-scale simulation, and discuss the specific case of a fine-scale model that calculates void growth explicitly feeding into a coarse-scale mechanics model to study damage localization.

  15. Void Coalescence Processes Quantified through Atomistic and Multiscale Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rudd, R E; Seppala, E T; Dupuy, L M; Belak, J

    2005-12-31

    Simulation of ductile fracture at the atomic scale reveals many aspects of the fracture process including specific mechanisms associated with void nucleation and growth as a precursor to fracture and the plastic deformation of the material surrounding the voids and cracks. Recently we have studied void coalescence in ductile metals using large-scale atomistic and continuum simulations. Here we review that work and present some related investigations. The atomistic simulations involve three-dimensional strain-controlled multi-million atom molecular dynamics simulations of copper. The correlated growth of two voids during the coalescence process leading to fracture is investigated, both in terms of its onset and the ensuing dynamical interactions. Void interactions are quantified through the rate of reduction of the distance between the voids, through the correlated directional growth of the voids, and through correlated shape evolution of the voids. The critical inter-void ligament distance marking the onset of coalescence is shown to be approximately one void radius based on the quantification measurements used, independent of the initial separation distance between the voids and the strain-rate of the expansion of the system. No pronounced shear flow is found in the coalescence process.

  16. A class of self-similar hydrodynamics test problems

    Energy Technology Data Exchange (ETDEWEB)

    Ramsey, Scott D [Los Alamos National Laboratory; Brown, Lowell S [Los Alamos National Laboratory; Nelson, Eric M [Los Alamos National Laboratory; Alme, Marv L [Los Alamos National Laboratory

    2010-12-08

    We consider self-similar solutions to the gas dynamics equations. One such solution - a spherical geometry Gaussian density profile - has been analyzed in the existing literature, and a connection between it, a linear velocity profile, and a uniform specific internal energy profile has been identified. In this work, we assume the linear velocity profile to construct an entire class of self-similar solutions in both cylindrical and spherical geometry, of which the Gaussian form is one possible member. After completing the derivation, we present some results in the context of a test problem for compressible flow codes.

  17. The many faces of graph dynamics

    Science.gov (United States)

    Pignolet, Yvonne Anne; Roy, Matthieu; Schmid, Stefan; Tredan, Gilles

    2017-06-01

    The topological structure of complex networks has fascinated researchers for several decades, resulting in the discovery of many universal properties and reoccurring characteristics of different kinds of networks. However, much less is known today about the network dynamics: indeed, complex networks in reality are not static, but rather dynamically evolve over time. Our paper is motivated by the empirical observation that network evolution patterns seem far from random, but exhibit structure. Moreover, the specific patterns appear to depend on the network type, contradicting the existence of a ‘one fits it all’ model. However, we still lack observables to quantify these intuitions, as well as metrics to compare graph evolutions. Such observables and metrics are needed for extrapolating or predicting evolutions, as well as for interpolating graph evolutions. To explore the many faces of graph dynamics and to quantify temporal changes, this paper suggests to build upon the concept of centrality, a measure of node importance in a network. In particular, we introduce the notion of centrality distance, a natural similarity measure for two graphs which depends on a given centrality, characterizing the graph type. Intuitively, centrality distances reflect the extent to which (non-anonymous) node roles are different or, in case of dynamic graphs, have changed over time, between two graphs. We evaluate the centrality distance approach for five evolutionary models and seven real-world social and physical networks. Our results empirically show the usefulness of centrality distances for characterizing graph dynamics compared to a null-model of random evolution, and highlight the differences between the considered scenarios. Interestingly, our approach allows us to compare the dynamics of very different networks, in terms of scale and evolution speed.
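
    A minimal sketch of a centrality distance between two snapshots of the same network, using degree centrality and an L1 norm as illustrative defaults (the paper considers several centralities and its own distance definition):

```python
import networkx as nx

def centrality_distance(g1, g2, centrality=nx.degree_centrality):
    """L1 distance between the centrality vectors of two graph snapshots.

    Only nodes present in both snapshots are compared (non-anonymous nodes);
    the choice of centrality characterises the graph type being studied."""
    nodes = sorted(set(g1) & set(g2))
    c1, c2 = centrality(g1), centrality(g2)
    return float(sum(abs(c1[n] - c2[n]) for n in nodes))
```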

  18. Methods for quantifying training in sprint kayak.

    Science.gov (United States)

    Borges, Thiago Oliveira; Bullock, Nicola; Duff, Christine; Coutts, Aaron J

    2014-02-01

    The aims of this study were to determine the validity of the session rating of perceived exertion (session-RPE) method by comparing 3 different scales of perceived exertion with common measures of training load (TL). A secondary aim was to verify the relationship between TLs, fitness, and performance in Sprint Kayak athletes. After laboratory assessment of maximal oxygen uptake (V̇O2peak) and lactate threshold, the athletes performed on-water time trials over 200 and 1,000 m. Training load was quantified for external (distance and speed) and internal (session-RPE: 6-20, category ratio [CR]-10 and CR-100 scales, training impulse [TRIMP], and individual TRIMP). Ten (6 male, 4 female) well-trained junior Sprint Kayak athletes (age 17.1 ± 1.2 years; V̇O2peak 4.2 ± 0.7 L·min-1) were monitored over a 7-week period. There were large-to-very large within-individual correlations between the session distance and the various heart rate (HR) and RPE-based methods for quantifying TL (0.58-0.91). Correlations between the mean session speed and various HR- and RPE-based methods for quantifying TL were small to large (0.12-0.50). The within-individual relationships between the various objective and subjective methods of internal TL were large to very large (0.62-0.94). Moderate-to-large inverse relationships were found between mean session-RPE TL and various aerobic fitness variables (-0.58 to -0.37). Large-to-very large relationships were found between mean session-RPE TL and on-water performance (0.57-0.75). In conclusion, session-RPE is a valid method for monitoring TL for junior Sprint Kayak athletes, regardless of the RPE scale used. The session-RPE TL relates to fitness and performance, supporting the use of session-RPE in Sprint Kayak training.
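
    The session-RPE training load referred to above is simply the whole-session rating multiplied by session duration in minutes (Foster's session-RPE method); a one-line sketch:

```python
def session_rpe_load(rpe, duration_min):
    # Session-RPE training load in arbitrary units (AU): whole-session RPE x duration.
    # Applies to any of the scales mentioned above (6-20, CR-10, CR-100); the
    # example below assumes the CR-10 scale.
    return rpe * duration_min

# e.g. a 90-minute session rated 7 on the CR-10 scale -> 630 AU
print(session_rpe_load(7, 90))
```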

  19. Looking for Similarities Between Lowland (Flash) Floods

    Science.gov (United States)

    Brauer, C.; Teuling, R.; Torfs, P.; Hobbelt, L.; Jansen, F.; Melsen, L.; Uijlenhoet, R.

    2012-12-01

    On 26 August 2010 the eastern part of The Netherlands and the bordering part of Germany were struck by a series of rainfall events. Over an area of 740 km2 more than 120 mm of rainfall were observed in 24 h. We investigated the unprecedented flash flood triggered by this exceptionally heavy rainfall event (return period > 1000 years) in the 6.5 km2 Hupsel Brook catchment, which has been the experimental watershed employed by Wageningen University since the 1960s. This study improved our understanding of the dynamics of such lowland flash floods (Brauer et al., 2011). These observations, however, only show how our experimental catchment behaved and the results cannot be extrapolated directly to different floods in other (neighboring) lowland catchments. Therefore, it is necessary to use the information collected in one well-monitored catchment in combination with data from other, less well monitored catchments to find common signatures which could describe the runoff response during a lowland flood as a function of catchment characteristics. Because of the large spatial extent of the rainfall event in August 2010, many brooks and rivers in the Netherlands and Germany flooded. With data from several catchments, we investigated the influence of rainfall and catchment characteristics (such as slope, size and land use) on the reaction of discharge to rainfall. We also investigated the runoff response in these catchments during previous floods by analyzing the relation between storage and discharge and the recession curve. In addition to the flood in August 2010, two other floods have occurred in the Netherlands recently. The three floods occurred in different parts of the country, after different types of rainfall events and with different initial conditions. We selected several catchments during each flood to compare their response and find out whether these cases are fundamentally different or were produced by the same underlying processes and can be treated in a

  20. How to quantify conduits in wood?

    Directory of Open Access Journals (Sweden)

    Alexander eScholz

    2013-03-01

    Full Text Available Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology.

  1. Quantifying creativity: can measures span the spectrum?

    Science.gov (United States)

    Simonton, Dean Keith

    2012-01-01

    Because cognitive neuroscientists have become increasingly interested in the phenomenon of creativity, the issue arises of how creativity is to be optimally measured. Unlike intelligence, which can be assessed across the full range of intellectual ability, creativity measures tend to concentrate on different sections of the overall spectrum. After first defining creativity in terms of the three criteria of novelty, usefulness, and surprise, this article provides an overview of the available measures. Not only do these instruments vary according to whether they focus on the creative process, person, or product, but they differ regarding whether they tap into “little-c” versus “Big-C” creativity; only productivity and eminence measures reach into genius-level manifestations of the phenomenon. The article closes by discussing whether various alternative assessment techniques can be integrated into a single measure that quantifies creativity across the full spectrum. PMID:22577309

  2. Message passing for quantified Boolean formulas

    CERN Document Server

    Zhang, Pan; Zdeborová, Lenka; Zecchina, Riccardo

    2012-01-01

    We introduce two types of message passing algorithms for quantified Boolean formulas (QBF). The first type is a message-passing-based heuristic that can prove unsatisfiability of the QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second type, we use message passing to guide the branching heuristics of a Davis-Putnam-Logemann-Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristics give a robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers.

  3. Quantifying decoherence in continuous variable systems

    Energy Technology Data Exchange (ETDEWEB)

    Serafini, A [Dipartimento di Fisica ' ER Caianiello' , Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy); Paris, M G A [Dipartimento di Fisica and INFM, Universita di Milano, Milan (Italy); Illuminati, F [Dipartimento di Fisica ' ER Caianiello' , Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy); De Siena, S [Dipartimento di Fisica ' ER Caianiello' , Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)

    2005-04-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)

  4. Quantifying truncation errors in effective field theory

    CERN Document Server

    Furnstahl, R J; Phillips, D R; Wesolowski, S

    2015-01-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-pr...

  5. Quantifying interspecific coagulation efficiency of phytoplankton

    DEFF Research Database (Denmark)

    Hansen, J.L.S.; Kiørboe, Thomas

    1997-01-01

    Non-sticky latex beads and sticky diatoms were used as models to describe mutual coagulation between sticky and non-sticky particles. In mixed suspensions of beads and Thalassiosira nordenskjoeldii, both types of particles coagulated into mixed aggregates at specific rates, from which the interspecific coagulation efficiency could be estimated. … Mutual coagulation between Skeletonema costatum and the non-sticky cells of Ditylum brightwellii also proceeded with half the efficiency of S. costatum alone. The latex beads were suitable to be used as 'standard particles' to quantify the ability of phytoplankton to prime aggregation…

  6. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the  Deutsche Forschungsgemeinschaft (DFG) approved the  Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program.   Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance.  Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges.   Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  7. Quantifying Power Grid Risk from Geomagnetic Storms

    Science.gov (United States)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. This electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive relative risk maps by location by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.

  8. A Simulation Platform for Quantifying Survival Bias

    DEFF Research Database (Denmark)

    Mayeda, Elizabeth Rose; Tchetgen Tchetgen, Eric J; Power, Melinda C

    2016-01-01

    Bias due to selective mortality is a potential concern in many studies and is especially relevant in cognitive aging research because cognitive impairment strongly predicts subsequent mortality. Biased estimation of the effect of an exposure on the rate of cognitive decline can occur when mortality is selective. … a simulation platform with which to quantify the expected bias in longitudinal studies of determinants of cognitive decline. We evaluated potential survival bias in naive analyses under several selective survival scenarios, assuming that exposure had no effect on cognitive decline for anyone in the population. Compared … in high-mortality situations. This simulation platform provides a flexible tool for evaluating biases in studies with high mortality, as is common in cognitive aging research.

  9. Quantifying the risk of extreme aviation accidents

    Science.gov (United States)

    Das, Kumer Pial; Dey, Asim Kumer

    2016-12-01

    Air travel is considered a safe means of transportation, but when aviation accidents do occur they often result in fatalities. Fortunately, the most extreme accidents occur rarely. However, 2014 was the deadliest year in the past decade, with 111 plane crashes, and the worst four crashes caused 298, 239, 162 and 116 deaths. In this study, we assess the risk of catastrophic aviation accidents by studying historical aviation accidents. Applying a generalized Pareto model, we predict the maximum fatalities from an aviation accident in the future. The fitted model is compared with some of its competitive models. The uncertainty in the inferences is quantified using simulated aviation accident series, generated by bootstrap resampling and Monte Carlo simulations.
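
    A short sketch of the peaks-over-threshold approach described above, using SciPy's generalized Pareto distribution and bootstrap resampling (SciPy is assumed available). The fatality series mixes the four counts quoted in the abstract with made-up values, and the threshold and return period are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Worst-accident fatality counts: the first four are quoted above, the rest are made up
fatalities = np.array([298, 239, 162, 116, 150, 189, 127, 210, 175, 132], dtype=float)
u = 100.0                      # threshold defining an "extreme" accident
excesses = fatalities - u

# Fit a generalized Pareto distribution to the threshold excesses
shape, _, scale = genpareto.fit(excesses, floc=0.0)

# Fatality level exceeded on average once every 100 threshold exceedances
level = u + genpareto.ppf(1 - 1 / 100, shape, loc=0.0, scale=scale)

# Bootstrap resampling to quantify the uncertainty of that level
boot = []
for _ in range(1000):
    resample = rng.choice(excesses, size=excesses.size, replace=True)
    c, _, s = genpareto.fit(resample, floc=0.0)
    boot.append(u + genpareto.ppf(1 - 1 / 100, c, loc=0.0, scale=s))

print(level, np.percentile(boot, [2.5, 97.5]))
```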

  10. Historic Food Production Shocks: Quantifying the Extremes

    Directory of Open Access Journals (Sweden)

    Aled W. Jones

    2016-04-01

    Full Text Available Understanding global food production trends is vital for ensuring food security and for allowing the world to develop appropriate policies to manage the food system. Over the past few years, there has been increasing attention on the global food system, particularly after the extreme shocks seen in food prices after 2007. Several papers and working groups have explored the links between food production and various societal impacts; however, they often categorise production shocks in different ways, even to the extent of identifying different levels, countries and timings for shocks. In this paper we present a simple method to quantify and categorise cereal production shocks at a country level. This method can be used as a baseline for other studies that examine the impact of these production shocks on the global food system.
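
    A minimal sketch of how a country-level production shock could be flagged: years in which production falls below a simple rolling trend by more than a fixed fraction. The rolling-mean trend, the -10% threshold and the production figures are illustrative assumptions, not the paper's exact definition.

```python
import pandas as pd

def flag_shocks(production: pd.Series, threshold: float = -0.10) -> pd.Series:
    """Flag years in which detrended cereal production falls more than
    `threshold` (e.g. -10%) below a centred 5-year rolling-mean trend."""
    trend = production.rolling(5, center=True, min_periods=3).mean()
    relative_deviation = (production - trend) / trend
    return relative_deviation < threshold

# Hypothetical country-level cereal production (million tonnes)
prod = pd.Series([52, 54, 55, 41, 56, 57, 58, 59, 60, 47, 61.0],
                 index=range(2000, 2011))
print(flag_shocks(prod))   # True marks shock years (here 2003 and 2009)
```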

  11. Quantifying the Anthropogenic Footprint in Eastern China

    Science.gov (United States)

    Meng, Chunlei; Dou, Youjun

    2016-04-01

    The urban heat island (UHI) is one of the main foci of urban climate studies. The parameterization of anthropogenic heat (AH) is crucially important in UHI studies, but a universal method to parameterize the spatial pattern of AH is still lacking. This paper uses the NOAA DMSP/OLS nighttime light data to parameterize the spatial pattern of AH. Two experiments were designed and performed to quantify the influence of AH on land surface temperature (LST) in eastern China and 24 big cities. The annual mean heating caused by AH is up to 1 K in eastern China. This paper uses the relative rather than the absolute LST differences between the control run and the contrast run of the Common Land Model (CoLM) to identify the drivers. The heating effect of the anthropogenic footprint has less influence on relatively warm and wet cities.
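
    A toy sketch of one way nighttime lights can be used to give AH a spatial pattern: distribute a regional total heat release over grid cells in proportion to the DMSP/OLS digital numbers. The proportional-allocation rule, the total flux value and the variable names are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def anthropogenic_heat(night_light, total_ah=1.0e9):
    """Distribute a regional total anthropogenic heat release (W) over grid
    cells in proportion to DMSP/OLS night-light digital numbers (0-63)."""
    dn = np.asarray(night_light, dtype=float)
    weights = dn / dn.sum()        # spatial pattern from night lights
    return total_ah * weights      # W allocated to each grid cell

# Tiny illustrative grid of digital numbers
grid = np.array([[0, 5, 20],
                 [10, 40, 63],
                 [0, 0, 8]])
print(anthropogenic_heat(grid))
```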

  12. Animal biometrics: quantifying and detecting phenotypic appearance.

    Science.gov (United States)

    Kühl, Hjalmar S; Burghardt, Tilo

    2013-07-01

    Animal biometrics is an emerging field that develops quantified approaches for representing and detecting the phenotypic appearance of species, individuals, behaviors, and morphological traits. It operates at the intersection between pattern recognition, ecology, and information sciences, producing computerized systems for phenotypic measurement and interpretation. Animal biometrics can benefit a wide range of disciplines, including biogeography, population ecology, and behavioral research. Currently, real-world applications are gaining momentum, augmenting the quantity and quality of ecological data collection and processing. However, to advance animal biometrics will require integration of methodologies among the scientific disciplines involved. Such efforts will be worthwhile because the great potential of this approach rests with the formal abstraction of phenomics, to create tractable interfaces between different organizational levels of life.

  13. Quantifying decoherence in continuous variable systems

    CERN Document Server

    Serafini, A; Illuminati, F; De Siena, S

    2005-01-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some nonclassicality indicators in phase space and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wave packets.

  14. How to quantify conduits in wood?

    Science.gov (United States)

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology.

  15. Quantifying capital goods for waste landfilling

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Stentsøe, Steen; Willumsen, Hans Christian

    2013-01-01

    Materials and energy used for construction of a hill-type landfill of 4 million m3 were quantified in detail. The landfill is engineered with a liner and leachate collection system, as well as a gas collection and control system. Gravel and clay were the most common materials used, amounting to approximately 260 kg per tonne of waste landfilled. The environmental burdens from the extraction and manufacturing of the materials used in the landfill, as well as from the construction of the landfill, were modelled as potential environmental impacts. For example, the potential impact on global warming was 2.5 kg carbon dioxide (CO2) equivalents or 0.32 milli person equivalents per tonne of waste. The potential impacts from the use of materials and construction of the landfill are low-to-insignificant compared with data reported in the literature on impact potentials of landfills in operation…
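
    A back-of-envelope check of the reported order of magnitude: per-tonne material masses multiplied by generic emission factors. Both the masses and the factors below are illustrative guesses, not the study's inventory data.

```python
# Illustrative per-tonne-of-waste material masses (kg) and generic emission
# factors (kg CO2-eq per kg of material); all values are assumptions.
materials_kg_per_tonne = {"gravel": 200, "clay": 60, "HDPE liner": 0.5, "steel": 0.3}
emission_factor = {"gravel": 0.005, "clay": 0.005, "HDPE liner": 2.0, "steel": 1.9}

total = sum(materials_kg_per_tonne[m] * emission_factor[m]
            for m in materials_kg_per_tonne)
print(f"{total:.2f} kg CO2-eq per tonne of waste landfilled")  # same order as ~2.5 kg
```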

  16. Quantifying capital goods for waste incineration

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Riber, C.; Christensen, Thomas Højlund

    2013-01-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed…

  17. Quantifying creativity: can measures span the spectrum?

    Science.gov (United States)

    Simonton, Dean Keith

    2012-03-01

    Because cognitive neuroscientists have become increasingly interested in the phenomenon of creativity, the issue arises of how creativity is to be optimally measured. Unlike intelligence, which can be assessed across the full range of intellectual ability, creativity measures tend to concentrate on different sections of the overall spectrum. After first defining creativity in terms of the three criteria of novelty, usefulness, and surprise, this article provides an overview of the available measures. Not only do these instruments vary according to whether they focus on the creative process, person, or product, but they differ regarding whether they tap into "little-c" versus "Big-C" creativity; only productivity and eminence measures reach into genius-level manifestations of the phenomenon. The article closes by discussing whether various alternative assessment techniques can be integrated into a single measure that quantifies creativity across the full spectrum.

  18. Quantifying structural states of soft mudrocks

    Science.gov (United States)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to such transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach, and the critical intervals with low-strength shale formations are identified.

  19. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji

    2016-07-14

    Self-similar processes have been widely used in modeling real-world phenomena occurring in environmetrics, network traffic, image processing, and stock pricing, to name but a few. The estimation of the degree of self-similarity has been studied extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi-self-similarity for a random field indexed in higher dimensions. If self-similarity is not rejected, our test provides a set of estimated self-similarity indexes. The key is to test stationarity of the inverse Lamperti transformations of the process. The inverse Lamperti transformation of a self-similar process is a strongly stationary process, revealing a theoretical connection between the two processes. To demonstrate the capability of our test, we test self-similarity of fractional Brownian motions and sheets, their time deformations and mixtures with Gaussian white noise, and the generalized Cauchy family. We also apply the self-similarity test to real data: annual minimum water levels of the Nile River, network traffic records, and surface heights of food wrappings. © 2016, International Biometric Society.
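
    A small sketch of the key idea stated above: the inverse Lamperti transform Y(s) = exp(-Hs) X(exp(s)) of an H-self-similar process is stationary, so stationarity of Y can be checked instead of self-similarity of X. The fractional Brownian motion generator, the crude variance-comparison check and the parameter values are illustrative simplifications of the paper's formal test, not its implementation.

```python
import numpy as np

def fbm(n, H, rng, T=1.0):
    """Fractional Brownian motion sampled at n points of (0, T], generated
    via a Cholesky factorisation of its covariance function."""
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

def inverse_lamperti(t, x, H):
    """Y(s) = exp(-H*s) * X(exp(s)) at s = log(t); stationary iff X is H-self-similar."""
    s = np.log(t)
    return s, np.exp(-H * s) * x

H = 0.7
rng = np.random.default_rng(1)
t, x = fbm(512, H, rng)
s, y = inverse_lamperti(t, x, H)

# Crude stationarity check: the two halves of Y should have comparable variance
half = len(y) // 2
print(np.var(y[:half]), np.var(y[half:]))
```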

  20. OTTO MOTOR DYNAMICS

    OpenAIRE

    Petrescu, Florian Ion Tiberiu; Polytechnic University of Bucharest; Petrescu, Relly Victoria Virgil; Polytechnic University of Bucharest

    2016-01-01

    Otto engine dynamics are similar in almost all common internal combustion engines, so one can speak in the same way about the dynamics of Lenoir, Otto, and Diesel engines. The dynamic model presented here is simple and original. The first step in the calculation of Otto engine dynamics is to determine the inertial mass reduced at the piston; the Lagrange equation is then used. The dynamic equation of motion of the piston, obtained by integrating the Lagrange equation, takes a new form. It presents a new r...

  1. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    Science.gov (United States)

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help
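
    In the same spirit, a compact sketch of how a climatological range and an anomaly series can be derived from a single-location record. The synthetic SST series, the monthly climatology and the two metrics printed at the end are illustrative assumptions, not the study's exact algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Synthetic weekly SST record for one reef location (degrees C)
dates = pd.date_range("2003-01-01", "2012-12-31", freq="7D")
sst = pd.Series(26 + 2 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
                + rng.normal(0, 0.3, dates.size), index=dates)

# Monthly climatology and its range (a stand-in for a "climatological range limit")
month = sst.index.month
climatology = sst.groupby(month).mean()
clim_range = climatology.max() - climatology.min()

# Anomalies: departure of each observation from its climatological month
anomaly = sst - climatology.loc[month].to_numpy()

print(f"climatological range: {clim_range:.2f} C, largest anomaly: {anomaly.abs().max():.2f} C")
```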

  2. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    Directory of Open Access Journals (Sweden)

    Jamison M Gove

    Full Text Available Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations

  3. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    Science.gov (United States)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  4. Molecular quantum similarity using conceptual DFT descriptors

    Indian Academy of Sciences (India)

    Patrick Bultinck; Ramon Carbó-Dorca

    2005-09-01

    This paper reports a Molecular Quantum Similarity study for a set of congeneric steroid molecules, using as basic similarity descriptors the electron density ρ(r), the shape function σ(r), the Fukui functions f+(r) and f−(r), and the local softnesses s+(r) and s−(r). Correlations are investigated between similarity indices for each pair of descriptors used and compared, to assess whether these different descriptors sample different information and to investigate what information is revealed by each descriptor.
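
    Similarity indices in molecular quantum similarity are commonly of the Carbó form Z_AB / sqrt(Z_AA Z_BB), where Z_XY approximates the overlap integral of two descriptor fields on a grid. A toy one-dimensional sketch follows; the Gaussian "densities", the grid quadrature and the function name are purely illustrative and not taken from the paper.

```python
import numpy as np

def carbo_index(f_a, f_b, weights):
    """Carbó-type similarity index Z_AB / sqrt(Z_AA * Z_BB), where
    Z_XY = sum_i w_i * f_X(r_i) * f_Y(r_i) approximates an overlap integral."""
    z_ab = np.sum(weights * f_a * f_b)
    z_aa = np.sum(weights * f_a * f_a)
    z_bb = np.sum(weights * f_b * f_b)
    return z_ab / np.sqrt(z_aa * z_bb)

# Toy 1-D "densities": two Gaussians sampled on a common grid
r = np.linspace(-5, 5, 501)
w = np.full_like(r, r[1] - r[0])       # simple quadrature weights
rho_A = np.exp(-r ** 2)
rho_B = np.exp(-(r - 0.5) ** 2)
print(carbo_index(rho_A, rho_B, w))    # 1.0 means identical fields
```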

  5. Cluster Tree Based Hybrid Document Similarity Measure

    Directory of Open Access Journals (Sweden)

    M. Varshana Devi

    2015-10-01

    Full Text Available A cluster-tree-based similarity measure is established to measure hybrid similarity. In a cluster tree, the hybrid similarity measure can be calculated even for random data that do not co-occur, and different views can be generated. The different views of the tree can be combined, and the one that is most significant in terms of cost can be chosen. A method is proposed to combine the multiple views, in which views represented by different distance measures are merged into a single cluster. Compared with traditional statistical methods, the cluster-tree-based hybrid similarity gives better feasibility for intelligent search and helps to improve dimensionality reduction and semantic analysis.

  6. Similarity effects in visual working memory.

    Science.gov (United States)

    Jiang, Yuhong V; Lee, Hyejin J; Asaad, Anthony; Remington, Roger

    2016-04-01

    Perceptual similarity is an important property of multiple stimuli. Its computation supports a wide range of cognitive functions, including reasoning, categorization, and memory recognition. It is important, therefore, to determine why previous research has found conflicting effects of inter-item similarity on visual working memory. Studies reporting a similarity advantage have used simple stimuli whose similarity varied along a featural continuum. Studies reporting a similarity disadvantage have used complex stimuli from either a single or multiple categories. To elucidate stimulus conditions for similarity effects in visual working memory, we tested memory for complex stimuli (faces) whose similarity varied along a morph continuum. Participants encoded 3 morphs generated from a single face identity in the similar condition, or 3 morphs generated from different face identities in the dissimilar condition. After a brief delay, a test face appeared at one of the encoding locations for participants to make a same/different judgment. Two experiments showed that similarity enhanced memory accuracy without changing the response criterion. These findings support previous computational models that incorporate featural variance as a component of working memory load. They delineate limitations of models that emphasize cortical resources or response decisions.

  7. Quantifier hierarchies over the first-Order definable tree languages

    Institute of Scientific and Technical Information of China (English)

    沈云付

    1996-01-01

    Using Boolean operations and the concatenation product with respect to special trees, quantifier hierarchies are given by way of alternating existential and universal quantifiers for the first-order definable tree languages.

  8. Similarities in precursory features in seismic shocks and epileptic seizures

    Science.gov (United States)

    Kapiris, P. G.; Polygiannakis, J.; Li, X.; Yao, X.; Eftaxias, K. A.

    2005-02-01

    Theoretical studies suggest that the final earthquake (EQ) and neural-seizure dynamics should have many similar features and could be analyzed within similar mathematical frameworks. Herein, by monitoring the temporal evolution of the fractal spectral characteristics in EEG time series and pre-seismic electromagnetic (EM) time series we show that many similar distinctive symptoms (including common alterations in associated scaling parameters) emerge as epileptic seizures (ES) and EQs are approaching. These alterations reveal a gradual reduction of complexity as the catastrophic events approach. The transition from anti-persistent to persistent behaviour may indicate that the onset of a severe crisis is imminent. The observations find a unifying explanation within the school of the "Intermittent Criticality".

  9. Burridge-Knopoff model and self-similarity

    CERN Document Server

    Akishin, P G; Budnik, A D; Ivanov, V V; Antoniou, I

    1997-01-01

    Seismic processes are well known to be self-similar in both their spatial and temporal behavior. At the same time, the Burridge-Knopoff (BK) model of earthquake fault dynamics, one of the basic models of theoretical seismicity, does not possess self-similarity. In this article an extension of the BK model is presented, which directly accounts for the self-similarity of the elastic properties of the earth's crust by introducing nonlinear terms for the inter-block springs of the BK model. The phase space analysis of the model has shown it to behave like a system of coupled randomly kicked oscillators. The nonlinear stiffness terms cause the synchronization of collective motion and produce stronger seismic events.

  10. Computational protein design quantifies structural constraints on amino acid covariation.

    Directory of Open Access Journals (Sweden)

    Noah Ollikainen

    Full Text Available Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.
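
    As a small illustration of how covariation between two alignment positions can be scored, the sketch below uses mutual information, one common covariation score; it is not necessarily the statistic used by the authors, and the toy alignment is invented.

```python
import numpy as np
from collections import Counter

def column_mutual_information(col_i, col_j):
    """Mutual information (bits) between two alignment columns; a common
    score for amino acid covariation between sequence positions."""
    n = len(col_i)
    count_i = Counter(col_i)
    count_j = Counter(col_j)
    count_ij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), n_ab in count_ij.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) )
        mi += (n_ab / n) * np.log2(n_ab * n / (count_i[a] * count_j[b]))
    return mi

# Toy alignment in which the two positions co-vary perfectly
sequences = ["AR", "AR", "GK", "GK", "AR", "GK"]
col0 = [s[0] for s in sequences]
col1 = [s[1] for s in sequences]
print(column_mutual_information(col0, col1))   # 1.0 bit for this toy example
```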

  11. Time-quantifiable Monte Carlo method for simulating a magnetization-reversal process

    Science.gov (United States)

    Cheng, X. Z.; Jalil, M. B. A.; Lee, H. K.; Okabe, Y.

    2005-09-01

    We propose a time-quantifiable Monte Carlo (MC) method to simulate the thermally induced magnetization reversal for an isolated single-domain particle system. The MC method involves the determination of the density of states and the use of the master equation for time evolution. We derive an analytical factor to convert MC steps into real time intervals. Unlike a previous time-quantified MC method, our method is readily scalable to arbitrarily long time scales, and can be repeated for different temperatures with minimal computational effort. Based on the conversion factor, we are able to make a direct comparison between the results obtained from MC and Langevin dynamics methods and find excellent agreement between them. An analytical formula for the magnetization reversal time is also derived, which agrees very well with both numerical Langevin and time-quantified MC results, over a large temperature range and for parallel and oblique easy axis orientations.

  12. Retinoid-binding proteins: similar protein architectures bind similar ligands via completely different ways.

    Directory of Open Access Journals (Sweden)

    Yu-Ru Zhang

    Full Text Available BACKGROUND: Retinoids are a class of compounds that are chemically related to vitamin A, which is an essential nutrient that plays a key role in vision, cell growth and differentiation. In vivo, retinoids must bind with specific proteins to perform their necessary functions. Plasma retinol-binding protein (RBP and epididymal retinoic acid binding protein (ERABP carry retinoids in bodily fluids, while cellular retinol-binding proteins (CRBPs and cellular retinoic acid-binding proteins (CRABPs carry retinoids within cells. Interestingly, although all of these transport proteins possess similar structures, the modes of binding for the different retinoid ligands with their carrier proteins are different. METHODOLOGY/PRINCIPAL FINDINGS: In this work, we analyzed the various retinoid transport mechanisms using structure and sequence comparisons, binding site analyses and molecular dynamics simulations. Our results show that in the same family of proteins and subcellular location, the orientation of a retinoid molecule within a binding protein is same, whereas when different families of proteins are considered, the orientation of the bound retinoid is completely different. In addition, none of the amino acid residues involved in ligand binding is conserved between the transport proteins. However, for each specific binding protein, the amino acids involved in the ligand binding are conserved. The results of this study allow us to propose a possible transport model for retinoids. CONCLUSIONS/SIGNIFICANCE: Our results reveal the differences in the binding modes between the different retinoid-binding proteins.

  13. On the similarity of symbol frequency distributions with heavy tails

    CERN Document Server

    Gerlach, Martin; Altmann, Eduardo G

    2015-01-01

    Quantifying the similarity between symbolic sequences is a traditional problem in Information Theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA over music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf's law). The large number of low-frequency symbols in these distributions poses major difficulties to the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here we show how the accuracy of estimations depends on the sample size $N$, not only for the Shannon entropy $(\alpha=1)$ and its corresponding similarity measures (e.g., the Jensen-Shannon divergence) but also for measures based on the generalized entropy of order $\alpha$. For small $\alpha$'s, including $\alpha=1$, the bias and fluctuations in the estimations decay more slowly than the $1/N$ decay observed in short-tailed distributions. For $\alpha$ larger ...
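
    A compact illustration of the finite-size effect described above: two samples drawn from the same Zipf distribution have a true Jensen-Shannon divergence of zero, yet the naive plug-in estimate stays clearly positive and shrinks only slowly with N. The Zipf exponent and sample sizes below are arbitrary choices for illustration.

```python
import numpy as np
from collections import Counter

def jensen_shannon(counts_p, counts_q):
    """Naive plug-in estimate of the Jensen-Shannon divergence (bits)
    between two symbol-frequency samples, with no bias correction."""
    keys = sorted(set(counts_p) | set(counts_q))
    p = np.array([counts_p.get(k, 0) for k in keys], dtype=float)
    q = np.array([counts_q.get(k, 0) for k in keys], dtype=float)
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)

    def entropy(x):
        x = x[x > 0]
        return -np.sum(x * np.log2(x))

    return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

rng = np.random.default_rng(3)
for n in (10**3, 10**4, 10**5):
    a = Counter(rng.zipf(1.8, size=n))   # heavy-tailed symbol frequencies
    b = Counter(rng.zipf(1.8, size=n))
    print(n, jensen_shannon(a, b))       # true value is 0; the estimate decays slowly
```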

  14. Learning Faster by Discovering and Exploiting Object Similarities

    Directory of Open Access Journals (Sweden)

    Tadej Janež

    2013-03-01

    Full Text Available In this paper we explore the question: “Is it possible to speed up the learning process of an autonomous agent by performing experiments in a more complex environment (i.e., an environment with a greater number of different objects?” To this end, we use a simple robotic domain, where the robot has to learn a qualitative model predicting the change in the robot’s distance to an object. To quantify the environment’s complexity, we defined cardinal complexity as the number of objects in the robot’s world, and behavioural complexity as the number of objects’ distinct behaviours. We propose Error reduction merging (ERM, a new learning method that automatically discovers similarities in the structure of the agent’s environment. ERM identifies different types of objects solely from the data measured and merges the observations of objects that behave in the same or similar way in order to speed up the agent’s learning. We performed a series of experiments in worlds of increasing complexity. The results in our simple domain indicate that ERM was capable of discovering structural similarities in the data, which indeed made learning faster and clearly superior to conventional learning. This observed trend occurred with various machine learning algorithms used inside the ERM method.

  15. Quantifying capital goods for biological treatment of organic waste

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Petersen, Per H.; Nielsen, Peter D.

    2015-01-01

    Materials and energy used for construction of anaerobic digestion (AD) and windrow composting plants were quantified in detail. The two technologies were quantified in collaboration with consultants and producers of the parts used to construct the plants. The composting plants were quantified bas...

  16. Mining Diagnostic Assessment Data for Concept Similarity

    Science.gov (United States)

    Madhyastha, Tara; Hunt, Earl

    2009-01-01

    This paper introduces a method for mining multiple-choice assessment data for similarity of the concepts represented by the multiple choice responses. The resulting similarity matrix can be used to visualize the distance between concepts in a lower-dimensional space. This gives an instructor a visualization of the relative difficulty of concepts…
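
    A sketch of the kind of pipeline the abstract describes: build an item-item similarity matrix from response data and embed it in two dimensions for visualization. The correct/incorrect coding, the correlation-based similarity and the classical MDS embedding are illustrative choices, not necessarily those of the authors.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical multiple-choice data: rows = students, columns = items,
# entries = 1 if the item was answered correctly, 0 otherwise
responses = rng.integers(0, 2, size=(200, 6)).astype(float)

# Item-item similarity as the correlation of response patterns; distance = 1 - r
corr = np.corrcoef(responses.T)
dist = 1.0 - corr

# Classical (Torgerson) MDS: embed the items in 2-D for visual inspection
n = dist.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dist ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)
top = np.argsort(eigval)[::-1][:2]
coords = eigvec[:, top] * np.sqrt(np.maximum(eigval[top], 0.0))

print(coords)   # one 2-D point per item; nearby points indicate similar concepts
```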

  17. Similar methodological analysis involving the user experience.

    Science.gov (United States)

    Almeida e Silva, Caio Márcio; Okimoto, Maria Lúcia R L; Tanure, Raffaela Leane Zenni

    2012-01-01

    This article deals with the use of a protocol for the analysis of similar methodologies involving the user experience. Articles reporting experiments in the area were selected, analyzed on the basis of the similarity analysis protocol, and finally synthesized and associated.

  18. Outsourced Similarity Search on Metric Data Assets

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Assent, Ira; Jensen, Christian S.

    2012-01-01

    This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example...

  19. Appropriate Similarity Measures for Author Cocitation Analysis

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo)

    2007-01-01

    textabstractWe provide a number of new insights into the methodological discussion about author cocitation analysis. We first argue that the use of the Pearson correlation for measuring the similarity between authors’ cocitation profiles is not very satisfactory. We then discuss what kind of similar

  20. Interleaving Helps Students Distinguish among Similar Concepts

    Science.gov (United States)

    Rohrer, Doug

    2012-01-01

    When students encounter a set of concepts (or terms or principles) that are similar in some way, they often confuse one with another. For instance, they might mistake one word for another word with a similar spelling (e.g., allusion instead of illusion) or choose the wrong strategy for a mathematics problem because it resembles a different kind of…