WorldWideScience

Sample records for acid-water complexes measured

  1. The benzoic acid-water complex: a potential atmospheric nucleation precursor studied using microwave spectroscopy and ab initio calculations.

    Science.gov (United States)

    Schnitzler, Elijah G; Jäger, Wolfgang

    2014-02-14

    The pure rotational, high-resolution spectrum of the benzoic acid-water complex was measured in the range of 4-14 GHz, using a cavity-based molecular beam Fourier-transform microwave spectrometer. In all, 40 a-type transitions and 2 b-type transitions were measured for benzoic acid-water, and 12 a-type transitions were measured for benzoic acid-D2O. The equilibrium geometry of benzoic acid-water was determined with ab initio calculations, at the B3LYP, M06-2X, and MP2 levels of theory, with the 6-311++G(2df,2pd) basis set. The experimental rotational spectrum is most consistent with the B3LYP-predicted geometry. Narrow splittings were observed in the b-type transitions, and possible tunnelling motions were investigated using the B3LYP/6-311++G(d,p) level of theory. Rotation of the water moiety about the lone electron pair hydrogen-bonded to benzoic acid, across a barrier of 7.0 kJ mol⁻¹, is the most likely cause of the splitting. Wagging of the unbound hydrogen atom of water is barrier-less, and this large-amplitude motion results in the absence of c-type transitions. The interaction and spectroscopic dissociation energies calculated using B3LYP and MP2 are in good agreement, but those calculated using M06-2X indicate excess stabilization, possibly due to dispersive interactions being over-estimated. The equilibrium constant of hydration was calculated by statistical thermodynamics, using ab initio results and the experimental rotational constants. This allowed us to estimate the changes in the percentage of hydrated benzoic acid with variations in altitude, region, and season. Using monitoring data from Calgary, Alberta, and the MP2-predicted dissociation energy, a yearly average of 1% of benzoic acid is expected to be present in the form of benzoic acid-water. However, this percentage depends sensitively on the dissociation energy. For example, when using the M06-2X-predicted dissociation energy, we find it increases to 18%.
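
    A minimal sketch (not from the paper) of the last step described above: given an equilibrium constant of hydration, the hydrated fraction follows from the water vapour concentration. The function is generic; the K_eq and [H2O] values below are illustrative placeholders, not the paper's numbers.

    ```python
    # Fraction of an acid present as its monohydrate, given an equilibrium
    # constant K_eq for acid + H2O <-> acid-H2O (units must be consistent).
    def hydrated_fraction(k_eq: float, water_conc: float) -> float:
        """Fraction hydrated = K_eq*[H2O] / (1 + K_eq*[H2O])."""
        x = k_eq * water_conc
        return x / (1.0 + x)

    # Hypothetical K_eq (cm^3 molecule^-1) and water vapour density (cm^-3):
    print(hydrated_fraction(k_eq=1e-19, water_conc=1e17))  # ~0.01, i.e. ~1%
    ```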

  2. Comparison of the SAWNUC model with CLOUD measurements of sulphuric acid-water nucleation

    Science.gov (United States)

    Ehrhart, Sebastian; Ickes, Luisa; Almeida, Joao; Amorim, Antonio; Barmet, Peter; Bianchi, Federico; Dommen, Josef; Dunne, Eimear M.; Duplissy, Jonathan; Franchin, Alessandro; Kangasluoma, Juha; Kirkby, Jasper; Kürten, Andreas; Kupc, Agnieszka; Lehtipalo, Katrianne; Nieminen, Tuomo; Riccobono, Francesco; Rondo, Linda; Schobesberger, Siegfried; Steiner, Gerhard; Tomé, António; Wimmer, Daniela; Baltensperger, Urs; Wagner, Paul E.; Curtius, Joachim

    2016-10-01

    Binary nucleation of sulphuric acid-water particles is expected to be an important process in the free troposphere at low temperatures. SAWNUC (Sulphuric Acid Water Nucleation) is a model of binary nucleation that is based on laboratory measurements of the binding energies of sulphuric acid and water in charged and neutral clusters. Predictions of SAWNUC are compared for the first time comprehensively with experimental binary nucleation data from the CLOUD chamber at the European Organization for Nuclear Research (CERN). The experimental measurements span a temperature range of 208-292 K, sulphuric acid concentrations from 1×10⁶ to 1×10⁹ cm⁻³, and distinguish between ion-induced and neutral nucleation. Good agreement, within a factor of 5, is found between the experimental and modeled formation rates for ion-induced nucleation at 278 K and below and for neutral nucleation at 208 and 223 K. Differences at warm temperatures are attributed to ammonia contamination, which was indicated by the presence of ammonia-sulphuric acid clusters detected by an Atmospheric Pressure Interface Time of Flight (APi-TOF) mass spectrometer. APi-TOF measurements of the sulphuric acid ion cluster distributions ((H2SO4)i·HSO4⁻ with i = 0, 1, ..., 10) show qualitative agreement with the SAWNUC ion cluster distributions. Remaining differences between the measured and modeled distributions are most likely due to fragmentation in the APi-TOF. The CLOUD results are in good agreement with previously measured cluster binding energies and show the SAWNUC model to be a good representation of ion-induced and neutral binary nucleation of sulphuric acid-water clusters in the middle and upper troposphere.

  3. Comparison of the SAWNUC model with CLOUD measurements of sulphuric acid-water nucleation

    CERN Document Server

    Ehrhart, Sebastian; Almeida, Joao; Amorim, Antonio; Barmet, Peter; Bianchi, Federico; Dommen, Josef; Dunne, Eimear M; Duplissy, Jonathan; Franchin, Alessandro; Kangasluoma, Juha; Kirkby, Jasper; Kürten, Andreas; Kupc, Agnieszka; Lehtipalo, Katrianne; Nieminen, Tuomo; Riccobono, Francesco; Rondo, Linda; Schobesberger, Siegfried; Steiner, Gerhard; Tomé, António; Wimmer, Daniela; Baltensperger, Urs; Wagner, Paul E; Curtius, Joachim

    2016-01-01

    Binary nucleation of sulphuric acid-water particles is expected to be an important process in the free troposphere at low temperatures. SAWNUC (Sulphuric Acid Water Nucleation) is a model of binary nucleation that is based on laboratory measurements of the binding energies of sulphuric acid and water in charged and neutral clusters. Predictions of SAWNUC are compared for the first time comprehensively with experimental binary nucleation data from the CLOUD chamber at the European Organization for Nuclear Research (CERN). The experimental measurements span a temperature range of 208–292 K, sulphuric acid concentrations from 1×10⁶ to 1×10⁹ cm⁻³, and distinguish between ion-induced and neutral nucleation. Good agreement, within a factor of 5, is found between the experimental and modeled formation rates for ion-induced nucleation at 278 K and below and for neutral nucleation at 208 and 223 K. Differences at warm temperatures are attributed to ammonia contamination, which was indicated by the presence of ammonia-sulphu...

  4. Lewis acid-water/alcohol complexes as hydrogen atom donors in radical reactions.

    Science.gov (United States)

    Povie, Guillaume; Renaud, Philippe

    2013-01-01

    Water and low molecular weight alcohols are, due to their availability, low price, and low toxicity, ideal reagents for organic synthesis. Recently, it was reported that, despite the very high bond dissociation energy (BDE) of the O-H bond, they can be used as hydrogen atom donors in place of expensive and/or toxic group 14 metal hydrides when boron and titanium(III) Lewis acids are present. This finding represents a considerable innovation and uncovers a new perspective on the paradigm of hydrogen atom transfers to radicals. We discuss here the influence of complex formation and other association processes on the efficacy of the hydrogen transfer step. A delicate balance between activation by complex formation and deactivation by further hydrogen bonding is operative.

  5. Hydration of the simplest α-keto acid: a rotational spectroscopic and ab initio study of the pyruvic acid-water complex.

    Science.gov (United States)

    Schnitzler, Elijah G; Seifert, Nathan A; Ghosh, Supriya; Thomas, Javix; Xu, Yunjie; Jäger, Wolfgang

    2017-02-08

    Intermolecular interactions between pyruvic acid, the simplest α-keto acid, and water are important in bio- and atmospheric chemistry. In this context, the pure rotational spectrum of the pyruvic acid-water complex was measured from 7 to 15 GHz using a cavity-based Fourier-transform microwave spectrometer. In the detected isomer, water acts as a hydrogen bond donor and acceptor, bridging the acidic hydrogen and the keto oxygen. Both a- and b-type transitions were observed; however, c-type transitions were not observed, due to vibrational averaging of the effectively barrier-less wagging motion of the free hydrogen of the water subunit, which results in an effective ground state structure with a plane of symmetry. The mass distribution out of the ab-plane, corrected for the out-of-plane hydrogen atoms of the methyl group, confirms that the complex has a plane of symmetry. The observed transitions exhibit splittings due to internal rotations of the water subunit and the methyl group. The proposed internal rotation of water nominally breaks one hydrogen bond, so it is remarkable that the barrier was calculated to be as low as 5.2 kJ mol⁻¹; however, a non-covalent interactions analysis indicates that water rotation has surprisingly little effect on the interactions between the water and pyruvic acid subunits. The barrier to methyl internal rotation was determined to be about 4.6 kJ mol⁻¹ experimentally, significantly higher than that of the pyruvic acid monomer. In general, the structure and dynamics investigated here provide insights into the interactions between pyruvic acid and water that dictate the fate of pyruvic acid in aqueous aerosols and living cells.

  6. Measuring static complexity

    Directory of Open Access Journals (Sweden)

    Ben Goertzel

    1992-01-01

    Full Text Available The concept of “pattern” is introduced, formally defined, and used to analyze various measures of the complexity of finite binary sequences and other objects. The standard Kolmogoroff-Chaitin-Solomonoff complexity measure is considered, along with Bennett's “logical depth”, Koppel's “sophistication”, and Chaitin's analysis of the complexity of geometric objects. The pattern-theoretic point of view illuminates the shortcomings of these measures and leads to specific improvements; it gives rise to two novel mathematical concepts, “orders” of complexity and “levels” of pattern; and it yields a new measure of complexity, the “structural complexity”, which measures the total amount of structure an entity possesses.

  7. Viral quasispecies complexity measures.

    Science.gov (United States)

    Gregori, Josep; Perales, Celia; Rodriguez-Frias, Francisco; Esteban, Juan I; Quer, Josep; Domingo, Esteban

    2016-06-01

    Mutant spectrum dynamics (changes in the related mutants that compose viral populations) has a decisive impact on virus behavior. The several platforms of next-generation sequencing (NGS) offer a magnifying glass with which to study viral quasispecies complexity. Several parameters are available to quantify the complexity of mutant spectra, but they have limitations. Here we critically evaluate the information provided by several population diversity indices, and we propose the introduction of some new ones used in ecology. In particular, we make a distinction between incidence, abundance, and function measures of viral quasispecies composition. We suggest a multidimensional approach (complementary information contributed by adequately chosen indices), propose some guidelines, and illustrate the use of indices with a simple example. We apply the indices to three clinical samples of hepatitis C virus that display different population heterogeneity. Areas of virus biology in which population complexity plays a role are discussed.
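
    As a concrete illustration of the incidence and abundance measures mentioned above, here is a sketch of three standard ecology-style diversity indices applied to hypothetical haplotype frequencies; the numbers and the selection of indices are illustrative, not taken from the paper's hepatitis C data.

    ```python
    # Diversity indices over a mutant spectrum (illustrative frequencies).
    import math

    freqs = [0.60, 0.20, 0.10, 0.05, 0.05]  # hypothetical haplotype frequencies

    richness = len(freqs)                           # incidence: number of haplotypes
    shannon = -sum(p * math.log(p) for p in freqs)  # abundance: Shannon entropy
    simpson = 1 - sum(p * p for p in freqs)         # abundance: Gini-Simpson index

    print(richness, round(shannon, 3), round(simpson, 3))
    ```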

  8. Measuring Complexity through Average Symmetry

    OpenAIRE

    Alamino, Roberto C.

    2015-01-01

    This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...

  9. Measuring Complexity of SAP Systems

    Directory of Open Access Journals (Sweden)

    Ilja Holub

    2016-10-01

    Full Text Available The paper discusses the reasons for the rise of complexity in the ERP system SAP R/3 and proposes a method for measuring the complexity of SAP. Based on this method, a computer program in ABAP for measuring the complexity of a particular SAP implementation is proposed as a tool for keeping ERP complexity under control. The main principle of the measurement method is counting the number of items or relations in the system. The proposed computer program is based on counting the records in organization tables in SAP.

  10. Study on properties of complexes of polycarboxylic acid water reducing agent and sodium citrate

    Institute of Scientific and Technical Information of China (English)

    李萍; 蔡其全; 陈军超; 李薇; 宋明健; 张建兵; 唐小刚

    2013-01-01

    This paper studied the compounding of sodium citrate with a polycarboxylic acid water reducing agent, investigated the influence of the compounded product at different sodium citrate dosages on the fluidity of cement paste, the setting time, the compressive strength of mortar, and other properties, and put forward an optimal compound recipe for the polycarboxylic acid water reducing agent and sodium citrate. The experiments showed that, at a polycarboxylic acid water reducing agent dosage of 0.13%, and considering together the retarding effect, the auxiliary plasticizing effect, and the strength contribution of sodium citrate to the mortar, the appropriate dosage of sodium citrate is 0.02%-0.03%.

  11. Hierarchy measure for complex networks

    CERN Document Server

    Mones, Enys; Vicsek, Tamás

    2012-01-01

    Nature, technology and society are full of complexity arising from the intricate web of the interactions among the units of the related systems (e.g., proteins, computers, people). Consequently, one of the most successful recent approaches to capturing the fundamental features of the structure and dynamics of complex systems has been the investigation of the networks associated with the above units (nodes) together with their relations (edges). Most complex systems have an inherently hierarchical organization and, correspondingly, the networks behind them also exhibit hierarchical features. Indeed, several papers have been devoted to describing this essential aspect of networks, however, without resulting in a widely accepted, converging concept concerning the quantitative characterization of the level of their hierarchy. Here we develop an approach and propose a quantity (measure) which is simple enough to be widely applicable, reveals a number of universal features of the organization of real-world networks...

  12. Hierarchy Measure for Complex Networks

    Science.gov (United States)

    Mones, Enys; Vicsek, Lilla; Vicsek, Tamás

    2012-01-01

    Nature, technology and society are full of complexity arising from the intricate web of the interactions among the units of the related systems (e.g., proteins, computers, people). Consequently, one of the most successful recent approaches to capturing the fundamental features of the structure and dynamics of complex systems has been the investigation of the networks associated with the above units (nodes) together with their relations (edges). Most complex systems have an inherently hierarchical organization and, correspondingly, the networks behind them also exhibit hierarchical features. Indeed, several papers have been devoted to describing this essential aspect of networks, however, without resulting in a widely accepted, converging concept concerning the quantitative characterization of the level of their hierarchy. Here we develop an approach and propose a quantity (measure) which is simple enough to be widely applicable, reveals a number of universal features of the organization of real-world networks and, as we demonstrate, is capable of capturing the essential features of the structure and the degree of hierarchy in a complex network. The measure we introduce is based on a generalization of the m-reach centrality, which we first extend to directed/partially directed graphs. Then, we define the global reaching centrality (GRC), which is the difference between the maximum and the average value of the generalized reach centralities over the network. We investigate the behavior of the GRC considering both a synthetic model with an adjustable level of hierarchy and real networks. Results for real networks show that our hierarchy measure is related to the controllability of the given system. We also propose a visualization procedure for large complex networks that can be used to obtain an overall qualitative picture about the nature of their hierarchical structure. PMID:22470477
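
    The GRC definition quoted above is straightforward to compute. The sketch below follows the abstract's wording (maximum minus average local reaching centrality); the paper's exact normalization may differ, and the star graph is only an illustrative test case.

    ```python
    # Global reaching centrality (GRC) of a directed graph.
    import networkx as nx

    def local_reaching_centrality(g: nx.DiGraph, node) -> float:
        """Fraction of the other nodes reachable from `node` by directed paths."""
        return len(nx.descendants(g, node)) / (g.number_of_nodes() - 1)

    def global_reaching_centrality(g: nx.DiGraph) -> float:
        c = [local_reaching_centrality(g, v) for v in g.nodes]
        return max(c) - sum(c) / len(c)

    # An out-directed star is maximally hierarchical: the hub reaches every
    # node, the leaves reach none, so GRC approaches 1.
    star = nx.DiGraph([(0, i) for i in range(1, 11)])
    print(global_reaching_centrality(star))  # ~0.91 for 11 nodes
    ```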

  13. Turbulence measurements over complex terrain

    Science.gov (United States)

    Skupniewicz, Charles E.; Kamada, Ray F.; Schacher, Gordon E.

    1989-07-01

    Horizontal turbulence measurements obtained from 22 wind sensors located on 9 towers in a mountainous coastal area are described and categorized by stability and terrain. Vector wind time series are high-pass filtered, and lateral and longitudinal wind speed variance is calculated for averaging times ranging from 15 s to 2 h. Parameterizations of the functional dependence of variance on averaging time are discussed, and a modification of Panofsky's (1988) uniform terrain technique applicable to complex terrain is presented. The parameterization is applied to the data and shown to be more realistic than a less complicated power law technique. The parameter values are shown to be different than the flat terrain cases of Kaimal et al. (1972), and are primarily a function of sensor location within the complex terrain. The parameters are also examined in terms of their dependence upon season, stability, marine boundary-layer height, and measurement height.

  14. Complexity measurement based on information theory and Kolmogorov complexity.

    Science.gov (United States)

    Lui, Leong Ting; Terrazas, Germán; Zenil, Hector; Alexander, Cameron; Krasnogor, Natalio

    2015-01-01

    In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; these two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and simulations of the self-organization of porphyrin molecules.
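
    A toy rendering of the two ingredients named above, with no claim about how the article actually combines them: Shannon entropy over the symbol distribution, plus a zlib compression ratio as the usual computable stand-in for Kolmogorov complexity.

    ```python
    # Entropy and compressibility of short symbol strings (illustrative).
    import math
    import zlib
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        n = len(s)
        return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

    def compression_ratio(s: str) -> float:
        raw = s.encode()
        return len(zlib.compress(raw)) / len(raw)

    for seq in ("01" * 32, "0" * 64, "0110100110010110" * 4):
        print(seq[:16], round(shannon_entropy(seq), 3), round(compression_ratio(seq), 3))
    ```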

  15. Metric for Early Measurement of Software Complexity

    Directory of Open Access Journals (Sweden)

    Ghazal Keshavarz

    2011-06-01

    Full Text Available Software quality depends on several factors, such as on-time delivery, staying within budget, and fulfilling users' needs. Complexity is one of the most important factors that may affect quality. Therefore, measuring and controlling complexity results in improved quality. So far, most research has tried to identify and measure complexity in the design and code phases. However, when we have the code or design for software, it is too late to control complexity. In this article, with emphasis on the Requirement Engineering process, we analyze the causes of software complexity, particularly in the first phase of software development, and propose a requirement-based metric. This metric enables a software engineer to measure complexity before actual design and implementation and to choose strategies appropriate to the degree of software complexity, thus saving cost and human resources and, more importantly, leading to lower maintenance costs.

  16. Measurement methods on the complexity of network

    Institute of Scientific and Technical Information of China (English)

    LIN Lin; DING Gang; CHEN Guo-song

    2010-01-01

    Based on the size of a network and the number of paths in the network, we proposed a model to measure the topology complexity of the network. Based on analyses of the effects of the number of pieces of equipment, the types of equipment, and the processing time of the nodes on the complexity of an equipment-constrained network, a complexity model of the equipment-constrained network was constructed to measure its integrated complexity. Algorithms for the two models were also developed. An automatic generator of random single-label networks was developed to test the models. The results show that the models can correctly evaluate the topology complexity and the integrated complexity of the networks.

  17. Cardiac Aging Detection Using Complexity Measures

    CERN Document Server

    Balasubramanian, Karthi

    2016-01-01

    As we age, our hearts undergo changes which result in reduction in complexity of physiological interactions between different control mechanisms. This results in a potential risk of cardiovascular diseases which are the number one cause of death globally. Since cardiac signals are nonstationary and nonlinear in nature, complexity measures are better suited to handle such data. In this study, non-invasive methods for detection of cardiac aging using complexity measures are explored. Lempel-Ziv (LZ) complexity, Approximate Entropy (ApEn) and Effort-to-Compress (ETC) measures are used to differentiate between healthy young and old subjects using heartbeat interval data. We show that both LZ and ETC complexity measures are able to differentiate between young and old subjects with only 10 data samples while ApEn requires at least 15 data samples.
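
    For reference, the first of these measures can be sketched in a few lines: a Lempel-Ziv (LZ76) phrase count over a binarized heartbeat-interval series. The median-threshold binarization and the RR values are illustrative assumptions, not the study's protocol.

    ```python
    # LZ76 complexity of a binarized RR-interval series (illustrative data).
    import statistics

    def lz76_complexity(s: str) -> int:
        """Number of distinct phrases in the LZ76 parsing of s."""
        i, c, n = 0, 0, len(s)
        while i < n:
            l = 1
            # Grow the phrase while it already occurs in the preceding text.
            while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                l += 1
            c += 1
            i += l
        return c

    rr = [0.81, 0.79, 0.84, 0.80, 0.78, 0.85, 0.83, 0.77, 0.82, 0.80]
    binary = "".join("1" if x > statistics.median(rr) else "0" for x in rr)
    print(binary, lz76_complexity(binary))
    ```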

  18. MATHEMATICAL FOUNDATION OF A NEW COMPLEXITY MEASURE

    Institute of Scientific and Technical Information of China (English)

    SHEN En-hua; CAI Zhi-jie; GU Fan-ji

    2005-01-01

    For many continuous bio-medical signals with both strong nonlinearity and non-stationarity, two criteria were proposed for their complexity estimation: (1) only a short data set is needed for robust estimation; (2) no over-coarse graining preprocessing, such as transferring the original signal into a binary time series, is needed. The C0 complexity measure proposed by us previously is one such measure. However, it lacks a solid mathematical foundation and thus its use is limited. A modified version of this measure is proposed, and some important properties are proved rigorously. According to these properties, this measure can be considered an index of the randomness of time series in some senses, and thus also a quantitative index of complexity under the meaning of randomness-finding complexity. Compared with other similar measures, this measure seems more suitable for estimating a large quantity of complexity measures for a given task, such as studying the dynamic variation of such measures in sliding windows of a long process, owing to its fast estimation speed.

  19. Measures of complexity in signal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kurths, J.; Schwarz, U.; Witt, A. [Arbeitsgruppe Nichtlineare Dynamik der Max-Planck-Gesellschaft an der Universitaet Potsdam, Am Neuen Palais, D-14415 Potsdam, PSF 601553 (Germany); Krampe, R.T. [Institut fuer Psychologie, Universitaet Potsdam, Am Neuen Palais, D-14415 Potsdam, PSF 601553 (Germany); Abel, M. [Arbeitsgruppe Nichtlineare Dynamik der Max-Planck-Gesellschaft an der Universitaet Potsdam, Am Neuen Palais, D-14415 Potsdam, PSF 601553 (Germany)

    1996-06-01

    Observational data of natural systems, as measured in astrophysical, geophysical or physiological experiments, are typically quite different from those obtained in laboratories. Due to the peculiarities of these data, well-known characteristics, such as periodicities or fractal dimension, often do not provide a suitable description. To study such data, we present here the use of measures of complexity, which are mainly based on symbolic dynamics. We distinguish two types of such quantities: traditional measures (e.g. algorithmic complexity), which are measures of randomness, and alternative measures (e.g. ε-complexity), which relate highest complexity to some critical points. It is important to note that there is no optimum measure of complexity; its choice should depend on the context. Mostly, a combination of some such quantities is appropriate. Applying this concept to three examples in astrophysics, cardiology and cognitive psychology, we show that it can be helpful also in cases where other tools of data analysis fail. © 1996 American Institute of Physics.

  20. A Framework for Evaluating Complex Networks Measurements

    CERN Document Server

    Comin, Cesar H; Costa, Luciano da F

    2014-01-01

    A good deal of current research in complex networks involves the characterization and/or classification of the topological properties of given structures, which has motivated several respective measurements. This letter proposes a framework for evaluating the quality of complex network measurements in terms of their effective resolution, degree of degeneracy and discriminability. The potential of the suggested approach is illustrated with respect to comparing the characterization of several model and real-world networks by using concentric and symmetry measurements. The results indicate a markedly superior performance for the latter type of mapping.

  1. Wind turbine wake measurement in complex terrain

    Science.gov (United States)

    Hansen, KS; Larsen, GC; Menke, R.; Vasiljevic, N.; Angelou, N.; Feng, J.; Zhu, WJ; Vignaroli, A.; Liu, W. W.; Xu, C.; Shen, WZ

    2016-09-01

    SCADA data from a wind farm and high-frequency time series measurements obtained with remote scanning systems have been analysed with a focus on the identification of wind turbine wake properties in complex terrain. The analysis indicates that within the flow regime characterized by medium to large downstream distances (more than 5 diameters) from the wake-generating turbine, the wake changes according to local atmospheric conditions, e.g. vertical wind speed. In very complex terrain the wake effects are often “overruled” by distortion effects due to the terrain complexity or topology.

  2. Bernoulli measure of complex admissible kneading sequences

    CERN Document Server

    Bruin, Henk

    2012-01-01

    Iterated quadratic polynomials give rise to a rich collection of different dynamical systems that are parametrized by a simple complex parameter $c$. The different dynamical features are encoded by the kneading sequence, which is an infinite sequence over $\{0,1\}$. Not every such sequence actually occurs in complex dynamics. The set of admissible kneading sequences was described by Milnor and Thurston for real quadratic polynomials, and by the authors in the complex case. We prove that the set of admissible kneading sequences has positive Bernoulli measure within the set of sequences over $\{0,1\}$.

  3. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2009-01-01

    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image contents. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. We propose a patch-based complexity measure, based on how well the patch can be approximated using singular value decomposition. As such, the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database.

  4. A complexity measure for diachronic Chinese phonology

    CERN Document Server

    Raman, Anand; Newman, John; Patrick, Jon

    1997-01-01

    This paper addresses the problem of deriving distance measures between parent and daughter languages with specific relevance to historical Chinese phonology. The diachronic relationship between the languages is modelled as a Probabilistic Finite State Automaton. The Minimum Message Length principle is then employed to find the complexity of this structure. The idea is that this measure is representative of the amount of dissimilarity between the two languages.

  5. Complexity measurement of natural and artificial languages

    CERN Document Server

    Febres, Gerardo; Gershenson, Carlos

    2013-01-01

    We compared entropy for texts written in natural languages (English, Spanish) and artificial languages (computer software) based on a simple expression for the entropy as a function of message length and specific word diversity. Code text written in artificial languages showed higher entropy than text of similar length expressed in natural languages. Spanish texts exhibit more symbolic diversity than English ones. Results showed that algorithms based on complexity measures differentiate artificial from natural languages, and that text analysis based on complexity measures allows the unveiling of important aspects of their nature. We propose specific expressions to examine entropy-related aspects of texts and estimate the values of entropy, emergence, self-organization and complexity based on specific diversity and message length.
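
    The quantities named above (message length, specific word diversity, entropy) are simple to compute for any text; a minimal sketch under a naive whitespace-tokenization assumption:

    ```python
    # Message length, word diversity, and word-level Shannon entropy of a text.
    import math
    from collections import Counter

    def text_stats(text: str):
        words = text.lower().split()
        n = len(words)
        counts = Counter(words)
        entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
        return n, len(counts) / n, entropy  # length, diversity, entropy

    print(text_stats("the quick brown fox jumps over the lazy dog"))
    ```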

  6. Wind turbine wake measurement in complex terrain

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.; Menke, Robert;

    2016-01-01

    SCADA data from a wind farm and high frequency time series measurements obtained with remote scanning systems have been analysed with focus on identification of wind turbine wake properties in complex terrain. The analysis indicates that within the flow regime characterized by medium to large...

  7. Measuring Customer Profitability in Complex Environments

    DEFF Research Database (Denmark)

    Holm, Morten; Kumar, V.; Rohde, Carsten

    2012-01-01

    Customer profitability measurement is an important element in customer relationship management and a lever for enhanced marketing accountability. Two distinct measurement approaches have emerged in the marketing literature: Customer Lifetime Value (CLV) and Customer Profitability Analysis (CPA). Myriad models have been demonstrated within these two approaches across industries. However, limited efforts have been made to explain when sophisticated CLV or CPA models will be most useful. This paper explores the advantages and limitations of sophisticated CLV and CPA models and proposes that the degree of sophistication deployed when implementing customer profitability measurement models is determined by the type of complexity encountered in firms’ customer environments. This gives rise to a contingency framework for customer profitability measurement model selection and five research...

  8. Study on fluorescence spectra of molecular association of acetic acid-water

    Institute of Scientific and Technical Information of China (English)

    Caiqin Han; Ying Liu; Yang Yang; Xiaowu Ni; Jian Lu; Xiaosen Luo

    2009-01-01

    Fluorescence spectra of acetic acid-water solution excited by ultraviolet (UV) light are studied, and the relationship between the fluorescence spectra and the molecular association of acetic acid is discussed. The results indicate that when the excitation wavelength is longer than 246 nm, there are two fluorescence peaks, located at 305 and 334 nm, respectively. By measuring the excitation spectra, the optimal excitation wavelengths of the two fluorescence peaks are obtained: 258 and 284 nm, respectively. Fluorescence spectra of acetic acid-water solution change with concentration, which is primarily attributed to changes in the molecular association of acetic acid in aqueous solution. Through theoretical analysis, three forms of molecular association have been identified in acetic acid-water solution: the hydrated monomers, the linear dimers, and the water-separated dimers. This research can provide a reference for studies of the molecular association of acetic acid-water, especially studies of hydrogen bonds.

  9. Balancing model complexity and measurements in hydrology

    Science.gov (United States)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
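
    Of the complexity-control approaches listed above, the Akaike Information Criterion is the easiest to state concretely: AIC = 2k - 2 ln L, trading parameter count k against fit likelihood L. A sketch in its least-squares (Gaussian error) form, with hypothetical fit numbers:

    ```python
    # AIC comparison of two hypothetical hydrological model fits.
    import math

    def aic_gaussian(rss: float, n_obs: int, k_params: int) -> float:
        """AIC for least-squares fits: n*ln(RSS/n) + 2k (constants dropped)."""
        return n_obs * math.log(rss / n_obs) + 2 * k_params

    print(aic_gaussian(rss=52.1, n_obs=100, k_params=3))   # simple bucket model
    print(aic_gaussian(rss=48.9, n_obs=100, k_params=12))  # more complex model
    # The simpler model scores lower (better) despite the slightly worse fit.
    ```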

  10. Consistently weighted measures for complex network topologies

    CERN Document Server

    Heitzig, Jobst; Zou, Yong; Marwan, Norbert; Kurths, Jürgen

    2011-01-01

    When network and graph theory are used in the study of complex systems, a typically finite set of nodes of the network under consideration is frequently either explicitly or implicitly considered representative of a much larger finite or infinite set of objects of interest. The selection procedure, e.g., formation of a subset or some kind of discretization or aggregation, typically results in individual nodes of the studied network representing quite differently sized parts of the domain of interest. This heterogeneity may induce substantial bias and artifacts in derived network statistics. To avoid this bias, we propose an axiomatic scheme based on the idea of node splitting invariance to derive consistently weighted variants of various commonly used statistical network measures. The practical relevance and applicability of our approach is demonstrated for a number of example networks from different fields of research, and is shown to be of fundamental importance in particular in the study of climate n...

  11. Thermodynamic properties of citric acid and the system citric acid-water

    NARCIS (Netherlands)

    Kruif, C.G. de; Miltenburg, J.C. van; Sprenkels, A.J.J.; Stevens, G.; Graaf, W. de; Wit, H.G.M. de

    1982-01-01

    The binary system citric acid-water has been investigated with static vapour pressure measurements, adiabatic calorimetry, solution calorimetry, solubility measurements and powder X-ray measurements. The data are correlated by thermodynamics and a large part of the phase diagram is given. Molar heat

  13. High resolution pollutant measurements in complex urban ...

    Science.gov (United States)

    Measuring air pollution in real-time using an instrumented vehicle platform has been an emerging strategy to resolve air pollution trends at a very fine spatial scale (10s of meters). Achieving second-by-second data representative of urban air quality trends requires advanced instrumentation, such as a quantum cascade laser utilized to resolve carbon monoxide and real-time optical detection of black carbon. An equally challenging area of development is the processing and visualization of complex geospatial air monitoring data to decipher key trends of interest. EPA's Office of Research and Development staff have applied air monitoring to evaluate community air quality in a variety of environments, including assessing air quality surrounding rail yards, evaluating noise wall or tree stand effects on roadside and on-road air quality, and surveying traffic-related exposure zones for comparison with land-use regression estimates. ORD has ongoing efforts to improve mobile monitoring data collection and interpretation, including instrumentation testing, evaluating the effect of post-processing algorithms on derived trends, and developing a web-based tool called Real-Time Geospatial Data Viewer (RETIGO) allowing for a simple plug-and-play of mobile monitoring data. Example findings from mobile data sets include an estimated 50% reduction in roadside ultrafine particle levels when immediately downwind of a noise barrier, increases in neighborhood-wide black carbon levels (3

  14. Measuring multiple evolution mechanisms of complex networks.

    Science.gov (United States)

    Zhang, Qian-Ming; Xu, Xiao-Ke; Zhu, Yu-Xiao; Zhou, Tao

    2015-01-01

    Numerous concise models such as preferential attachment have been put forward to reveal the evolution mechanisms of real-world networks, which show that real-world networks are usually jointly driven by a hybrid mechanism of multiplex features instead of a single pure mechanism. To get an accurate simulation of real networks, some researchers have proposed hybrid models that mix multiple evolution mechanisms. Nevertheless, how a hybrid mechanism of multiplex features jointly influences network evolution is not very clear. In this study, we introduce two methods (link prediction and likelihood analysis) to measure multiple evolution mechanisms of complex networks. Through extensive experiments on artificial networks, which can be controlled to follow multiple mechanisms with different weights, we find that the method based on likelihood analysis performs much better and gives very accurate estimations. Finally, we apply this method to some real-world networks from different domains (including technology networks and social networks) and different countries (e.g., USA and China), to see how popularity and clustering co-evolve. We find most of them are affected by both popularity and clustering, but with quite different weights.

  15. A New Method for Measurement and Reduction of Software Complexity

    Institute of Scientific and Technical Information of China (English)

    SHI Yindun; XU Shiyi

    2007-01-01

    This paper develops an improved structural software complexity metric named information flow complexity, which is closely related to the reliability of software. Together with three other software complexity metrics, the total software complexity is measured, and some rules for reducing the complexity are presented. To illustrate and explain the process of measuring and reducing software complexity, several examples and experiments are given. It is proposed that software complexity metrics can be measured earlier in software development and can provide substantial information about software systems whose reliability can be modeled and used in the determination of initial parameter estimation.

  16. THE COMPLEX METHOD OF MEASURING ORGANIZATIONAL COMMUNICATIONS

    OpenAIRE

    I.A. Maltsev; L.S. Nikolaeva

    2008-01-01

    The role of organizational communications constantly increases. The paper is focused on the management of organizational communications by creating a complex method of organizational communications diagnostics.

  17. THE COMPLEX METHOD OF MEASURING ORGANIZATIONAL COMMUNICATIONS

    Directory of Open Access Journals (Sweden)

    I.A. Maltsev

    2008-12-01

    Full Text Available The role of organizational communications constantly increases. The paper is focused on the management of organizational communications by creating a complex method of organizational communications diagnostics.

  18. On System Complexity: Identification, Measurement, and Management

    OpenAIRE

    Casti, J.L.

    1985-01-01

    Attempts to axiomatize and formalize system complexity all leave a feeling of basic incompleteness and a sense of failure to grasp important aspects of the problem. This paper examines some of the root causes of these failures and outlines a framework for the consideration of complexity as an implicate, rather than explicate, property of systems in interaction.

  19. Complexity measures in magnetoencephalography: measuring "disorder" in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Matthew J Brookes

    Full Text Available This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of 'disorder' in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices).

  20. Complexity measures in magnetoencephalography: measuring "disorder" in schizophrenia.

    Science.gov (United States)

    Brookes, Matthew J; Hall, Emma L; Robson, Siân E; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B; Liddle, Peter F; Robinson, Stephen E; Morris, Peter G

    2015-01-01

    This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of 'disorder' in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices).

  1. Complex Fuzzy Set-Valued Complex Fuzzy Measures and Their Properties

    Science.gov (United States)

    Ma, Shengquan; Li, Shenggang

    2014-01-01

    Let F*(K) be the set of all fuzzy complex numbers. In this paper some classical and measure-theoretical notions are extended to the case of complex fuzzy sets. They are fuzzy complex number-valued distance on F*(K), fuzzy complex number-valued measure on F*(K), and some related notions, such as null-additivity, pseudo-null-additivity, null-subtraction, pseudo-null-subtraction, autocontinuity from above, autocontinuity from below, and autocontinuity of the defined fuzzy complex number-valued measures. Properties of fuzzy complex number-valued measures are studied in detail. PMID:25093202

  2. Complex fuzzy set-valued complex fuzzy measures and their properties.

    Science.gov (United States)

    Ma, Shengquan; Li, Shenggang

    2014-01-01

    Let F*(K) be the set of all fuzzy complex numbers. In this paper some classical and measure-theoretical notions are extended to the case of complex fuzzy sets. They are fuzzy complex number-valued distance on F*(K), fuzzy complex number-valued measure on F*(K), and some related notions, such as null-additivity, pseudo-null-additivity, null-subtraction, pseudo-null-subtraction, autocontinuity from above, autocontinuity from below, and autocontinuity of the defined fuzzy complex number-valued measures. Properties of fuzzy complex number-valued measures are studied in detail.

  3. Evaluating quantitative measures of grammatical complexity in spontaneous speech samples.

    Science.gov (United States)

    Blake, J; Quartaro, G; Onorati, S

    1993-02-01

    The validity of MLU and a measure of syntactic complexity were tested against LARSP on spontaneous speech samples from 87 children, ranging in age from 1;6 to 4;9. Change in some LARSP clausal measures was found across MLU stages up to MLU 4.5. For the measure of syntactic complexity, no such ceiling was found for the clausal connectivity score in LARSP or for average clausal complexity in LARSP. Neither MLU nor the measure of syntactic complexity indexed LARSP phrasal complexity. It is concluded that MLU is a valid measure of clausal complexity up to 4.5 and that our measure of syntactic complexity is more valid at more advanced stages.
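
    Once utterances are segmented into morphemes, MLU itself is a one-line computation. A sketch over hand-segmented toy utterances (real coding follows Brown's rules, not reproduced here):

    ```python
    # Mean length of utterance (MLU) in morphemes over a toy sample.
    utterances = [
        ["doggie", "run", "-ing"],      # "doggie running"
        ["I", "want", "-ed", "that"],   # "I wanted that"
        ["more", "juice"],
    ]
    mlu = sum(len(u) for u in utterances) / len(utterances)
    print(round(mlu, 2))  # (3 + 4 + 2) / 3 = 3.0
    ```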

  4. Techniques to measure complex-plane fields

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2014-09-25

    Full Text Available In this work we construct coherent superpositions of Gaussian and vortex modes which can be described to occupy the complex-plane. We demonstrate how these fields can be experimentally constructed in a digital, controllable manner with a spatial...

  5. Complexity analysis in particulate matter measurements

    OpenAIRE

    Luciano Telesca; Michele Lovallo

    2011-01-01

    We investigated the complex temporal fluctuations of particulate matter data recorded in the London area by using the Fisher-Shannon (FS) information plane. In the FS plane the PM10 and PM2.5 data are aggregated in two different clusters, characterized by different degrees of order and organization. These results could be related to different sources of the particulate matter.

  6. Measurement of protein-ligand complex formation.

    Science.gov (United States)

    Lowe, Peter N; Vaughan, Cara K; Daviter, Tina

    2013-01-01

    Experimental approaches to detect, measure, and quantify protein-ligand binding, along with their theoretical bases, are described. A range of methods for the detection of protein-ligand interactions is summarized. Specific protocols are provided for a nonequilibrium procedure (a pull-down assay); for an equilibrium direct binding method and its modification into a competition-based measurement; and for steady-state measurements based on the effects of ligands on enzyme catalysis.
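
    The equilibrium direct binding method mentioned above rests on a simple saturation relationship; a sketch for one-site binding with an illustrative dissociation constant:

    ```python
    # Fractional saturation for one-site protein-ligand binding.
    def fraction_bound(ligand: float, kd: float) -> float:
        """[PL] / [P]_total = [L] / (K_d + [L]) for free ligand concentration [L]."""
        return ligand / (kd + ligand)

    # Hypothetical K_d of 2 uM probed over a ligand titration (uM):
    for l in (0.5, 1, 2, 5, 10, 50):
        print(l, round(fraction_bound(l, kd=2.0), 3))  # half-bound at [L] = K_d
    ```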

  7. Quantum Measurement, Complexity and Discrete Physics

    OpenAIRE

    Leckey, Martin

    2003-01-01

    This paper presents a new modified quantum mechanics, Critical Complexity Quantum Mechanics, which includes a new account of wavefunction collapse. This modified quantum mechanics is shown to arise naturally from a fully discrete physics, where all physical quantities are discrete rather than continuous. I compare this theory with the spontaneous collapse theories of Ghirardi, Rimini, Weber and Pearle and discuss some implications of the theory for a realist view of the quantum realm.

  8. Complexity analysis in particulate matter measurements

    Directory of Open Access Journals (Sweden)

    Luciano Telesca

    2011-09-01

    Full Text Available We investigated the complex temporal fluctuations of particulate matter data recorded in the London area by using the Fisher-Shannon (FS) information plane. In the FS plane the PM10 and PM2.5 data are aggregated in two different clusters, characterized by different degrees of order and organization. These results could be related to different sources of the particulate matter.

  9. Comparative Analysis of EEG Signals Based on Complexity Measure

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The aim of this study is to identify the functions and states of the brain according to the values of the complexity measure of EEG signals. EEG signals from 30 normal samples and 30 patient samples are collected. Based on preprocessing of the raw data, a computational program for the complexity measure is compiled and the complexity measures of all samples are calculated. The mean value and standard error of the complexity measure of the control group are 0.33 and 0.10, and those of the normal group are 0.53 an...

  10. Unraveling chaotic attractors by complex networks and measurements of stock market complexity

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Hongduo; Li, Ying, E-mail: mnsliy@mail.sysu.edu.cn [Business School, Sun Yat-Sen University, Guangzhou 510275 (China)

    2014-03-15

    We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel–Ziv complexity), and is an appropriate measure of a series' complexity. The proposed method is used to research the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at a level of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process.

  11. Unraveling chaotic attractors by complex networks and measurements of stock market complexity.

    Science.gov (United States)

    Cao, Hongduo; Li, Ying

    2014-03-01

    We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel-Ziv complexity), and is an appropriate measure of a series' complexity. The proposed method is used to research the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at a level of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process.

  13. On the complexity of computing two nonlinearity measures

    DEFF Research Database (Denmark)

    Find, Magnus Gausdal

    2014-01-01

    We study the computational complexity of two Boolean nonlinearity measures: the nonlinearity and the multiplicative complexity. We show that if one-way functions exist, no algorithm can compute the multiplicative complexity in time 2^O(n) given the truth table of length 2^n; in fact, under the same...
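
    For context, the first of the two measures has a standard fast algorithm (the hardness result above concerns multiplicative complexity): nonlinearity is read off the Walsh-Hadamard spectrum of the truth table as nl(f) = 2^(n-1) - max|W_f|/2. A sketch on a 3-variable example:

    ```python
    # Nonlinearity of a Boolean function from its 2^n-entry truth table.
    def nonlinearity(truth_table: list) -> int:
        n2 = len(truth_table)                  # 2^n entries
        w = [1 - 2 * b for b in truth_table]   # map bits 0/1 to +1/-1
        step = 1
        while step < n2:                       # in-place fast Walsh-Hadamard transform
            for i in range(0, n2, 2 * step):
                for j in range(i, i + step):
                    w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
            step *= 2
        return n2 // 2 - max(abs(x) for x in w) // 2

    # Majority on 3 bits has nonlinearity 2:
    print(nonlinearity([0, 0, 0, 1, 0, 1, 1, 1]))
    ```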

  14. Cognitive Agility Measurement in a Complex Environment

    Science.gov (United States)

    2017-04-11

    The report describes an experiment using psychological tests and a military decision computer game called Make Goal to attempt to measure cognitive agility in military leaders...

  15. A complexity measure for symbolic sequences and applications to DNA

    CERN Document Server

    Majtey, Ana P.; Roman-Roldan, Ramon; Lamberti, Pedro W.

    2006-01-01

    We introduce a complexity measure for symbolic sequences. Starting from a segmentation procedure for the sequence, we define its complexity as the entropy of the distribution of lengths of the domains of relatively uniform composition into which the sequence is decomposed. We show that this quantity verifies the properties usually required of a “good” complexity measure. In particular, it satisfies the one-hump property, is super-additive, and has the important property of being dependent on the level of detail at which the sequence is analyzed. Finally, we apply it to the evaluation of the complexity profile of some genetic sequences.
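
    A stripped-down version of the definition above, assuming the simplest possible segmentation (maximal runs of identical symbols) in place of the paper's compositional-domain segmentation:

    ```python
    # Entropy of the domain-length distribution of a symbolic sequence.
    import math
    from collections import Counter
    from itertools import groupby

    def domain_length_entropy(seq: str) -> float:
        lengths = [len(list(g)) for _, g in groupby(seq)]  # run lengths
        n = len(lengths)
        counts = Counter(lengths)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    print(domain_length_entropy("AAAATTTGGAACCCCT"))
    ```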

  16. Clinical complexity in medicine: A measurement model of task and patient complexity

    Science.gov (United States)

    Islam, R.; Weir, C.; Fiol, G. Del

    2016-01-01

    Summary Background Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods Three clinical Infectious Disease teams were observed, audio-recorded, and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. Results The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626

  17. Laser beam complex amplitude measurement by phase diversity.

    Science.gov (United States)

    Védrenne, Nicolas; Mugnier, Laurent M; Michau, Vincent; Velluet, Marie-Thérèse; Bierent, Rudolph

    2014-02-24

    The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here is an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named camelot, for Complex Amplitude MEasurement by a Likelihood Optimization Tool, it relies on the acquisition and processing of a few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by the minimization of a maximum a posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile, and the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

  18. Complexity Measurement of Large-Scale Software System Based on Complex Network

    Directory of Open Access Journals (Sweden)

    Dali Li

    2014-05-01

    Full Text Available With the increase of software system complexity, traditional measurements can no longer meet the requirements: developers need to control software quality effectively and guarantee the normal operation of the software system. How to measure the complexity of a large-scale software system has therefore become a challenging problem. To solve it, developers first need a good method for measuring the complexity of the software system; only then can the software quality and structure be controlled and optimized. Noting that complex network theory offers a new theoretical understanding and a new perspective on this kind of complexity problem, this work discusses the complexity phenomenon in large-scale software systems. On this basis, several complexity measurements of large-scale software systems are put forward from static-structure and dynamic-structure perspectives. Furthermore, we find some potential complexity characteristics in large-scale software networks through numerical simulations. The proposed measurement methods have guiding significance for the development of today's large-scale software systems. In addition, this paper presents a new technique for the structural complexity measurement of large-scale software systems.

  19. A Complexity measure based on Requirement Engineering Document

    CERN Document Server

    Sharma, Ashish

    2010-01-01

    Research shows that the major issue in the development of quality software is precise estimation. This estimation depends upon the degree of intricacy inherent in the software, i.e., its complexity. This paper attempts to empirically demonstrate a proposed complexity measure which is based on the IEEE Requirement Engineering document. A high-quality SRS is a prerequisite for high-quality software. The Requirement Engineering document (SRS) is a specification for a particular software product, program, or set of programs that performs certain functions in a specific environment. The various complexity measures given so far are based on code and cognitive metrics values of software, i.e., they are code based, so these metrics provide no leverage to the developer of the code. Considering the shortcomings of code-based approaches, the proposed approach identifies the complexity of software immediately after the requirements are frozen in the SDLC process. The proposed complexity measure compares well with established complexity...

  20. The generalization complexity measure for continuous input data.

    Science.gov (United States)

    Gómez, Iván; Cannas, Sergio A; Osenda, Omar; Jerez, José M; Franco, Leonardo

    2014-01-01

    We introduce in this work an extension of the generalization complexity measure to continuous input data. The measure, originally defined in Boolean space, quantifies the complexity of data in relation to the prediction accuracy that can be expected when using a supervised classifier like a neural network, SVM, and so forth. We first extend the original measure for use with continuous functions, and then, using an approach based on the set of Walsh functions, consider the case of a finite number of data points (input/output pairs), which is the usual practical case. Using a set of trigonometric functions, a model is constructed that relates the size of the hidden layer of a neural network to the complexity. Finally, we demonstrate the application of the introduced complexity measure, by using the generated model, to the problem of estimating an adequate neural network architecture for real-world data sets.

  1. A Z_m Ham Sandwich Theorem for Complex Measures

    CERN Document Server

    Simon, Steven

    2010-01-01

    A "ham sandwich" theorem is derived for n complex measures on C^n. For each integer m >= 2, a complex hyperplane H and m corresponding "1/m" sectors are found, satisfying the condition that the "rotational average" of the measures of these sectors is zero. The result can be seen as a kind of rotational equipartition of each measure by a regular "m-fan". The proof of the theorem is topological, based on a Z_m version of the Borsuk-Ulam theorem for punctured (2n - 1)-spheres in C^n. Any finite real measure on R^{2n} can be seen as a type of complex measure on C^n, so the theorem can be applied to n finite measures on R^{2n}. In the case that m = 3, this shows the existence of a regular 3-fan which trisects each of the n measures. Similarly, any pair of finite real measures on R^{2n} can be naturally identified with a complex measure on C^n, so the theorem applies to 2n finite measures on R^{2n}. This yields a rotational condition on pairs of finite measures, and the original ham sandwich theorem for R^{2n} is r...

  2. Complexity and Chaos - State-of-the-Art; Formulations and Measures of Complexity

    Science.gov (United States)

    2007-09-01

    The report surveys formulations of complexity in terms of interpretation and description, that is, the complexity of encoding a realisation into a descriptive code and decoding it back into a realisation. Cited reference: Edmonds, Bruce, 1999. Syntactic Measures of Complexity. PhD Thesis, University of Manchester.

  3. Confidence bounds of recurrence-based complexity measures

    Energy Technology Data Exchange (ETDEWEB)

    Schinkel, Stefan [Interdisciplinary Centre for Dynamics of Complex Systems, University of Potsdam (Germany)], E-mail: schinkel@agnld.uni-potsdam.de; Marwan, N. [Interdisciplinary Centre for Dynamics of Complex Systems, University of Potsdam (Germany); Potsdam Institute for Climate Impact Research (PIK) (Germany); Dimigen, O. [Department of Psychology, University of Potsdam (Germany); Kurths, J. [Potsdam Institute for Climate Impact Research (PIK) (Germany); Department of Physics, Humboldt University at Berlin (Germany)

    2009-06-15

    In the recent past, recurrence quantification analysis (RQA) has gained increasing interest in various research areas. The complexity measures RQA provides have been useful in describing and analysing a broad range of data, and the method is known to be rather robust to noise and nonstationarities. Yet one key question in empirical research concerns the confidence bounds of measured data. In the present Letter we suggest a method for estimating the confidence bounds of recurrence-based complexity measures. We study the applicability of the suggested method with model and real-life data.

  4. Complexity curve: a graphical measure of data complexity and classifier performance

    Directory of Open Access Journals (Sweden)

    Julian Zubek

    2016-08-01

    Full Text Available We describe a method for assessing data set complexity based on the estimation of the underlying probability distribution and the Hellinger distance. In contrast to some popular complexity measures, it is not focused on the shape of a decision boundary in a classification task but on the amount of available data with respect to the attribute structure. Complexity is expressed in terms of a graphical plot, which we call a complexity curve. It demonstrates the relative increase of available information with the growth of sample size. We perform a theoretical and experimental examination of the properties of the introduced complexity measure and show its relation to the variance component of classification error. We then compare it with popular data complexity measures on 81 diverse data sets and show that it can contribute to explaining the performance of specific classifiers on these sets. We also apply our methodology to a panel of simple benchmark data sets, demonstrating how it can be used in practice to gain insights into data characteristics. Moreover, we show that the complexity curve is an effective tool for reducing the size of the training set (data pruning), allowing the learning process to be sped up significantly without compromising classification accuracy. The associated code is available at: https://github.com/zubekj/complexity_curve (open-source Python implementation).
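
    A rough Python sketch of the idea: for random subsets of growing size, compare the subset's attribute distributions against those of the full data set via the Hellinger distance. Treating attributes as independent one-dimensional histograms is a simplification of this sketch, not the paper's joint-distribution estimator:

      import numpy as np

      def hellinger(p, q):
          # Hellinger distance between two discrete distributions.
          return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

      def complexity_curve(X, sizes, bins=10, seed=0):
          # How well random subsets of growing size reproduce the
          # marginal attribute distributions of X.
          rng = np.random.default_rng(seed)
          edges = [np.histogram_bin_edges(col, bins) for col in X.T]
          full = [np.histogram(col, e)[0] / len(col)
                  for col, e in zip(X.T, edges)]
          curve = []
          for s in sizes:
              sub = X[rng.choice(len(X), size=s, replace=False)]
              dists = [hellinger(np.histogram(c, e)[0] / s, f)
                       for c, e, f in zip(sub.T, edges, full)]
              curve.append(np.mean(dists))
          return curve

      X = np.random.default_rng(1).normal(size=(1000, 3))
      print(complexity_curve(X, sizes=[10, 50, 200, 800]))  # should decrease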

  5. Spatio-ecological complexity measures in GRASS GIS

    Science.gov (United States)

    Rocchini, Duccio; Petras, Vaclav; Petrasova, Anna; Chemin, Yann; Ricotta, Carlo; Frigeri, Alessandro; Landa, Martin; Marcantonio, Matteo; Bastin, Lucy; Metz, Markus; Delucchi, Luca; Neteler, Markus

    2017-07-01

    Good estimates of ecosystem complexity are essential for a number of ecological tasks: from biodiversity estimation, to forest structure variable retrieval, to feature extraction by edge detection and the generation of multifractal surfaces as neutral models for, e.g., feature change assessment. Hence, measuring ecological complexity over space becomes crucial in macroecology and geography. Many geospatial tools have been advocated in spatial ecology to estimate ecosystem complexity and its changes over space and time. Among these tools, free and open source options especially offer opportunities to guarantee the robustness of algorithms and reproducibility. In this paper we summarize the most straightforward measures of spatial complexity available in the Free and Open Source Software GRASS GIS, relating them to key ecological patterns and processes.

  6. Measuring logic complexity can guide pattern discovery in empirical systems

    CERN Document Server

    Gherardi, Marco

    2016-01-01

    We explore a definition of complexity based on logic functions, which are widely used as compact descriptions of rules in diverse fields of contemporary science. Detailed numerical analysis shows that (i) logic complexity is effective in discriminating between classes of functions commonly employed in modelling contexts; (ii) it extends the notion of canalisation, used in the study of genetic regulation, to a more general and detailed measure; (iii) it is tightly linked to the resilience of a function's output to noise affecting its inputs. We demonstrate its utility by measuring it in empirical data on gene regulation, digital circuitry, and propositional calculus. Logic complexity is exceptionally low in these systems. The asymmetry between "on" and "off" states in the data correlates with the complexity in a non-null way; a model of random Boolean networks clarifies this trend and indicates a common hierarchical architecture in the three systems.

  7. Objective measurement of chronic pain by a complex concentration test

    OpenAIRE

    Berg, Anja; Oster, Karen; Janig, Herbert; Likar, Rudolf; Pipam, Wolfgang; Scholz, Anja; Westhoff, Karl

    2009-01-01

    Higher intensity of chronic pain occurs together with the subjective experience of impaired concentration. With a complex test of concentration two facets of concentrated work can be measured reliably and validly: speed of concentrated work and percentage of concentration errors. Two studies were conducted to test whether the Complex-Concentration-Test is suitable for assessing the cognitive deficit caused by chronic pain. In Study I, 60 chronic pain patients in Germany, and in Study II, 86 p...

  8. Riemannian-geometric entropy for measuring network complexity

    Science.gov (United States)

    Franzosi, Roberto; Felice, Domenico; Mancini, Stefano; Pettini, Marco

    2016-06-01

    A central issue in the science of complex systems is the quantitative characterization of complexity. In the present work we address this issue by resorting to information geometry. We propose a constructive way to associate with any network, in principle, a differentiable object (a Riemannian manifold) whose volume is used to define the entropy. The effectiveness of the latter in measuring network complexity is successfully proved through its capability of detecting a classical phase transition occurring in both random graphs and scale-free networks, as well as of characterizing small exponential random graphs, configuration models, and real networks.

  9. Complex Squeezing and Force Measurement Beyond the Standard Quantum Limit.

    Science.gov (United States)

    Buchmann, L F; Schreppler, S; Kohler, J; Spethmann, N; Stamper-Kurn, D M

    2016-07-15

    A continuous quantum field, such as a propagating beam of light, may be characterized by a squeezing spectrum that is inhomogeneous in frequency. We point out that homodyne detectors, which are commonly employed to detect quantum squeezing, are blind to squeezing spectra in which the correlation between amplitude and phase fluctuations is complex. We find theoretically that such complex squeezing is a component of ponderomotive squeezing of light through cavity optomechanics. We propose a detection scheme called synodyne detection, which reveals complex squeezing and allows the accounting of measurement backaction. Even with the optomechanical system subject to continuous measurement, such detection allows the measurement of one component of an external force with sensitivity only limited by the mechanical oscillator's thermal occupation.

  10. A complex network-based importance measure for mechatronics systems

    Science.gov (United States)

    Wang, Yanhui; Bi, Lifeng; Lin, Shuai; Li, Man; Shi, Hao

    2017-01-01

    In view of the negative impact of functional dependency, this paper attempts to provide an alternative importance measure called Improved PageRank (IPR) for measuring the importance of components in mechatronic systems. IPR is a meaningful extension of the centrality measures in complex networks that considers the usage reliability of components and the functional dependency between components to increase the usefulness of importance measures. Our work makes two important contributions. First, this paper integrates the literature on mechatronic architecture and complex network theory to define the component network. Second, based on the notion of the component network, a meaningful IPR is brought into the identification of important components. In addition, the IPR component importance measure, and an algorithm to perform stochastic ordering of components owing to the time-varying nature of component usage reliability and functional dependency, are illustrated with a component network of a bogie system consisting of 27 components.

  11. A Measure of Learning Model Complexity by VC Dimension

    Institute of Scientific and Technical Information of China (English)

    WANG Wen-jian; ZHANG Li-xia; XU Zong-ben

    2002-01-01

    When developing models there is always a trade-off between model complexity and model fit. In this paper, a measure of learning model complexity based on VC dimension is presented, and some relevant mathematical theory surrounding the derivation and use of this metric is summarized. The measure allows modelers to control the amount of error that is returned from a modeling system and to state upper bounds on the amount of error that the modeling system will return on all future, as yet unseen and uncollected data sets. It is possible for modelers to use the VC theory to determine which type of model more accurately represents a system.
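
    The kind of upper bound referred to here is the classical VC generalization bound. One standard form (constants differ between textbook formulations) states that, with probability at least 1 - delta over a sample of size n, every hypothesis h in a class of VC dimension d satisfies

      R(h) \le \hat{R}_n(h) + \sqrt{ \frac{ d\left( \ln(2n/d) + 1 \right) + \ln(4/\delta) }{ n } }

    where R is the true risk and \hat{R}_n the empirical risk on the sample.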

  12. Intensive statistical complexity measure of pseudorandom number generators

    Science.gov (United States)

    Larrondo, H. A.; González, C. M.; Martín, M. T.; Plastino, A.; Rosso, O. A.

    2005-10-01

    A statistical complexity measure has recently been proposed to quantify the performance of chaotic pseudorandom number generators (PRNGs) (Physica A 354 (2005) 281). Here we revisit this quantifier and introduce two important improvements: (i) consideration of an intensive statistical complexity (Physica A 334 (2004) 119), and (ii) following the prescription of Bandt and Pompe (Phys. Rev. Lett. 88 (2002) 174102) in evaluating the probability distribution associated with the PRNG. The ensuing new measure is applied to a very well-tested PRNG advanced by Marsaglia.
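
    A sketch of the two ingredients named above: Bandt-Pompe ordinal-pattern probabilities and the intensive statistical complexity C = Q_J * H_S, with H_S the normalised Shannon entropy and Q_J the normalised Jensen-Shannon distance to the uniform distribution. The embedding order d = 4 and the test signal are arbitrary choices for illustration:

      import itertools, math
      from collections import Counter
      import numpy as np

      def ordinal_probs(x, d=4):
          # Bandt-Pompe probabilities over all d! ordinal patterns.
          counts = Counter(tuple(np.argsort(x[i:i + d]))
                           for i in range(len(x) - d + 1))
          total = sum(counts.values())
          return np.array([counts.get(p, 0) / total
                           for p in itertools.permutations(range(d))])

      def shannon(p):
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def mpr_complexity(p):
          # Intensive statistical complexity C = Q_J * H_S.
          n = len(p)
          h = shannon(p) / math.log(n)              # normalised entropy H_S
          u = np.full(n, 1.0 / n)
          js = shannon((p + u) / 2) - shannon(p) / 2 - math.log(n) / 2
          js_max = -0.5 * ((n + 1) / n * math.log(n + 1)
                           - 2 * math.log(2 * n) + math.log(n))
          return (js / js_max) * h                  # Q_J * H_S

      x = np.random.default_rng(0).random(10000)    # stand-in for PRNG output
      print(mpr_complexity(ordinal_probs(x, d=4)))  # close to 0 for a good PRNG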

  13. Sensitivity of 2-D complex resistivity measurements to subsurface anisotropy

    Science.gov (United States)

    Kenkel, J.; Kemna, A.

    2017-02-01

    In general, the complex electrical resistivity in the subsurface is anisotropic. Despite this, algorithms for the tomographic inversion of complex resistivity data commonly assume isotropy, mainly due to the lack of anisotropic modelling and inversion schemes, potentially leading to artefacts in the inversion results in the presence of anisotropy. The development of an effective anisotropic complex resistivity inversion algorithm which utilizes the gradient information of some cost function benefits from understanding the characteristics of the problem's sensitivities, that is, the partial derivative of the impedance forward response with respect to the complex conductivities in the different spatial directions, as well as with respect to the different ratios of complex conductivities, that is, the different anisotropy ratios. We here derive expressions for these sensitivities and, based on a 2.5-D finite-element modelling algorithm, we compute and discuss sensitivity distributions as well as measurement response curves of typical surface and cross-borehole measurement configurations for 2-D subsurface anisotropic complex resistivity distributions. Depending on the electrode layout and measurement configuration, the sensitivity with respect to the conductivity in a particular direction shows a unique pattern, while for other directions sensitivity patterns are qualitatively similar. These sensitivity characteristics translate into important equivalences between impedance responses of local anisotropic and isotropic anomalies, for both magnitude and phase. Accordingly, with collinear surface arrays only the complex conductivity in the direction of the electrode layout can be unambiguously resolved, and with cross-borehole arrays only the conductivity in the vertical direction, provided an in-hole current injection is used. Nevertheless, anisotropy ratios involving these resolvable conductivity components are likewise detectable. The distinct shape of the measurement

  14. A new complexity measure for time series analysis and classification

    Science.gov (United States)

    Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth

    2013-07-01

    Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition, and authorship attribution. Different complexity measures proposed in the literature, like Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even with relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
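
    The definition lends itself to a compact sketch: repeatedly substitute the most frequent adjacent symbol pair with a new symbol and count the iterations until the sequence becomes constant. The strict NSRPS rule counts non-overlapping pair occurrences; the simple adjacent-pair count below is an approximation:

      from collections import Counter

      def etc(seq):
          # Effort To Compress: NSRPS-style iterations until the
          # sequence collapses to a single repeated symbol.
          seq = list(seq)
          steps = 0
          while len(set(seq)) > 1:
              target = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
              new_symbol = max(seq) + 1     # assumes integer symbols
              out, i = [], 0
              while i < len(seq):
                  if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                      out.append(new_symbol)
                      i += 2
                  else:
                      out.append(seq[i])
                      i += 1
              seq = out
              steps += 1
          return steps

      print(etc([0, 1, 0, 1, 0, 1, 0, 1]))   # regular: low effort
      print(etc([0, 1, 1, 0, 0, 0, 1, 0]))   # less structured: higher effort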

  15. The Generalization Complexity Measure for Continuous Input Data

    Directory of Open Access Journals (Sweden)

    Iván Gómez

    2014-01-01

    defined in Boolean space, quantifies the complexity of data in relationship to the prediction accuracy that can be expected when using a supervised classifier like a neural network, SVM, and so forth. We first extend the original measure for its use with continuous functions to later on, using an approach based on the use of the set of Walsh functions, consider the case of having a finite number of data points (inputs/outputs pairs, that is, usually the practical case. Using a set of trigonometric functions a model that gives a relationship between the size of the hidden layer of a neural network and the complexity is constructed. Finally, we demonstrate the application of the introduced complexity measure, by using the generated model, to the problem of estimating an adequate neural network architecture for real-world data sets.

  16. Measuring the Complexity of Self-Organizing Traffic Lights

    Directory of Open Access Journals (Sweden)

    Darío Zubillaga

    2014-04-01

    Full Text Available We apply measures of complexity, emergence, and self-organization to an urban traffic model for comparing a traditional traffic-light coordination method with a self-organizing method in two scenarios: cyclic boundaries and non-orientable boundaries. We show that the measures are useful to identify and characterize different dynamical phases. It becomes clear that different operation regimes are required for different traffic demands. Thus, not only is traffic a non-stationary problem, requiring controllers to adapt constantly; controllers must also change drastically the complexity of their behavior depending on the demand. Based on our measures and extending Ashby’s law of requisite variety, we can say that the self-organizing method achieves an adaptability level comparable to that of a living system.

  17. Optimizing complexity measures for FMRI data: algorithm, artifact, and sensitivity.

    Directory of Open Access Journals (Sweden)

    Denis Rubin

    Full Text Available INTRODUCTION: Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. METHODS: Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner, using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi's estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. RESULTS: Power-spectrum, Higuchi's fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. CONCLUSIONS: Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG or EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and
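
    Of the better-performing estimators listed, the power-spectrum method is the simplest to sketch: fit the log-log slope of the periodogram and convert it to a Hurst exponent. The fGn convention H = (beta + 1)/2 used below is an assumption; for fBm-like signals the conversion is instead H = (beta - 1)/2:

      import numpy as np

      def hurst_from_spectrum(x, fs=1.0):
          # Fit PSD ~ f^(-beta) on log-log axes, then map beta to H
          # under the fractional-Gaussian-noise convention.
          x = np.asarray(x, dtype=float) - np.mean(x)
          psd = np.abs(np.fft.rfft(x)) ** 2
          f = np.fft.rfftfreq(len(x), d=1 / fs)
          mask = f > 0
          slope, _ = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
          beta = -slope
          return (beta + 1) / 2

      # White noise should come out near H = 0.5 under this convention.
      print(hurst_from_spectrum(np.random.default_rng(2).normal(size=4096)))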

  18. Network Complexity Measures. An Information-Theoretic Approach.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    2015-04-01

    Full Text Available Quantitative graph analysis by using structural indices has been intricate in the sense that it often remains unclear which structural graph measure is the most suitable one, see [1, 12, 13]. In general, quantitative graph analysis deals with quantifying structural information of networks by using a measurement approach [5]. A special problem thereof is to characterize a graph quantitatively, that is, to determine a measure that captures structural features of a network meaningfully. Various classical structural graph measures have been used to tackle this problem [13]. A fruitful approach using information-theoretic [21] and statistical methods is to quantify the structural information content of a graph [1, 8, 18]. In this note, we sketch some classical information measures. We also briefly address the question of what kind of measures capture structural information uniquely. This relates to determining the discrimination power (also called uniqueness) of a graph measure, that is, the ability of the measure to discriminate non-isomorphic graphs structurally. [1] D. Bonchev. Information Theoretic Indices for Characterization of Chemical Structures. Research Studies Press, Chichester, 1983. [5] M. Dehmer and F. Emmert-Streib. Quantitative Graph Theory. Theory and Applications. CRC Press, 2014. [8] M. Dehmer, M. Grabner, and K. Varmuza. Information indices with high discriminative power for graphs. PLoS ONE, 7:e31214, 2012. [12] F. Emmert-Streib and M. Dehmer. Exploring statistical and population aspects of network complexity. PLoS ONE, 7:e34523, 2012. [13] F. Harary. Graph Theory. Addison Wesley Publishing Company, Reading, MA, USA, 1969. [18] A. Mowshowitz. Entropy and the complexity of graphs I: An index of the relative complexity of a graph. Bull. Math. Biophys., 30:175-204, 1968. [21] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

  19. A Method for Measuring the Structure Complexity of Web Application

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Precise and effective measurement results for Web applications not only facilitate good comprehension of them, but also benefit the macro-management of software activities such as testing, reverse engineering, and reuse. This paper presents research on measuring the structure complexity of Web applications. Through a deep analysis of the configuration and object interactions of a Web system, two conclusions are drawn: ① a generic Web application consists of static web pages, dynamic pages, components and database objects; ② the main interactions have only three styles, namely static links, dynamic links and call/return relations. Based on analysis and modeling of the content of a Web page (static or dynamic), complexity measurement methods for both the control logic of scripts and the nesting of HTML code are further discussed. In addition, two methods for measuring the complexity of inter-page navigation are addressed by modeling the inter-page navigation behaviors of a Web application via a WNG graph.

  20. Measures for track complexity and robustness of operation at stations

    DEFF Research Database (Denmark)

    Landex, Alex; Jensen, Lars Wittrup

    2013-01-01

    Stations are often limiting the capacity of a railway network. However, most capacity analysis methods focus on open line capacity. This paper presents methods to analyse and describe stations by the use of complexity and robustness measures. Five methods to analyse infrastructure and operation at stations are developed in the paper. The first method is an adapted UIC 406 capacity method that can be used to analyse switch zones and platform tracks at stations with simple track layouts. The second method examines the need for platform tracks and the probability that arriving trains will not get a platform track immediately at arrival. The third method is a scalable method that analyses the conflicts and the infrastructure complexity in the switch zone(s). The fourth method can be used to examine the complexity and the expected robustness of timetables at a station. The last method...

  1. An application of a measure for organization of complex networks

    Science.gov (United States)

    Georgiev, Georgi; Daly, Michael

    2013-03-01

    In order to measure self-organization in complex networks, a quantitative measure for organization is necessary. This allows us to measure their degree of organization and rate of self-organization. As a measure for the quantity of organization we apply the inverse of the average sum of physical actions of all elements in a system per unit motion, multiplied by Planck's constant, using the principle of least action. The meaning of quantity of organization here is the inverse of the average number of quanta of action per node crossing of an element of the system. We apply this measure to the central processing unit (CPU) of computers. The organization for several generations of CPUs shows a double exponential rate of change with time. The exact functional dependence has an S-shaped structure, suggesting some of the mechanisms of self-organization. We also study the dependence of organization on the number of transistors. This method helps us explain the mechanism of increase of organization through quantity accumulation and through constraint and curvature minimization with an attractor: the least average sum of actions of all elements over all motions. This approach can help to describe, quantify, measure, manage, design and predict the future behavior of complex systems, to achieve the highest rates of self-organization and to improve their quality.

  2. Statistical analysis of complex systems with nonclassical invariant measures

    KAUST Repository

    Fratalocchi, Andrea

    2011-02-28

    I investigate the problem of finding a statistical description of a complex many-body system whose invariant measure cannot be constructed stemming from classical thermodynamics ensembles. By taking solitons as a reference system and by employing a general formalism based on the Ablowitz-Kaup-Newell-Segur scheme, I demonstrate how to build an invariant measure and, within a one-dimensional phase space, how to develop a suitable thermodynamics. A detailed example is provided with a universal model of wave propagation, with reference to a transparent potential sustaining gray solitons. The system shows a rich thermodynamic scenario, with a free-energy landscape supporting phase transitions and controllable emergent properties. I finally discuss the origin of such behavior, trying to identify common denominators in the area of complex dynamics.

  3. Reliability of surface EMG measurements from the suprahyoid muscle complex

    DEFF Research Database (Denmark)

    Kothari, Mohit; Stubbs, Peter William; Pedersen, Asger Roer

    2017-01-01

    Background: Assessment of swallowing musculature using motor evoked potentials (MEPs) can be used to evaluate neural pathways. However, recording of the swallowing musculature is often invasive, uncomfortable and unrealistic in normal clinical practice. Objective: To investigate the possibility of using surface electromyography (sEMG) over the suprahyoid muscle complex (SMC) to assess changes to neural pathways, by determining the reliability of measurements in healthy participants over days. Methods: Seventeen healthy participants were recruited; measurements were performed twice with one week between sessions. Results: Measurements were reliable for only ≈50% of participants. Although using sEMG to assess swallowing musculature function is easier to perform clinically and more comfortable for patients than invasive measures, as the measurement of muscle activity using TMS is unreliable, the use of sEMG for this muscle group is not recommended...

  4. Measurements of complex refractive indices of photoactive yellow protein

    CERN Document Server

    Lee, KyeoReh; Jung, JaeHwang; Ihee, Hyotcherl; Park, YongKeun

    2015-01-01

    A novel optical technique for measuring the complex refractive index (CRI) of photoactive proteins over a wide range of visible wavelengths is presented. Employing quantitative phase microscopy equipped with a wavelength-swept source, the optical fields transmitted through a solution of photoactive proteins were precisely measured, from which the CRIs of the photoactive proteins were retrieved with the Fourier light scattering technique. Using the present method, both the real and imaginary RIs of a photoactive yellow protein (PYP) solution were precisely measured over a broad wavelength range (461-582 nm). The internal populations of the ground and excited states were switched by blue light excitation (445 nm center wavelength), and the broadband refractive index increments of each state were measured. The significant CRI deviation between the presence and absence of the blue excitation was quantified and explained based on the Kramers-Kronig relations.

  5. OPEN PUBLIC SPACE ATTRIBUTES AND CATEGORIES – COMPLEXITY AND MEASURABILITY

    Directory of Open Access Journals (Sweden)

    Ljiljana Čavić

    2014-12-01

    Full Text Available Within the field of architectural and urban research, this work addresses the complexity of contemporary public space, both in a conceptual and a concrete sense. It aims at systematizing spatial attributes and their categories and at discussing spatial complexity and measurability, all in order to reach a more comprehensive understanding, description and analysis of public space. Our aim is to improve the everyday usage of open public space, and we acknowledge users as its crucial factor. There are numerous investigations of the complex urban and architectural reality of public space that recognise the importance of users. However, we did not find any that would holistically account for what users find essential in public space. Given the incompleteness of existing approaches to open public space and the importance of users for its success, this paper proposes a user-orientated approach. Through an initial survey directed at users, we collected the most important aspects of public spaces as contemporary humans see them. The gathered data are analysed and coded into spatial attributes, whose role in the complexity of open public space, and whose measurability, are then discussed. The work results in an inventory of attributes that users find salient in public spaces. It does not discuss their qualitative values or their contribution to generating spatial realities; it aims to define them clearly, so that any further logical argumentation on open space concerning users may be solidly constructed. Finally, through a categorisation of attributes, it proposes the disciplinary levels necessary for the analysis of complex urban-architectural reality.

  6. Novel measures based on the Kolmogorov complexity for use in complex system behavior studies and time series analysis

    CERN Document Server

    Mihailovic, Dragutin T; Nikolic-Djoric, Emilija; Arsenic, Ilija

    2013-01-01

    We propose novel measures based on the Kolmogorov complexity for use in complex system behavior studies and time series analysis. We consider the background of the Kolmogorov complexity and discuss the meaning of physical as well as other complexities. To get better insight into the complexity of complex systems and time series analysis we introduce three novel measures based on the Kolmogorov complexity: (i) the Kolmogorov complexity spectrum, (ii) the Kolmogorov complexity spectrum highest value and (iii) the overall Kolmogorov complexity. The characteristics of these measures have been tested using a generalized logistic equation. Finally, the proposed measures have been applied to different time series originating from: model output (the biochemical substance exchange in a multi-cell system), four different geophysical phenomena (dynamics of river flow, long-term precipitation, indoor 222Rn concentration and UV radiation dose) and the economy (stock price dynamics). Re...

  7. Novel measures based on the Kolmogorov complexity for use in complex system behavior studies and time series analysis

    Directory of Open Access Journals (Sweden)

    Mihailović Dragutin T.

    2015-01-01

    Full Text Available We propose novel metrics based on the Kolmogorov complexity for use in complex system behavior studies and time series analysis. We consider the origins of the Kolmogorov complexity and discuss its physical meaning. To get better insights into the nature of complex systems and time series analysis we introduce three novel measures based on the Kolmogorov complexity: (i) the Kolmogorov complexity spectrum, (ii) the Kolmogorov complexity spectrum highest value and (iii) the overall Kolmogorov complexity. The characteristics of these measures have been tested using a generalized logistic equation. Finally, the proposed measures have been applied to different time series originating from: a model output (the biochemical substance exchange in a multi-cell system), four different geophysical phenomena (dynamics of: river flow, long term precipitation, indoor 222Rn concentration and UV radiation dose) and the economy (stock price dynamics). The results obtained offer deeper insights into the complexity of system dynamics and time series analysis with the proposed complexity measures.
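
    A compact sketch of the first of these measures: binarise the series at each amplitude threshold and evaluate the normalised Lempel-Ziv complexity of the resulting binary string, LZ complexity being the usual computable stand-in for the uncomputable Kolmogorov complexity. The mean printed at the end is one simple reading of the "overall" value, not necessarily the paper's exact definition:

      import numpy as np

      def lz_complexity(s):
          # Lempel-Ziv (1976) complexity, Kaspar-Schuster algorithm.
          n = len(s)
          i, k, l = 0, 1, 1
          k_max, c = 1, 1
          while True:
              if s[i + k - 1] == s[l + k - 1]:
                  k += 1
                  if l + k > n:
                      c += 1
                      break
              else:
                  k_max = max(k, k_max)
                  i += 1
                  if i == l:
                      c += 1
                      l += k_max
                      if l + 1 > n:
                          break
                      i, k, k_max = 0, 1, 1
                  else:
                      k = 1
          return c

      def kc_spectrum(x):
          # Normalised LZ complexity of the series binarised at each
          # amplitude value (the Kolmogorov complexity spectrum).
          x = np.asarray(x, dtype=float)
          norm = len(x) / np.log2(len(x))   # LZ level of a random string
          return np.array([lz_complexity(''.join('1' if v >= t else '0'
                                                 for v in x)) / norm
                           for t in np.sort(x)])

      spec = kc_spectrum(np.random.default_rng(3).random(200))
      print(spec.max())    # the spectrum highest value (KLM)
      print(spec.mean())   # a simple "overall" summary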

  8. Compositional segmentation and complexity measurement in stock indices

    Science.gov (United States)

    Wang, Haifeng; Shang, Pengjian; Xia, Jianan

    2016-01-01

    In this paper, we introduce a complexity measure based on entropic segmentation, called sequence compositional complexity (SCC), into the analysis of financial time series. SCC was first used to deal directly with the complex heterogeneity of nonstationary DNA sequences, where SCC was found to be higher in sequences with long-range correlation than in those with weak long-range correlation. Here, we apply this method to financial index data and find that the SCC values of some mature stock indices, such as the S&P 500 (abbreviated S&P below) and the HSI, tend to be lower than those of Chinese index data (such as the SSE). What is more, we find that if we classify the indices with the SCC method, the financial market of Hong Kong has more similarities with mature foreign markets than with Chinese ones. We therefore believe that a good correspondence exists between the SCC of an index sequence and the complexity of the market involved.

  9. On bias of kinetic temperature measurements in complex plasmas

    DEFF Research Database (Denmark)

    Kantor, M.; Moseev, D.; Salewski, Mirko

    2014-01-01

    The kinetic temperature in complex plasmas is often measured using particle tracking velocimetry. Here, we introduce a criterion which minimizes the probability of faulty tracking of particles with normally distributed random displacements in consecutive frames. Faulty particle tracking results in a measurement bias of the deduced velocity distribution function and hence the deduced kinetic temperature. For particles with a normal velocity distribution function, mistracking biases the obtained velocity distribution function towards small velocities at the expense of large velocities, i.e., the inferred velocity distribution is more peaked and its tail is less pronounced. The kinetic temperature is therefore systematically underestimated in measurements. We give a prescription to mitigate this type of error.

  10. Simultaneous Rheoelectric Measurements of Strongly Conductive Complex Fluids

    Science.gov (United States)

    Helal, Ahmed; Divoux, Thibaut; McKinley, Gareth H.

    2016-12-01

    We introduce a modular fixture designed for stress-controlled rheometers to perform simultaneous rheological and electrical measurements on strongly conductive complex fluids under shear. By means of a nontoxic liquid metal at room temperature, the electrical connection to the rotating shaft is completed with minimal additional mechanical friction, allowing simultaneous stress measurements at values as low as 1 Pa. Motivated by applications such as flow batteries, we use the capabilities of this design to perform an extensive set of rheoelectric experiments on gels formulated from attractive carbon-black particles, at concentrations ranging from 4 to 15 wt%. First, experiments on gels at rest prepared with different shear histories show a robust power-law scaling between the elastic modulus G0′ and the conductivity σ0 of the gels, i.e., G0′ ∼ σ0^α, with α = 1.65 ± 0.04, regardless of the gel concentration. Second, we report conductivity measurements performed simultaneously with creep experiments. Changes in conductivity in the early stage of the experiments, also known as the Andrade-creep regime, reveal for the first time that plastic events take place in the bulk, while the shear rate γ̇ decreases as a weak power law of time. The subsequent evolution of the conductivity and the shear rate allows us to propose a local yielding scenario that is in agreement with previous velocimetry measurements. Finally, to establish a set of benchmark data, we determine the constitutive rheological and electrical behavior of carbon-black gels. Corrections first introduced for mechanical measurements regarding shear inhomogeneity and wall slip are carefully extended to electrical measurements to accurately distinguish between bulk and surface contributions to the conductivity. As an illustrative example, we examine the constitutive rheoelectric properties of five different grades of carbon-black gels and we demonstrate the relevance of this rheoelectric apparatus as a

  11. Atmospheric stability and complex terrain: comparing measurements and CFD

    DEFF Research Database (Denmark)

    Koblitz, Tilman; Bechmann, Andreas; Berg, Jacob;

    2014-01-01

    For wind resource assessment, the wind industry is increasingly relying on Computational Fluid Dynamics models that focus on modeling the airflow in a neutrally stratified surface layer. So far, physical processes that are specific to the atmospheric boundary layer, for example the Coriolis force, ... non-neutral atmospheric flow over complex terrain including physical processes like stability and Coriolis force. We examine the influence of these effects on the whole atmospheric boundary layer using the DTU Wind Energy flow solver EllipSys3D. To validate the flow solver, measurements from Benakanahalli hill, a field experiment that took place in India in early 2010, are used. The experiment was specifically designed to address the combined effects of stability and Coriolis force over complex terrain, and provides a dataset to validate flow solvers. Including those effects into EllipSys3D significantly improves...

  12. Increment entropy as a measure of complexity for time series

    CERN Document Server

    Liu, Xiaofeng; Xu, Ning; Xue, Jianru

    2015-01-01

    Entropy has been a common index to quantify the complexity of time series in a variety of fields. Here, we introduce increment entropy to measure the complexity of time series in which each increment is mapped into a word of two letters, one letter corresponding to the direction and the other corresponding to the magnitude. The Shannon entropy of the words is termed the increment entropy (IncrEn). Simulations on synthetic data and tests on epileptic EEG signals have demonstrated its ability to detect abrupt changes, regardless of whether they are energetic (e.g. spikes or bursts) or structural. The computation of IncrEn makes no assumptions about the time series, and it can be applied to arbitrary real-world data.
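
    The construction admits a very short implementation: map each increment to a two-letter (sign, quantised magnitude) word and take the Shannon entropy of the word distribution. The magnitude quantiser below (a standard-deviation scale with 4 levels) is one simple choice, not necessarily the paper's:

      import numpy as np
      from collections import Counter

      def increment_entropy(x, resolution=4):
          # IncrEn: Shannon entropy of (sign, magnitude-level) words.
          inc = np.diff(np.asarray(x, dtype=float))
          signs = np.sign(inc).astype(int)
          scale = np.std(inc) or 1.0
          mags = np.minimum(np.abs(inc) / scale * resolution,
                            resolution).astype(int)
          words = Counter(zip(signs, mags))
          p = np.array(list(words.values())) / len(inc)
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(4)
      print(increment_entropy(rng.normal(size=1000)))             # noisy: higher
      print(increment_entropy(np.sin(np.linspace(0, 20, 1000))))  # regular: lower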

  13. Overcoming Problems in the Measurement of Biological Complexity

    CERN Document Server

    Cebrian, Manuel; Ortega, Alfonso

    2010-01-01

    In a genetic algorithm, fluctuations of the entropy of a genome over time are interpreted as fluctuations of the information that the genome's organism stores about its environment, this being reflected in more complex organisms. The computation of this entropy presents technical problems due to the small population sizes used in practice. In this work we propose and test an alternative way of measuring the entropy variation in a population by means of algorithmic information theory, where the entropy variation between two generational steps is the Kolmogorov complexity of the first step conditioned on the second one. As an example application of this technique, we report experimental differences in entropy evolution between systems in which sexual reproduction is present or absent.
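
    Kolmogorov complexity is uncomputable, so in practice it is approximated; a common compression-based surrogate for the conditional quantity, sketched below with zlib, is K(x|y) ≈ C(y + x) − C(y). The genome byte strings are hypothetical stand-ins for two generational steps:

      import zlib

      def c(data: bytes) -> int:
          # Compressed size as a computable proxy for Kolmogorov complexity.
          return len(zlib.compress(data, 9))

      def conditional_complexity(x: bytes, y: bytes) -> int:
          # Approximate K(x | y) by how much extra compressed length
          # x adds once y is already encoded: C(y + x) - C(y).
          return c(y + x) - c(y)

      # Hypothetical genomes of two consecutive generations.
      gen_t  = b"0110100110010110" * 32
      gen_t1 = b"0110100110010110" * 30 + b"0110100110011111" * 2
      other  = b"1010001011111000" * 32

      print(conditional_complexity(gen_t1, gen_t))  # small: mostly inherited
      print(conditional_complexity(other, gen_t))   # larger: little shared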

  14. Determination of complex microcalorimeter parameters with impedance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Saab, T. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)]. E-mail: tsaab@phys.ufl.edu; Bandler, S.R. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Chervenak, J. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Figueroa-Feliciano, E. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Finkbeiner, F. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Iyomoto, N. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Kelley, R.L. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Kilbourne, C.A. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Lindeman, M.A. [University of Wisconsin, Madison, WI 53706 (United States); Porter, F.S. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Sadleir, J. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2006-04-15

    The proper understanding and modeling of a microcalorimeter's response requires accurate knowledge of a handful of parameters, such as C, G, and α. While a few of these parameters are directly determined from the IV characteristics, some others, notably the heat capacity (C) and α, appear in degenerate combinations in most measurable quantities. The consideration of a complex microcalorimeter leads to an added ambiguity in the determination of the parameters. In general, the dependence of the microcalorimeter's complex impedance on these various parameters varies with frequency. This dependence allows us to determine individual parameters by fitting the prediction of the microcalorimeter model to impedance data. In this paper we describe efforts at characterizing the Goddard X-ray microcalorimeters. With the parameters determined by this method, we compare the pulse shape and noise spectra predictions to data taken with the same devices.

  15. Increment Entropy as a Measure of Complexity for Time Series

    Directory of Open Access Journals (Sweden)

    Xiaofeng Liu

    2016-01-01

    Full Text Available Entropy has been a common index to quantify the complexity of time series in a variety of fields. Here, we introduce an increment entropy to measure the complexity of time series in which each increment is mapped onto a word of two letters, one corresponding to the sign and the other corresponding to the magnitude. Increment entropy (IncrEn) is defined as the Shannon entropy of the words. Simulations on synthetic data and tests on epileptic electroencephalogram (EEG) signals demonstrate its ability to detect abrupt changes, regardless of energetic (e.g., spikes or bursts) or structural changes. The computation of IncrEn does not make any assumption on the time series, and it can be applied to arbitrary real-world data.

  16. A new measure of heterogeneity for complex networks

    CERN Document Server

    Jacob, Rinku; Misra, R; Ambika, G

    2016-01-01

    We propose a novel measure of heterogeneity for unweighted and undirected complex networks that can be derived from the degree distribution of the network instead of the degree sequences, as is done at present. We show that the proposed measure can be applied to all types of topology with ease and shows direct correlation with the diversity of node degrees in the network. The measure is mathematically well behaved and is normalised in the interval [0, 1]. The measure is applied to compute the heterogeneity of synthetic (both random and scale free) and real world networks. We specifically show that the heterogeneity of an evolving scale free network decreases as a power law with the size of the network N, implying a scale free character for the proposed measure. Finally, as a specific application, we show that the proposed measure can be used to compare the heterogeneity of recurrence networks constructed from the time series of several low dimensional chaotic attractors, thereby providing a single index to co...
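
    The record's exact index is defined from the degree distribution and is not reproduced above; for comparison, the sketch below computes Estrada's heterogeneity index, an established degree-based measure that is likewise normalised to [0, 1] (it equals 1 for a star graph), on the same kinds of synthetic networks:

      import math
      import networkx as nx

      def estrada_heterogeneity(g: nx.Graph) -> float:
          # rho = sum over edges of (k_u^(-1/2) - k_v^(-1/2))^2,
          # normalised by n - 2*sqrt(n - 1).
          n = g.number_of_nodes()
          s = sum((1 / math.sqrt(g.degree(u)) - 1 / math.sqrt(g.degree(v))) ** 2
                  for u, v in g.edges())
          return s / (n - 2 * math.sqrt(n - 1))

      print(estrada_heterogeneity(nx.erdos_renyi_graph(500, 0.02, seed=5)))   # low
      print(estrada_heterogeneity(nx.barabasi_albert_graph(500, 5, seed=5)))  # higher
      print(estrada_heterogeneity(nx.star_graph(499)))                        # 1.0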

  17. Viscosity measurement of Newtonian liquids using the complex reflection coefficient.

    Science.gov (United States)

    Franco, Ediguer E; Adamowski, Julio C; Higuti, Ricardo T; Buiochi, Flávio

    2008-10-01

    This work presents the implementation of the ultrasonic shear reflectance method for viscosity measurement of Newtonian liquids using wave mode conversion from longitudinal to shear waves and vice versa. The method is based on the measurement of the complex reflection coefficient (magnitude and phase) at a solid-liquid interface. The implemented measurement cell is composed of an ultrasonic transducer, a water buffer, an aluminum prism, a PMMA buffer rod, and a sample chamber. Viscosity measurements were made in the range from 1 to 3.5 MHz for olive oil and for automotive oils (SAE 40, 90, and 250) at 15 and 22.5 degrees C, respectively. Moreover, olive oil and corn oil measurements were conducted in the range from 15 to 30 degrees C at 3.5 and 2.25 MHz, respectively. The ultrasonic measurements, in the case of the less viscous liquids, agree with the results provided by a rotational viscometer, showing Newtonian behavior. In the case of the more viscous liquids, a significant difference was obtained, showing a clear non-Newtonian behavior that cannot be described by the Kelvin-Voigt model.
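
    Under the usual shear-reflectance model, the liquid's shear impedance follows from the measured complex reflection coefficient, and for a Newtonian liquid Z_l^2 = i*omega*rho*eta. The sketch below assumes one common sign convention for r (conventions differ between setups) and uses invented, oil-like numbers; it is not the paper's calibration procedure:

      import numpy as np

      def newtonian_viscosity(r, z_solid, freq, rho_liquid):
          # Z_l from the reflection coefficient, then eta = Z_l^2 / (i w rho).
          omega = 2 * np.pi * freq
          z_liquid = z_solid * (1 - r) / (1 + r)
          eta = z_liquid ** 2 / (1j * omega * rho_liquid)
          return eta.real   # imaginary part ~ 0 for a Newtonian liquid

      # Hypothetical values: PMMA-like shear impedance, 2.25 MHz, oil-like
      # density; r given as a measured magnitude and phase.
      r = 0.975 * np.exp(1j * np.deg2rad(-1.46))
      print(newtonian_viscosity(r, z_solid=1.4e6, freq=2.25e6,
                                rho_liquid=900.0))   # ~0.05 Pa*s here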

  18. Fractal and complexity measures of heart rate variability.

    Science.gov (United States)

    Perkiömäki, Juha S; Mäkikallio, Timo H; Huikuri, Heikki V

    2005-01-01

    Heart rate variability has been analyzed conventionally with time and frequency domain methods, which measure the overall magnitude of RR interval fluctuations around its mean value or the magnitude of fluctuations in some predetermined frequencies. Analysis of heart rate dynamics by methods based on chaos theory and nonlinear system theory has gained recent interest. This interest is based on observations suggesting that the mechanisms involved in cardiovascular regulation likely interact with each other in a nonlinear way. Furthermore, recent observational studies suggest that some indexes describing nonlinear heart rate dynamics, such as fractal scaling exponents, may provide more powerful prognostic information than the traditional heart rate variability indexes. In particular, the short-term fractal scaling exponent measured by the detrended fluctuation analysis method has predicted fatal cardiovascular events in various populations. Approximate entropy, a nonlinear index of heart rate dynamics, that describes the complexity of RR interval behavior, has provided information on the vulnerability to atrial fibrillation. Many other nonlinear indexes, e.g., Lyapunov exponent and correlation dimensions, also give information on the characteristics of heart rate dynamics, but their clinical utility is not well established. Although concepts of chaos theory, fractal mathematics, and complexity measures of heart rate behavior in relation to cardiovascular physiology or various cardiovascular events are still far away from clinical medicine, they are a fruitful area for future research to expand our knowledge concerning the behavior of cardiovascular oscillations in normal healthy conditions as well as in disease states.
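
    The short-term fractal scaling exponent mentioned above comes from detrended fluctuation analysis over small window sizes; a minimal sketch (linear detrending, non-overlapping windows, scales of 4-16 beats as a typical alpha-1 range) follows:

      import numpy as np

      def dfa_alpha(x, scales=(4, 5, 6, 8, 10, 12, 16)):
          # Slope of log F(s) vs log s, where F(s) is the RMS fluctuation
          # of the integrated series around per-window linear trends.
          x = np.asarray(x, dtype=float)
          y = np.cumsum(x - np.mean(x))
          fluct = []
          for s in scales:
              t = np.arange(s)
              f2 = []
              for w in range(len(y) // s):
                  seg = y[w * s:(w + 1) * s]
                  trend = np.polyval(np.polyfit(t, seg, 1), t)
                  f2.append(np.mean((seg - trend) ** 2))
              fluct.append(np.sqrt(np.mean(f2)))
          alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
          return alpha

      # White noise gives alpha near 0.5; healthy RR series are typically ~1.
      print(dfa_alpha(np.random.default_rng(6).normal(size=4000)))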

  19. Hydrogen bonding in the hydroxysulfinyl radical-formic acid-water system: A theoretical study.

    Science.gov (United States)

    Tušar, Simona; Lesar, Antonija

    2016-06-30

    Quantum chemical methods have been employed to evaluate the possible configurations of the 1:1 and 1:2 HOSO-formic acid complexes and 1:1:1 HOSO-formic acid-water complexes. The first type of complex involves two H bonds, while the other two types comprise three H bonds in a ring. The complexes are relatively stable, with CBS-QB3 computed binding energies of 14.3 kcal mol(-1) , 23.4 kcal mol(-1) , and 21.1 kcal mol(-1) for the lowest-energy structures of the 1:1, 1:2, and 1:1:1 complexes, respectively. Complex formations induce a large spectral red-shift and an enhancement of the IR intensity for the H-bonded OH stretching modes relative to those in the parent monomers. TDDFT calculations of the low-lying electronic excited states demonstrate that the complexes are photochemically quite stable in the troposphere. Small spectral shifts in comparison to the free HOSO radical suggest that the radical and the complexes would not be easily distinguishable using standard UV/vis absorption spectroscopy. © 2016 Wiley Periodicals, Inc.

  20. Complex susceptibility measurements of a suspension of magnetic beads

    Energy Technology Data Exchange (ETDEWEB)

    Fannin, P.C. [Department of Electronic and Electrical Engineering, Trinity College, Dublin 2 (Ireland)]. E-mail: pfannin@tcd.ie; Mac Oireachtaigh, C. [Department of Electronic and Electrical Engineering, Trinity College, Dublin 2 (Ireland); Cohen-Tannoudji, L. [Laboratoire Colloides et Materiaux Divises, CNRS UMR7612, ESPCI, 10 Rue Vauquelin, F-75005 Paris (France); Bertrand, E. [Laboratoire Colloides et Materiaux Divises, CNRS UMR7612, ESPCI, 10 Rue Vauquelin, F-75005 Paris (France); Bibette, J. [Laboratoire Colloides et Materiaux Divises, CNRS UMR7612, ESPCI, 10 Rue Vauquelin, F-75005 Paris (France)

    2006-05-15

    Measurements of the frequency and field dependence of the complex magnetic susceptibility, χs(ω,H) = χs′(ω,H) − iχs″(ω,H), of a suspension of magnetic beads in water over the frequency range 200 Hz to 1 MHz are presented. The magnetic polarizing field, H, is applied to the sample, first in a forward direction and then in a reverse direction, and from a plot of the static susceptibility, χ0S, against polarizing field H, the existence of a hysteresis effect is demonstrated.

  1. Measuring the energy landscape of complex bonds using AFM

    Science.gov (United States)

    Mayyas, Essa; Hoffmann, Peter; Runyan, Lindsay

    2009-03-01

    We measured the rupture force of a complex bond between two interacting proteins with atomic force microscopy. The proteins of interest were active and latent matrix metalloproteinases (MMPs), types 2 and 9, and their tissue inhibitors TIMP1 and TIMP2. Measurements show that the rupture force depends on the pulling speed; it ranges from 30 pN to 150 pN at pulling speeds from 30 nm/s to 48,000 nm/s. Analyzing the data using an extended theory enabled us to understand the mechanism of the MMP-TIMP interaction; we determined all physical parameters that form the energy landscape of the interaction, in addition to the lifetime of the bond and its length. Moreover, we used the pulling experiment to study the interaction of TIMP2 with the receptor MT1-MMP on the surface of living cells.

  2. Reliability of surface electromyography measurements from the suprahyoid muscle complex.

    Science.gov (United States)

    Kothari, M; Stubbs, P W; Pedersen, A R; Jensen, J; Nielsen, J F

    2017-09-01

    Assessment of the swallowing musculature using motor evoked potentials (MEPs) can be used to evaluate neural pathways. However, recording of the swallowing musculature is often invasive, uncomfortable and unrealistic in normal clinical practice. The aim was to investigate the possibility of using surface electromyography (sEMG) over the suprahyoid muscle complex (SMC) to assess changes to neural pathways, by determining the reliability of the measurements in healthy participants over days. Seventeen healthy participants were recruited. Measurements were performed twice, with one week between sessions. Single-pulse (at 120% and 140% of the resting motor threshold (rMT)) and paired-pulse (2 ms and 15 ms paired-pulse) transcranial magnetic stimulation (TMS) were used to elicit MEPs in the SMC, which were recorded using sEMG. ≈50% of participants (range: 42-58%, depending on stimulus type/intensity) had significantly different MEP values between day 1 and day 2 for single-pulse and paired-pulse TMS. A large stimulus artefact resulted in MEP responses that could not be assessed in four participants. The assessment of the SMC using sEMG following TMS was poorly reliable for ≈50% of participants. Although using sEMG to assess swallowing musculature function is easier to perform clinically and more comfortable for patients than invasive measures, since the measurement of muscle activity using TMS is unreliable, the use of sEMG for this muscle group is not recommended and requires further research and development. © 2017 John Wiley & Sons Ltd.

  3. Complexity and measurement of complex degree of gas gush in heading faces of coal mine

    Energy Technology Data Exchange (ETDEWEB)

    He, Li-wen; Shi, Shi-liang; Song, Yi; Liu, Ying [Central Southern University, Changsha (China). School of Resources and Safety Engineering

    2008-05-15

    The rationality of ventilation in the working face of a coal mine, the validity of measures to prevent and stop gas disasters, and the security of high production are directly influenced by gas gushing in the heading faces of a coal mine. Therefore, the time series of gas gushing in the heading faces was calculated and analyzed by applying the correlation dimension. The result shows that the gas gushing system of the heading faces is a nonlinear chaotic system with a complex structure and chaotic attractors, and that 11 independent variables, or an 11-step dynamic equation, are needed to describe the system. 8 refs., 3 figs.
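
    Correlation-dimension analyses of this kind are usually carried out with the Grassberger-Procaccia algorithm: delay-embed the series, count point pairs closer than a radius r, and read the dimension off the slope of log C(r) against log r. The minimal sketch below illustrates the procedure; the embedding parameters and the test signal are illustrative and are not taken from the paper.

    import numpy as np

    def correlation_dimension(x, m=5, tau=1):
        """Grassberger-Procaccia correlation-dimension estimate."""
        n = len(x) - (m - 1) * tau
        emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
        # pairwise distances between delay-embedded points
        d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
        d = d[np.triu_indices(n, k=1)]
        radii = np.quantile(d, np.linspace(0.05, 0.5, 8))   # scaling region
        C = np.array([np.mean(d < r) for r in radii])       # correlation integral
        return np.polyfit(np.log(radii), np.log(C), 1)[0]   # slope ~ dimension

    # Illustrative use: a noisy sine should yield a low dimension
    t = np.linspace(0, 60, 800)
    x = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
    print(correlation_dimension(x))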

  4. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yin Peili

    2017-08-01

    Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experiment results indicate a consistency of tooth profile deviation and calibration results, thus verifying the accuracy of gear measuring system which includes the measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new ideal platform for testing of complex workpiece-measuring software without calibrated artifacts.

  5. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Science.gov (United States)

    Yin, Peili; Wang, Jianhua; Lu, Chunxia

    2017-08-01

    Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experiment results indicate a consistency of tooth profile deviation and calibration results, thus verifying the accuracy of gear measuring system which includes the measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new ideal platform for testing of complex workpiece-measuring software without calibrated artifacts.

  6. Testing robustness of relative complexity measure method constructing robust phylogenetic trees for Galanthus L. Using the relative complexity measure

    Science.gov (United States)

    2013-01-01

    Background Most phylogeny analysis methods based on molecular sequences use multiple alignment, where the quality of the alignment, which depends on the alignment parameters, determines the accuracy of the resulting trees. Different parameter combinations chosen for the multiple alignment may result in different phylogenies. A new non-alignment-based approach, the Relative Complexity Measure (RCM), has been introduced to tackle this problem and proven to work in fungi and mitochondrial DNA. Results In this work, we present an application of the RCM method to reconstruct robust phylogenetic trees using sequence data for the genus Galanthus obtained from different regions in Turkey. Phylogenies have been analyzed using nuclear and chloroplast DNA sequences. Results showed that the tree obtained from nuclear ribosomal RNA gene sequences was more robust, while the tree obtained from the chloroplast DNA showed a higher degree of variation. Conclusions Phylogenies generated by the Relative Complexity Measure were found to be robust, and the results of RCM were more reliable than the compared techniques. In particular, RCM seems to be a reasonable way to overcome MSA-based problems and a good alternative to MSA-based phylogenetic analysis. We believe our method will become a mainstream phylogeny construction method, especially for highly variable sequence families where the accuracy of the MSA heavily depends on the alignment parameters. PMID:23323678
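
    RCM belongs to the family of compression-style, alignment-free comparisons. A closely related and widely used stand-in is the normalized compression distance (NCD), which scores two sequences by how much better they compress together than apart; the sketch below uses NCD only to make the alignment-free idea concrete, not to reproduce the authors' exact RCM formula, and the toy sequences are invented.

    import zlib

    def csize(s: bytes) -> int:
        """Compressed size, a crude stand-in for Kolmogorov complexity."""
        return len(zlib.compress(s, 9))

    def ncd(a: bytes, b: bytes) -> float:
        """Normalized compression distance between two sequences."""
        return (csize(a + b) - min(csize(a), csize(b))) / max(csize(a), csize(b))

    # Toy DNA fragments: sp1 and sp2 are similar, sp3 is unrelated
    seqs = {"sp1": b"ACGTACGTGGTTACGT" * 8,
            "sp2": b"ACGTACGAGGTTACGT" * 8,
            "sp3": b"TTAAGGCCTTAAGGCC" * 8}
    for a in seqs:
        for b in seqs:
            print(a, b, round(ncd(seqs[a], seqs[b]), 3))

    A matrix of such pairwise distances can then be fed to any standard distance-based tree builder, e.g. neighbor joining.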

  7. Testing robustness of relative complexity measure method constructing robust phylogenetic trees for Galanthus L. Using the relative complexity measure

    Directory of Open Access Journals (Sweden)

    Bakış Yasin

    2013-01-01

    Background Most phylogeny analysis methods based on molecular sequences use multiple alignment, where the quality of the alignment, which depends on the alignment parameters, determines the accuracy of the resulting trees. Different parameter combinations chosen for the multiple alignment may result in different phylogenies. A new non-alignment-based approach, the Relative Complexity Measure (RCM), has been introduced to tackle this problem and proven to work in fungi and mitochondrial DNA. Results In this work, we present an application of the RCM method to reconstruct robust phylogenetic trees using sequence data for the genus Galanthus obtained from different regions in Turkey. Phylogenies have been analyzed using nuclear and chloroplast DNA sequences. Results showed that the tree obtained from nuclear ribosomal RNA gene sequences was more robust, while the tree obtained from the chloroplast DNA showed a higher degree of variation. Conclusions Phylogenies generated by the Relative Complexity Measure were found to be robust, and the results of RCM were more reliable than the compared techniques. In particular, RCM seems to be a reasonable way to overcome MSA-based problems and a good alternative to MSA-based phylogenetic analysis. We believe our method will become a mainstream phylogeny construction method, especially for highly variable sequence families where the accuracy of the MSA heavily depends on the alignment parameters.

  8. Multifractal spectrum and lacunarity as measures of complexity of osseointegration.

    Science.gov (United States)

    de Souza Santos, Daniel; Dos Santos, Leonardo Cavalcanti Bezerra; de Albuquerque Tavares Carvalho, Alessandra; Leão, Jair Carneiro; Delrieux, Claudio; Stosic, Tatijana; Stosic, Borko

    2016-07-01

    The goal of this study is to contribute to a better quantitative description of the early stages of osseointegration through the application of fractal, multifractal, and lacunarity analysis. Fractal, multifractal, and lacunarity analyses are performed on scanning electron microscopy (SEM) images of titanium implants that were first subjected to different treatment combinations of i) sand blasting, ii) acid etching, and iii) exposure to calcium phosphate, and were then submersed in a simulated body fluid (SBF) for 30 days. All three numerical techniques are applied to the implant SEM images before and after SBF immersion, in order to provide a comprehensive set of common quantitative descriptors. It is found that implants subjected to different physicochemical treatments before submersion in SBF exhibit a rather similar level of complexity, while the great variety of crystal forms after SBF submersion yields rather different quantitative measures (reflecting complexity) for the different treatments. In particular, it is found that acid treatment, in most combinations with the other considered treatments, leads to a higher fractal dimension (more uniform distribution of crystals), lower lacunarity (less variation in gap sizes), and narrowing of the multifractal spectrum (smaller fluctuations on different scales). The current quantitative description has shown the capacity to capture the main features of complex images of implant surfaces for several different treatments. Such quantitative description should provide a fundamental tool for future large-scale systematic studies, considering the large variety of possible implant treatments and their combinations. Quantitative description of the early stages of osseointegration on titanium implants with different treatments should help develop a better understanding of this phenomenon in general, and provide a basis for further systematic experimental studies. Clinical practice should benefit from such studies in the long run.
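
    Both kinds of descriptors used here can be computed directly from a binarized image: the box-counting (fractal) dimension from how the number of occupied boxes scales with box size, and the gliding-box lacunarity from the variance of box masses. The sketch below follows those standard definitions; the random toy image stands in for an SEM micrograph.

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def box_counting_dimension(img):
        """Box-counting dimension of a binary image (1 = foreground)."""
        sizes = np.array([2, 4, 8, 16, 32])
        counts = []
        for s in sizes:
            h, w = img.shape[0] // s * s, img.shape[1] // s * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
        return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]

    def lacunarity(img, s):
        """Gliding-box lacunarity at box size s (variance of box masses)."""
        masses = sliding_window_view(img, (s, s)).sum(axis=(2, 3)).ravel()
        return masses.var() / masses.mean() ** 2 + 1.0

    img = (np.random.default_rng(2).random((256, 256)) < 0.3).astype(int)
    print(box_counting_dimension(img), lacunarity(img, 8))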

  9. Measuring the complex behavior of the SO2 oxidation reaction

    Directory of Open Access Journals (Sweden)

    Muhammad Shahzad

    2015-09-01

    A two-step reversible chemical reaction involving five chemical species is investigated. The quasi-equilibrium manifold (QEM) and spectral quasi-equilibrium manifold (SQEM) are used for the initial approximation to simplify the mechanisms, which we want to utilize in order to investigate the behavior of the desired species. They give a meaningful picture, but for maximum clarity, the method of invariant grids (MIG) is employed. These methods simplify the complex chemical kinetics and deduce a low-dimensional manifold (LDM) from the high-dimensional mechanism. The coverage of the species near the equilibrium point is investigated, and movement along the equilibrium of the ODEs is then discussed. The steady-state behavior is observed, and a Lyapunov function is utilized to study the stability of the ODEs. Graphical results are used to describe the physical aspects of the measurements.

  10. Mitigating wildland fire hazard using complex network centrality measures

    Science.gov (United States)

    Russo, Lucia; Russo, Paola; Siettos, Constantinos I.

    2016-12-01

    We show how to distribute firebreaks in heterogeneous forest landscapes in the presence of strong wind using complex network centrality measures. The proposed framework is essentially a two-tier one: in the inner part, a state-of-the-art Cellular Automata model is used to compute the weights of the underlying lattice network, while in the outer part, the allocation of the firebreaks is scheduled in terms of a hierarchy of centralities which most influence the spread of fire. For illustration purposes, we applied the proposed framework to a real wildfire that broke out on Spetses Island, Greece, in 1990. We evaluate the scheme against the benchmark of random allocation of firebreaks under the weather conditions of the real incident, i.e., in the presence of relatively strong winds.

  11. Measuring robustness of community structure in complex networks

    CERN Document Server

    Li, Hui-Jia; Chen, Luonan

    2015-01-01

    The theory of community structure is a powerful tool for real networks, which can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the robustness of community structure is an urgent and important task. In this letter, we employ the critical threshold of the resolution parameter in the Hamiltonian function, $\gamma_C$, to measure the robustness of a network. Based on spectral theory, a rigorous proof shows that the index we propose is inversely proportional to the robustness of the community structure. Furthermore, by utilizing the co-evolution model, we provide a new efficient method for computing the value of $\gamma_C$. The research can be applied to broad clustering problems in network analysis and data mining due to its solid mathematical basis and experimental effects.

  12. Measurement of complex supercontinuum light pulses using time domain ptychography

    CERN Document Server

    Heidt, Alexander M; Brügmann, Michael; Rohwer, Erich G; Feurer, Thomas

    2016-01-01

    We demonstrate that time-domain ptychography, a recently introduced ultrafast pulse reconstruction modality, has properties ideally suited for the temporal characterization of complex light pulses with large time-bandwidth products, as it achieves temporal resolution on the scale of a single optical cycle using long probe pulses, low sampling rates, and an extremely fast and robust algorithm. In comparison to existing techniques, ptychography minimizes the data to be recorded and processed, and drastically reduces the computational time of the reconstruction. Experimentally, we measure the temporal waveform of an octave-spanning, 3.5 ps long supercontinuum pulse generated in photonic crystal fiber, resolving features as short as 5.7 fs with sub-fs resolution and 30 dB dynamic range using 100 fs probe pulses and similarly large delay steps.

  13. Permutation Complexity and Coupling Measures in Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Taichi Haruna

    2013-09-01

    Recently, the duality between values (words) and orderings (permutations) has been proposed by the authors as a basis to discuss the relationship between information theoretic measures for finite-alphabet stationary stochastic processes and their permutation analogues. It has been used to give a simple proof of the equality between the entropy rate and the permutation entropy rate for any finite-alphabet stationary stochastic process and to show some results on the excess entropy and the transfer entropy for finite-alphabet stationary ergodic Markov processes. In this paper, we extend our previous results to hidden Markov models and show the equalities between various information theoretic complexity and coupling measures and their permutation analogues. In particular, we show the following two results within the realm of hidden Markov models with ergodic internal processes: the two permutation analogues of the transfer entropy, the symbolic transfer entropy and the transfer entropy on rank vectors, are both equivalent to the transfer entropy if they are considered as rates, and the directed information theory can be captured by the permutation entropy approach.
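
    All of the permutation quantities discussed here rest on the Bandt-Pompe device of replacing values by the ordinal pattern of short windows. The sketch below computes normalized permutation entropy, the simplest member of the family; the window length m = 3 and the test series are illustrative choices.

    import numpy as np
    from math import factorial, log

    def permutation_entropy(x, m=3, tau=1):
        """Bandt-Pompe permutation entropy, normalized to [0, 1]."""
        counts = {}
        for i in range(len(x) - (m - 1) * tau):
            pattern = tuple(np.argsort(x[i:i + m * tau:tau]))  # ordinal pattern
            counts[pattern] = counts.get(pattern, 0) + 1
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log(p)).sum() / log(factorial(m)))

    rng = np.random.default_rng(3)
    print(permutation_entropy(np.sin(np.linspace(0, 30, 500))))  # low: ordered
    print(permutation_entropy(rng.standard_normal(500)))         # near 1: random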

  14. A high accuracy broadband measurement system for time resolved complex bioimpedance measurements.

    Science.gov (United States)

    Kaufmann, S; Malhotra, A; Ardelt, G; Ryschka, M

    2014-06-01

    Bioimpedance measurements are useful tools in biomedical engineering and life science. Bioimpedance is the electrical impedance of living tissue and can be used in the analysis of various physiological parameters. Bioimpedance is commonly measured by injecting a small, well-defined alternating current via surface electrodes into an object under test and measuring the resultant surface voltages. It is non-invasive, painless and has no known hazards. This work presents a field programmable gate array based, high accuracy, broadband bioimpedance measurement system for time resolved bioimpedance measurements. The system is able to measure the magnitude and phase of complex impedances under test in a frequency range of about 10-500 kHz with excitation currents from 10 µA to 5 mA. The overall measurement uncertainties stay below 1% for the impedance magnitude and below 0.5° for the phase in most measurement ranges. Furthermore, the described system has a sample rate of up to 3840 impedance spectra per second. The performance of the bioimpedance measurement system is demonstrated with a resistor-based system calibration and with measurements on biological samples.
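
    The core computation in such a system is extracting the complex impedance at the excitation frequency from the sampled current and voltage signals. A common approach is quadrature (lock-in style) demodulation: correlate both signals with a complex reference at the excitation frequency and take the ratio. The sketch below demonstrates the idea on synthetic signals; the sample rate, frequency and amplitudes are invented numbers, not the instrument's specifications.

    import numpy as np

    fs, f0 = 1_000_000, 50_000          # sample rate and excitation frequency, Hz
    n = 4000                            # exactly 200 excitation cycles
    t = np.arange(n) / fs

    i_exc = 1e-3 * np.sin(2 * np.pi * f0 * t)            # injected current, A
    v_meas = 0.12 * np.sin(2 * np.pi * f0 * t - 0.1)     # measured voltage, V

    ref = np.exp(-1j * 2 * np.pi * f0 * t)               # complex reference
    Z = (v_meas @ ref) / (i_exc @ ref)                   # complex impedance at f0
    print(f"|Z| = {abs(Z):.1f} ohm, phase = {np.degrees(np.angle(Z)):.2f} deg")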

  15. Esterification by the Plasma Acidic Water: Novel Application of Plasma Acid

    Science.gov (United States)

    Gu, Ling

    2014-03-01

    This work explores the possibility of using plasma acid as an acid catalyst in organic reactions. Plasma acidic water was prepared by dielectric barrier discharge and used to catalyze the esterification of n-heptanoic acid with ethanol. It is found that the plasma acidic water has a stable and better performance than sulfuric acid, meaning that it is an excellent acid catalyst. Plasma acidic water would be a promising, more environmentally friendly alternative to classic mineral acids.

  16. Disassembling "evapotranspiration" in-situ with a complex measurement tool

    Science.gov (United States)

    Chormanski, Jaroslaw; Kleniewska, Malgorzata; Berezowski, Tomasz; Sporak-Wasilewska, Sylwia; Okruszko, Tomasz; Szatylowicz, Jan; Batelaan, Okke

    2014-05-01

    In this work we present a complex tool for measuring water fluxes in wetland ecosystems. The tool was designed to quantify processes related to interception storage on plant leaves. The measurements are conducted by combining readings from various instruments, including an eddy covariance tower (EC), a field spectrometer, a SapFlow system, rain gauges above and under the canopy, and soil moisture probes. The idea of this set-up is to provide continuous measurement of the overall water flux from the ecosystem (EC tower), intercepted water volume and timing (field spectrometers), through-fall (rain gauges above and under the canopy), transpiration (SapFlow), and evaporation and soil moisture (soil moisture probes). Disassembling the water flux into the above components gives more insight into the interception-related processes and differentiates them from the total evapotranspiration. The measurements are conducted in the Upper Biebrza Basin (NE Poland). The study area is part of the valley, is covered by peat soils (mainly peat moss, with the exception of areas near the river) and receives no inundation waters from the Biebrza. The plant community of Agrostietum-Carici caninae has a dominant share here, creating an up to 0.6 km wide belt along the river. The area is also covered by Caricion lasiocarpae as well as meadows and pastures Molinio-Arrhenatheretea and Phragmitetum communis. Sedges form a hummock pattern characteristic of sedge communities in natural river valleys with wetland vegetation. The main result of the measurement set-up will be the analyzed characteristics and dynamics of interception storage for sedge ecosystems and a developed methodology for interception monitoring by use of spectral reflectance techniques. This will give new insight into processes of evapotranspiration in wetlands and its components: transpiration, evaporation from interception and evaporation from soil. Moreover, other important results of this project will be the estimation of energy and

  17. Range-limited centrality measures in complex networks

    Science.gov (United States)

    Ercsey-Ravasz, Mária; Lichtenwalter, Ryan N.; Chawla, Nitesh V.; Toroczkai, Zoltán

    2012-06-01

    Here we present a range-limited approach to centrality measures in both nonweighted and weighted directed complex networks. We introduce an efficient method that generates for every node and every edge its betweenness centrality based on shortest paths of lengths not longer than ℓ = 1,...,L in the case of nonweighted networks, and for weighted networks the corresponding quantities based on minimum weight paths with path weights not larger than w_ℓ = ℓΔ, ℓ = 1,2,...,L = R/Δ. These measures provide a systematic description of the positioning importance of a node (edge) with respect to its network neighborhoods one step out, two steps out, etc., up to and including the whole network. They are more informative than traditional centrality measures, as network transport typically happens on all length scales, from transport to nearest neighbors to the farthest reaches of the network. We show that range-limited centralities obey universal scaling laws for large nonweighted networks. As the computation of traditional centrality measures is costly, this scaling behavior can be exploited to efficiently estimate centralities of nodes and edges for all ranges, including the traditional ones. The scaling behavior can also be exploited to show that the ranking top list of nodes (edges) based on their range-limited centralities quickly freezes as a function of the range, and hence the diameter-range top list can be efficiently predicted. We also show how to estimate the typical largest node-to-node distance for a network of N nodes, exploiting the aforementioned scaling behavior. These observations were made on model networks and on a large social network inferred from cell-phone trace logs (~5.5×10^6 nodes and ~2.7×10^7 edges). Finally, we apply these concepts to efficiently detect the vulnerability backbone of a network (defined as the smallest percolating cluster of the highest betweenness nodes and edges) and illustrate the importance of weight-based centrality measures in
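
    On a small graph, the range-limited idea can be prototyped by restricting the betweenness sum to node pairs whose shortest-path distance does not exceed ℓ. The brute-force sketch below, which uses networkx and Zachary's karate club as a stand-in network, only serves to make the definition concrete; the paper's scalable algorithm avoids this all-pairs enumeration.

    import networkx as nx
    from itertools import combinations
    from collections import Counter

    def range_limited_betweenness(G, ell):
        """Betweenness counting only shortest paths of length <= ell."""
        score = Counter({v: 0.0 for v in G})
        dist = dict(nx.all_pairs_shortest_path_length(G))
        for s, t in combinations(G.nodes, 2):
            if t not in dist[s] or dist[s][t] > ell:
                continue
            paths = list(nx.all_shortest_paths(G, s, t))
            for p in paths:                    # split credit among shortest paths
                for v in p[1:-1]:
                    score[v] += 1.0 / len(paths)
        return score

    G = nx.karate_club_graph()
    for ell in (2, 3, G.number_of_nodes()):
        best = max(range_limited_betweenness(G, ell).items(), key=lambda kv: kv[1])
        print(f"L = {ell}: top node {best[0]} (score {best[1]:.1f})")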

  18. Methodology for Measuring the Complexity of Enterprise Information Systems

    Directory of Open Access Journals (Sweden)

    Ilja Holub

    2016-07-01

    The complexity of enterprise information systems is currently a challenge faced not only by IT professionals and project managers, but also by the users of such systems. Current methodologies and frameworks used to design and implement information systems do not specifically deal with the issue of their complexity and, apart from a few exceptions, do not at all attempt to simplify it. This article presents the author's own methodology for managing complexity, which can be used to complement any other methodology and which helps limit the growth of complexity. It introduces its own definition and metric of complexity, defined as the sum of entities of the individual UML models of the given system, selected according to the MMDIS methodology so as to consistently describe all relevant content dimensions of the system. The main objective is to propose a methodology to manage information system complexity and to verify it in practice on a real-life SAP implementation project.

  19. On the parity complexity measures of Boolean functions

    CERN Document Server

    Zhang, Zhiqiang; 10.1016/j.tcs.2010.03.027

    2010-01-01

    The parity decision tree model extends the decision tree model by allowing the computation of a parity function in one step. We prove that the deterministic parity decision tree complexity of any Boolean function is polynomially related to the non-deterministic complexity of the function or its complement. We also show that they are polynomially related to an analogue of the block sensitivity. We further study parity decision trees in their relations with an intermediate variant of the decision trees, as well as with communication complexity.

  20. Complex Squeezing and Force Measurement Beyond the Standard Quantum Limit

    CERN Document Server

    Buchmann, L F; Kohler, J; Spethmann, N; Stamper-Kurn, D M

    2016-01-01

    A continuous quantum field, such as a propagating beam of light, may be characterized by a squeezing spectrum that is inhomogeneous in frequency. We point out that homodyne detectors, which are commonly employed to detect quantum squeezing, are blind to squeezing spectra in which the correlation between amplitude and phase fluctuations is complex. We find theoretically that such complex squeezing is a component of ponderomotive squeezing of light through cavity optomechanics. We propose a detection scheme, called synodyne detection, which reveals complex squeezing and allows its use to improve force detection beyond the standard quantum limit.

  1. TDR measurements looking for complex dielectric permittivity and complex magnetic permeability in lossy materials

    Science.gov (United States)

    Persico, Raffaele

    2017-04-01

    TDR probes can be exploited to measure the electromagnetic characteristics of the soil, or of any penetrable material. They are commonly used as instruments for measuring the propagation velocity of electromagnetic waves in the probed medium [1], in turn useful for the proper focusing of GPR data [2-5]. However, more refined hardware and processing can also allow these probes to discriminate between the dielectric and magnetic characteristics of the material under test, which can be relevant for a better interpretation of the buried scenario or in order to infer physical-chemical characteristics of the material at hand. This requires a TDR probe that can work in the frequency domain, and in particular that allows retrieval of the reflection coefficient at the air-soil interface. It has already been shown [6] that in lossless cases this can be promising. In the present contribution, it will be shown at the EGU conference that it is possible to look for both the relative complex permittivity and the relative magnetic permeability of the probed material, on condition that the datum has an acceptable SNR and that some diversity of information is guaranteed, either by multifrequency data or by a TDR that can prolong its arms in the soil. References [1] F. Soldovieri, G. Prisco, R. Persico, Application of Microwave Tomography in Hydrogeophysics: some examples, Vadose Zone Journal, vol. 7, n. 1, pp. 160-170, Feb. 2008. [2] I. Catapano, L. Crocco, R. Persico, M. Pieraccini, F. Soldovieri, "Linear and Nonlinear Microwave Tomography Approaches for Subsurface Prospecting: Validation on Real Data", IEEE Antennas and Wireless Propagation Letters, vol. 5, pp. 49-53, 2006. [3] G. Leucci, N. Masini, R. Persico, F. Soldovieri, "GPR and sonic tomography for structural restoration: the case of the Cathedral of Tricarico", Journal of Geophysics and Engineering, vol. 8, pp. S76-S92, Aug. 2011. [4] S. Piscitelli, E. Rizzo, F. Cristallo

  2. On the parity complexity measures of Boolean functions

    OpenAIRE

    Zhang, Zhiqiang; Shi, Yaoyun

    2010-01-01

    The parity decision tree model extends the decision tree model by allowing the computation of a parity function in one step. We prove that the deterministic parity decision tree complexity of any Boolean function is polynomially related to the non-deterministic complexity of the function or its complement. We also show that they are polynomially related to an analogue of the block sensitivity. We further study parity decision trees in their relations with an intermediate variant of the decisi...

  3. Measuring the Level of Complexity of Scientific Inquiries: The LCSI Index

    Science.gov (United States)

    Eilam, Efrat

    2015-01-01

    The study developed and applied an index for measuring the level of complexity of full authentic scientific inquiry. Complexity is a fundamental attribute of real life scientific research. The level of complexity is an overall reflection of complex cognitive and metacognitive processes which are required for navigating the authentic inquiry…

  4. Effects of acidic water, aluminum, and manganese on testicular steroidogenesis in Astyanax altiparanae.

    Science.gov (United States)

    Kida, Bianca Mayumi Silva; Abdalla, Raisa Pereira; Moreira, Renata Guimarães

    2016-10-01

    Metals can influence the gonadal steroidogenesis and endocrine systems of fish, thereby affecting their reproduction. The effects of aluminum and manganese in acidic water on steroidogenesis in sexually mature male Astyanax altiparanae were investigated. Whether mature male fish recover from the effects of the metals in metal-free water was also assessed. The fish were exposed to 0.5 mg L(-1) of isolated or combined aluminum and manganese at acidic pH (5.5) to keep the metals bioavailable. The fish underwent 96 h of acute exposure, and samples were taken 24 and 96 h after the beginning of the experiment. The fish were then maintained in metal-free water for 96 h. Plasma levels of testosterone, 11-ketotestosterone, 17β-estradiol, and cortisol were measured. Acidic water increased the plasma concentrations of testosterone and 11-ketotestosterone. Aluminum increased testosterone levels after 96 h of exposure. Manganese increased 17β-estradiol levels after 24 h of exposure, and they remained high until the end of the experiment. With the exception of acidic pH, which increased cortisol levels after 24 h of exposure, no changes were observed in this corticosteroid during the acute experiment. Aluminum and manganese together also altered steroid levels, but without a consistent pattern of variation. The fish recovered from the effects of most exposure conditions after 96 h in metal-free water. A. altiparanae may use reproductive tactics that trigger changes in testicular steroidogenesis by accelerating spermatogenesis and spermiogenesis, which may interfere with their reproductive dynamics.

  5. Dynamic portfolio management based on complex quantile risk measures

    Directory of Open Access Journals (Sweden)

    Ekaterina V. Tulupova

    2011-05-01

    The article focuses on evaluating the effectiveness of combined measures of financial risk, which are convex combinations of the measures VaR and CVaR and their analogues for the right tail of the distribution function of portfolio returns.
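
    The ingredients of such combined measures are simple to state concretely: historical VaR is a quantile of the loss distribution, CVaR is the mean loss beyond that quantile, and the combined measure is a convex mixture of the two. A minimal sketch with synthetic returns follows; the confidence level and the mixing weight are arbitrary illustrative choices.

    import numpy as np

    def var_cvar(returns, alpha=0.95):
        """Historical VaR and CVaR (expected shortfall) at level alpha."""
        losses = -np.asarray(returns, dtype=float)
        var = np.quantile(losses, alpha)
        cvar = losses[losses >= var].mean()
        return var, cvar

    rng = np.random.default_rng(42)
    returns = rng.normal(0.0005, 0.01, 2500)   # synthetic daily returns
    var, cvar = var_cvar(returns)
    lam = 0.5                                  # weight of the convex combination
    print(f"VaR = {var:.4f}, CVaR = {cvar:.4f}, "
          f"combined = {lam * var + (1 - lam) * cvar:.4f}")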

  6. Information and complexity measures for hydrologic model evaluation

    Science.gov (United States)

    Hydrological models are commonly evaluated through residual-based performance measures such as the root-mean-square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  7. Research and Measurement of Software Complexity Based on Wuli, Shili, Renli (WSR) and Information Entropy

    Directory of Open Access Journals (Sweden)

    Rong Jiang

    2015-04-01

    Complexity is an important factor throughout the software life cycle. It is increasingly difficult to guarantee software quality, cost and development progress as complexity increases. Excessive complexity is one of the main reasons for the failure of software projects, so effective recognition, measurement and control of complexity become key to project management. This paper first analyzes the current state of research on software complexity systematically and points out problems in existing work. It then proposes a WSR framework of software complexity, which divides the complexity of software into the three levels of Wuli (WL), Shili (SL) and Renli (RL), so that staff in different roles may have a better understanding of complexity. Man is the main source of complexity, but current research focuses on WL complexity and research on RL complexity is extremely scarce, so this paper emphasizes the RL complexity of software projects. It not only analyzes the factors composing RL complexity, but also provides a definition of RL complexity. Moreover, it puts forward a quantitative measurement method, based on information entropy, for the complexity of the personnel organization hierarchy and the complexity of personnel communication information, and analyzes and validates the scientific soundness and rationality of this measurement method through a large number of cases.

  8. EVALUATING DISCONTINUITIES IN COMPLEX SYSTEMS: TOWARD QUANTITATIVE MEASURE OF RESILIENCE

    Science.gov (United States)

    The textural discontinuity hypothesis (TDH) is based on the observation that animal body mass distributions exhibit discontinuities that may reflect the texture of the landscape available for exploitation. This idea has been extended to other complex systems, hinting that the identification and quantification of discontinuities in the distributions of appropriate variables may provide clues to emergent system properties such as resilience.

  9. Complexity, Accuracy and Fluency: Definitions, Measurement and Research

    Science.gov (United States)

    Housen, Alex; Kuiken, Folkert; Vedder, Ineke

    2012-01-01

    The theme of this volume, complexity, accuracy and fluency (CAF) as dimensions of second language production, proficiency and development, represents a thriving area of research that addresses two general questions that are at the heart of many studies in second language acquisition and applied linguistics: What makes a second language (L2)…

  10. How to Measure Significance of Community Structure in Complex Networks

    CERN Document Server

    Hu, Yanqing; Fan, Ying; Di, Zengru

    2010-01-01

    Community structure analysis is a powerful tool for complex networks, which can simplify their functional analysis considerably. Recently, many approaches have been proposed for community structure detection, but few works have focused on the significance of community structure. Since real networks obtained from complex systems always contain error links, and most community detection algorithms involve random factors, evaluating the significance of community structure is important and urgent. In this paper, we use the stability of eigenvectors to characterize the significance of community structures. By employing the eigenvalues of the Laplacian matrix of a given network, we can evaluate the significance of its community structure and obtain the optimal number of communities, which is always hard for community detection algorithms. We apply our method to many real networks. We find that significant community structures exist in many social networks and in the C. elegans neural network, and that less significant community stru...
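
    The spectral ingredient of this approach is easy to reproduce: compute the Laplacian eigenvalues of the network and inspect the gaps between the smallest ones, since a pronounced gap after the k-th eigenvalue is the classical heuristic for k communities. The sketch below applies that heuristic to a standard test graph; it illustrates the spectral idea only and is not the authors' full stability index.

    import numpy as np
    import networkx as nx

    G = nx.karate_club_graph()
    L = nx.laplacian_matrix(G).toarray().astype(float)
    eig = np.sort(np.linalg.eigvalsh(L))       # Laplacian spectrum, ascending

    gaps = np.diff(eig[:10])                   # gaps among the smallest eigenvalues
    k = int(np.argmax(gaps)) + 1               # large gap after the k-th eigenvalue
    print("smallest eigenvalues:", np.round(eig[:10], 3))
    print("suggested number of communities:", k)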

  11. Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples.

    Science.gov (United States)

    Thorson, Megan K; Ung, Phuc; Leaver, Franklin M; Corbin, Teresa S; Tuck, Kellie L; Graham, Bim; Barrios, Amy M

    2015-10-08

    A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil.

  12. Information-Theoretic Measures Predict the Human Judgment of Rhythm Complexity.

    Science.gov (United States)

    de Fleurian, Remi; Blackwell, Tim; Ben-Tal, Oded; Müllensiefen, Daniel

    2017-04-01

    To formalize the human judgment of rhythm complexity, we used five measures from information theory and algorithmic complexity to measure the complexity of 48 artificially generated rhythmic sequences. We compared these measurements to human prediction accuracy and easiness judgments obtained from a listening experiment, in which 32 participants guessed the last beat of each sequence. We also investigated the modulating effects of musical expertise and general pattern identification ability. Entropy rate and Kolmogorov complexity were correlated with prediction accuracy, and highly correlated with easiness judgments. A logistic regression showed main effects of musical training, entropy rate, and Kolmogorov complexity, and an interaction between musical training and both entropy rate and Kolmogorov complexity. These results indicate that information-theoretic concepts capture some salient features of the human judgment of rhythm complexity, and they confirm the influence of musical expertise on complexity judgments. Copyright © 2016 Cognitive Science Society, Inc.

  13. Titan's Complex Neutral Composition as Measured by Cassini INMS

    Science.gov (United States)

    Waite, J. H.; Magee, B. A.; Gell, D. A.; Kasprzak, W. T.; Cravens, T.; Vuitton, V. S.; Yelle, R. V.

    2006-12-01

    The composition of Titan's complex neutral atmosphere above 1000 km, as observed by the Cassini Ion Neutral Mass Spectrometer on recent flybys of Titan, is presented. A rich mixture of hydrocarbons and nitriles is found, with mixing ratios that vary from 10^-4 to 10^-7: acetylene, ethylene, ethane, benzene, toluene, cyanogen, propyne, propene, propane, and various nitriles. The calibration and mass deconvolution processes are presented in order to establish clear boundaries on the systematic errors that can occur in the mass deconvolution process. The role of ion-neutral chemistry in forming these compounds will also be discussed.

  14. Simulation and Efficient Measurements of Intensities for Complex Imaging Sequences

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

    A simulation approach based on the emission sequence is used to predict both intensity and mechanical index (MI) according to FDA rules. A 3 MHz BK Medical 8820e convex array transducer is used with the SARUS scanner. An Onda HFL-0400 hydrophone and the Onda AIMS III system measure the pressure field for three imaging schemes: a fixed focus, single emission scheme, a duplex vector flow scheme, and finally a vector flow imaging scheme. The hydrophone is connected to a receive channel in SARUS, which automatically measures the emitted pressure for the complete imaging sequence. MI can be predicted with an accuracy of 16.4 to 38%. The accuracy for the intensity is from -17.6 to 9.7%, although the measured fields are highly non-linear (several MPa) and linear simulation is used. Linear simulation can, thus, be used to accurately predict intensity levels for any advanced imaging sequence and is an efficient tool in predicting the energy distribution.

  15. Evaluating Discontinuities in Complex Systems: Toward Quantitative Measures of Resilience

    Directory of Open Access Journals (Sweden)

    Craig Stow

    2007-06-01

    The textural discontinuity hypothesis (TDH) is based on the observation that animal body mass distributions exhibit discontinuities that may reflect the texture of the landscape available for exploitation. This idea has been extended to other complex systems, hinting that the identification and quantification of discontinuities in the distributions of appropriate variables may provide clues to emergent system properties such as resilience. We propose a discontinuity index, based on the vector norm of the full assemblage of observed discontinuities, as a means to quantify and compare this characteristic among systems. We also evaluate four methods to identify the number and location of the most prominent discontinuities. Although the results of the four methods are similar, they are not identical, and we conclude that this problem is best addressed with a consistent, operationally defined approach in an adaptive inference framework.

  16. Block-based test data adequacy measurement criteria and test complexity metrics

    Institute of Scientific and Technical Information of China (English)

    陈卫东; 杨建军; 叶澄清; 潘云鹤

    2002-01-01

    On the basis of software testing tools we developed for programming languages, we first present a new control flowgraph model based on blocks. In view of the notion of block, we extend the traditional program-based software test data adequacy measurement criteria, and empirically analyze the subsume relation between these measurement criteria. Then, we define four test complexity metrics based on blocks: J-complexity 0, J-complexity 1, J-complexity 1+, and J-complexity 2. Finally, we show the Kiviat diagram that makes software quality visible.

  17. Block-based test data adequacy measurement criteria and test complexity metrics

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    On the basis of software testing tools we developed for programming languages, we first present a new control flowgraph model based on blocks. In view of the notion of block, we extend the traditional program-based software test data adequacy measurement criteria, and empirically analyze the subsume relation between these measurement criteria. Then, we define four test complexity metrics based on blocks: J-complexity 0, J-complexity 1, J-complexity 1+, and J-complexity 2. Finally, we show the Kiviat diagram that makes software quality visible.

  18. Resolving and measuring diffusion in complex interfaces: Exploring new capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Todd M. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This exploratory LDRD targeted the use of new high resolution spectroscopic diffusion capabilities developed at Sandia to resolve transport processes at interfaces in heterogeneous polymer materials. In particular, the combination of high resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) spectroscopy with pulsed field gradient (PFG) diffusion experiments was used to directly explore interface diffusion within heterogeneous polymer composites, including measuring diffusion for individual chemical species in multi-component mixtures. Several different types of heterogeneous polymer systems were studied using these HRMAS NMR diffusion capabilities to probe the resolution limitations, determine the spatial length scales involved, and explore the general applicability to specific heterogeneous systems. The investigations pursued included a) the direct measurement of diffusion for poly(dimethyl siloxane) (PDMS) polymer on nano-porous materials, b) measurement of penetrant diffusion in additively manufactured (3D printed) PDMS composites, and c) measurement of diffusion in swollen polymer/penetrant mixtures within nano-confined aluminum oxide membranes. The NMR diffusion results obtained were encouraging and allowed for an improved understanding of diffusion and transport processes at the molecular level, while at the same time demonstrating that the spatial heterogeneity that can be resolved using HRMAS NMR PFG diffusion experiments must be larger than ~μm length scales, except for polymer transport within nanoporous carbons, where additional chemical resolution improves the resolvable heterogeneous length scale to hundreds of nm.

  19. Measuring complexity, nonextensivity and chaos in the DNA sequence of the Major Histocompatibility Complex

    Science.gov (United States)

    Pavlos, G. P.; Karakatsanis, L. P.; Iliopoulos, A. C.; Pavlos, E. G.; Xenakis, M. N.; Clark, Peter; Duke, Jamie; Monos, D. S.

    2015-11-01

    We analyze 4 Mb sequences of the Major Histocompatibility Complex (MHC), a DNA segment on chromosome 6 with high gene density that controls many immunological functions and is associated with many diseases. The analysis is based on modern theoretical and mathematical tools of complexity theory, such as nonlinear time series analysis and Tsallis non-extensive statistics. The results revealed that DNA complexity and self-organization can be related to fractional dynamical nonlinear processes with low-dimensional deterministic chaotic and non-extensive statistical character, which generate the DNA sequences under the extremization of the Tsallis q-entropy principle. While it remains an open question whether the DNA walk is a fractional Brownian motion (FBM), a static anomalous diffusion process or a non-Gaussian dynamical fractional anomalous diffusion process, the results of this study support the latter, providing also a possible explanation for the previously observed long-range power law correlations of nucleotides, as well as the long-range correlation properties of coding and non-coding sequences present in DNA sequences.

  20. Lifetimes and stabilities of familiar explosives molecular adduct complexes during ion mobility measurements

    Science.gov (United States)

    McKenzie, Alan; DeBord, John Daniel; Ridgeway, Mark; Park, Melvin; Eiceman, Gary; Fernandez-Lima, Francisco

    2015-01-01

    Trapped ion mobility spectrometry coupled to mass spectrometry (TIMS-MS) was utilized for the separation and identification of familiar explosives in complex mixtures. For the first time, molecular adduct complex lifetimes, relative stabilities, binding energies and candidate structures are reported for familiar explosives. Experimental and theoretical results showed that the adduct size and reactivity, the complex binding energy and the explosive structure tailor the stability of the molecular adduct complex. The flexibility of TIMS to adapt the mobility separation as a function of the molecular adduct complex stability (i.e., short or long IMS experiments / low or high IMS resolution) permits targeted measurements of explosives in complex mixtures with higher confidence levels. PMID:26153567

  1. The measuring complex for detection of radioactive waste in near-earth space

    Science.gov (United States)

    Ulin, S. E.; Vlasik, K. F.; Grachev, V. M.; Dmitrenko, V. V.; Novikov, A. S.; Uteshev, Z. M.; Shustov, A. E.; Chernishova, I. V.; Bakhtigaraev, N. S.; Rykhlova, L. V.; Kazantsev, S. G.

    2017-01-01

    A description of a measuring complex intended for the detection and identification of radioactive waste in near-earth space is presented. The complex consists of several xenon gamma-ray spectrometers developed on the basis of a thin-walled pulsed ionization chamber with a sensitive volume of four litres. Their main physical and technical characteristics are considered. An estimate of the probability of detecting various elements comprising radioactive waste by means of the measuring complex on board the spacecraft “Meteor” is given.

  2. Statistical measure of complexity of hard-sphere gas: applications to nuclear matter

    OpenAIRE

    Moustakidis, Ch. C.; Chatzisavvas, K. Ch.; Nikolaidis, N. S.; Panos, C. P.

    2010-01-01

    We apply the statistical measure of complexity, introduced by López-Ruiz, Mancini and Calbet, to a hard-sphere dilute Fermi gas whose particles interact via a repulsive hard-core potential. We employ the momentum distribution of this system to calculate the information entropy, the disequilibrium and the statistical complexity. We examine possible connections between the particle correlations and energy of the system and those information and complexity measures. The hard-sphere model ser...

  3. Urban sustainability : complex interactions and the measurement of risk

    Directory of Open Access Journals (Sweden)

    Lidia Diappi

    1999-05-01

    This paper focuses on the concept of a sustainable city and its theoretical implications for the urban system. Urban sustainability is based on positive interactions among three different urban sub-systems: social, economic and physical, where social well-being coexists with economic development and environmental quality. This utopian scenario, however, does not appear in reality: an affluent economy is often associated with poverty and criminality, and labour variety and urban efficiency coexist with pollution and congestion. The subject of this research is the analysis of local risk and opportunity conditions, based on the application of a special definition of risk, elaborated and made operative through the production of a set of maps representing the multidimensional facets of spatial organisation in urban sustainability. The interactions among the economic, social and environmental systems are complex and unpredictable, and present the opportunity for a new methodology of scientific investigation: the connectionist approach, implemented by Self-Reflexive Neural Networks (SRNN). These networks are a useful instrument for investigation and analogic querying of the database. Once the SRNN has learned the structure of the weights from the database, by querying the network with the maximization or minimization of specific groups of attributes, it is possible to read the related properties and to rank the areas. The survey scale assumed by the research is purposefully aimed at the micro-scale and concerns the Municipality of Milan, which is spatially divided into 144 zones.

  4. GraphCom: A multidimensional measure of graphic complexity applied to 131 written languages.

    Science.gov (United States)

    Chang, Li-Yun; Chen, Yen-Chi; Perfetti, Charles A

    2017-04-19

    We report a new multidimensional measure of visual complexity (GraphCom) that captures variability in the complexity of graphs within and across writing systems. We applied the measure to 131 written languages, allowing comparisons of complexity and providing a basis for empirical testing of GraphCom. The measure includes four dimensions whose value in capturing the different visual properties of graphs had been demonstrated in prior reading research: (1) perimetric complexity, sensitive to the ratio of a written form to its surrounding white space (Pelli, Burns, Farell, & Moore-Page, 2006); (2) number of disconnected components, sensitive to discontinuity (Gibson, 1969); (3) number of connected points, sensitive to continuity (Lanthier, Risko, Stolz, & Besner, 2009); and (4) number of simple features, sensitive to the strokes that compose graphs (Wu, Zhou, & Shu, 1999). In our analysis of the complexity of 21,550 graphs, we (a) determined the complexity variation across writing systems along each dimension, (b) examined the relationships among complexity patterns within and across writing systems, and (c) compared the dimensions in their abilities to differentiate the graphs from different writing systems, in order to predict human perceptual judgments (n = 180) of graphs with varying complexity. The results from the computational and experimental comparisons showed that GraphCom provides a measure of graphic complexity that exceeds previous measures in its empirical validation. The measure can be universally applied across writing systems, providing a research tool for studies of reading and writing.
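
    Of the four dimensions, perimetric complexity is the easiest to make concrete: it is the squared boundary length of the ink divided by 4π times the ink area, so a disk scores roughly 1 and more convoluted glyphs score higher. The raster sketch below follows that standard definition; the toy glyph is a filled square rather than one of the 21,550 graphs analyzed.

    import numpy as np

    def perimetric_complexity(glyph):
        """Perimeter**2 / (4*pi*area) for a binary raster glyph (True = ink)."""
        glyph = glyph.astype(bool)
        area = glyph.sum()
        padded = np.pad(glyph, 1)
        # 4-neighbour boundary length: ink pixels facing background
        perim = sum(
            (padded[1:-1, 1:-1] & ~np.roll(padded, shift, axis=axis)[1:-1, 1:-1]).sum()
            for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1))
        )
        return perim ** 2 / (4 * np.pi * area)

    square = np.zeros((12, 12), dtype=int)
    square[3:9, 3:9] = 1
    print(perimetric_complexity(square))   # ~1.27, the value for a filled square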

  5. Reconstruction of Complex Materials by Integral Geometric Measures

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The goal of much research in computational materials science is to quantify necessary morphological information and then to develop stochastic models which both accurately reflect the material morphology and allow one to estimate macroscopic physical properties. A novel method of characterizing the morphology of disordered systems is presented based on the evolution of a family of integral geometric measures during erosion and dilation operations. The method is used to determine the accuracy of model reconstructions of random systems. It is shown that the use of erosion/dilation operations on the original image leads to a more accurate discrimination of morphology than previous methods.

  6. Measuring the complex field scattered by single submicron particles

    Directory of Open Access Journals (Sweden)

    Marco A. C. Potenza

    2015-11-01

    We describe a method for simultaneous measurements of the real and imaginary parts of the field scattered by single nanoparticles illuminated by a laser beam, exploiting a self-reference interferometric scheme relying on the fundamentals of the Optical Theorem. Results obtained with calibrated spheres of different materials are compared to the expected values obtained through a simplified analytical model without any free parameters, and the method is applied to a highly polydisperse water suspension of Poly(D,L-lactide-co-glycolide) nanoparticles. Advantages with respect to existing methods and possible applications are discussed.

  7. Measuring the complex field scattered by single submicron particles

    Energy Technology Data Exchange (ETDEWEB)

    Potenza, Marco A. C., E-mail: marco.potenza@unimi.it; Sanvito, Tiziano [Department of Physics, University of Milan, via Celoria, 16 – I-20133 Milan (Italy); CIMAINA, University of Milan, via Celoria, 16 – I-20133 Milan (Italy); EOS s.r.l., viale Ortles 22/4, I-20139 Milan (Italy); Pullia, Alberto [Department of Physics, University of Milan, via Celoria, 16 – I-20133 Milan (Italy)

    2015-11-15

    We describe a method for simultaneous measurements of the real and imaginary parts of the field scattered by single nanoparticles illuminated by a laser beam, exploiting a self-reference interferometric scheme relying on the fundamentals of the Optical Theorem. Results obtained with calibrated spheres of different materials are compared to the expected values obtained through a simplified analytical model without any free parameters, and the method is applied to a highly polydisperse water suspension of Poly(D,L-lactide-co-glycolide) nanoparticles. Advantages with respect to existing methods and possible applications are discussed.

  8. Ibogaine: complex pharmacokinetics, concerns for safety, and preliminary efficacy measures.

    Science.gov (United States)

    Mash, D C; Kovera, C A; Pablo, J; Tyndale, R F; Ervin, F D; Williams, I C; Singleton, E G; Mayor, M

    2000-09-01

    Ibogaine is an indole alkaloid found in the roots of Tabernanthe Iboga (Apocynaceae family), a rain forest shrub that is native to western Africa. Ibogaine is used by indigenous peoples in low doses to combat fatigue, hunger and thirst, and in higher doses as a sacrament in religious rituals. Members of American and European addict self-help groups have claimed that ibogaine promotes long-term drug abstinence from addictive substances, including psychostimulants and opiates. Anecdotal reports attest that a single dose of ibogaine eliminates opiate withdrawal symptoms and reduces drug craving for extended periods of time. The purported efficacy of ibogaine for the treatment of drug dependence may be due in part to an active metabolite. The majority of ibogaine biotransformation proceeds via CYP2D6, including the O-demethylation of ibogaine to 12-hydroxyibogamine (noribogaine). Blood concentration-time effect profiles of ibogaine and noribogaine obtained for individual subjects after single oral dose administrations demonstrate complex pharmacokinetic profiles. Ibogaine has shown preliminary efficacy for opiate detoxification and for short-term stabilization of drug-dependent persons as they prepare to enter substance abuse treatment. We report here that ibogaine significantly decreased craving for cocaine and heroin during inpatient detoxification. Self-reports of depressive symptoms were also significantly lower after ibogaine treatment and at 30 days after program discharge. Because ibogaine is cleared rapidly from the blood, the beneficial aftereffects of the drug on craving and depressed mood may be related to the effects of noribogaine on the central nervous system.

  9. Complexity and Information: Measuring Emergence, Self-organization, and Homeostasis at Multiple Scales

    CERN Document Server

    Gershenson, Carlos

    2012-01-01

    Concepts used in the scientific study of complex systems have become so widespread that their use and abuse has led to ambiguity and confusion in their meaning. In this paper we use information theory to provide abstract and concise measures of complexity, emergence, self-organization, and homeostasis. The purpose is to clarify the meaning of these concepts with the aid of the proposed formal measures. In a simplified version of the measures (focussing on the information produced by a system), emergence becomes the opposite of self-organization, while complexity represents their balance. We use computational experiments on random Boolean networks and elementary cellular automata to illustrate our measures at multiple scales.
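    The relationships described in this abstract are compact enough to sketch directly. The following snippet is a minimal illustration, assuming the normalized-entropy formulation used in related work by the same author (emergence E as normalized Shannon information, self-organization S = 1 − E, and complexity C = 4·E·S as their balance); the paper's multi-scale treatment is not reproduced:

```python
import numpy as np

def emergence(p):
    """Emergence as normalized Shannon information, E = H / H_max."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                       # ignore zero-probability states
    h = -np.sum(nz * np.log2(nz))       # Shannon entropy in bits
    return h / np.log2(p.size)          # H_max = log2(alphabet size)

def self_organization(p):
    """Self-organization as the opposite of emergence, S = 1 - E."""
    return 1.0 - emergence(p)

def complexity(p):
    """Complexity as the balance of the two, C = 4*E*S (maximal at E = 0.5)."""
    e = emergence(p)
    return 4.0 * e * (1.0 - e)

# A uniform distribution is all emergence (E = 1, C = 0); a delta
# distribution is all self-organization (S = 1, C = 0).
print(complexity([0.25] * 4))             # 0.0
print(complexity([0.7, 0.1, 0.1, 0.1]))   # intermediate, > 0
```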

  10. Prediction of Software Requirements Stability Based on Complexity Point Measurement Using Multi-Criteria Fuzzy Approach

    Directory of Open Access Journals (Sweden)

    D. Francis Xavier Christopher

    2012-12-01

    Full Text Available Many software projects fail due to unstable requirements and a lack of efficient management of requirements changes. The Software Requirements Stability Index metric (RSI) helps to evaluate the overall stability of requirements and to keep track of project status: the higher the stability, the fewer changes tend to propagate. Existing systems use Function Point modeling for measuring requirements stability. However, the main drawback of the existing modeling is that the complexity of non-functional requirements is not measured, even though non-functional factors play a vital role in assessing requirements stability. Numerous measurement methods have been proposed for measuring software complexity. This paper proposes a multi-criteria fuzzy-based approach for finding the complexity weight based on requirement complexity attributes such as functional requirement complexity, non-functional requirement complexity, input-output complexity, and interface and file complexity. Based on the complexity weight, the paper computes the software complexity point and then predicts software requirements stability from changes in the software complexity point. The advantage of this model is that it estimates software complexity early, which in turn predicts software requirements stability throughout the software development life cycle.

  11. A comparison of LMC and SDL complexity measures on binomial distributions

    Science.gov (United States)

    Piqueira, José Roberto C.

    2016-02-01

    The concept of complexity has been widely discussed over the last forty years, with contributions from all areas of human knowledge, including Philosophy, Linguistics, History, Biology, Physics, Chemistry and many others, and with mathematicians trying to give it a rigorous formulation. In this spirit, thermodynamics meets information theory: using the entropy definition, López-Ruiz, Mancini and Calbet proposed a definition of complexity that is referred to as the LMC measure. Shiner, Davison and Landsberg, by slightly changing the LMC definition, proposed the SDL measure, and both LMC and SDL measure complexity satisfactorily for many problems. Here, the SDL and LMC measures are applied to the case of a binomial probability distribution, to clarify how the length of the data set affects complexity and how the success probability of the repeated trials determines how complex the whole set is.
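    Both measures are simple to compute. Below is a short sketch assuming the standard formulations from the literature (LMC: C = H·D with normalized entropy H and disequilibrium D = Σ(pᵢ − 1/N)²; SDL: Γ = Δᵃ(1 − Δ)ᵇ with disorder Δ = H/H_max), evaluated on binomial distributions as in the paper:

```python
import numpy as np
from scipy.stats import binom

def lmc_complexity(p):
    """LMC complexity: normalized Shannon entropy H times disequilibrium D."""
    p = np.asarray(p, dtype=float)
    n = p.size
    nz = p[p > 0]
    h = -np.sum(nz * np.log(nz)) / np.log(n)   # normalized entropy
    d = np.sum((p - 1.0 / n) ** 2)             # distance from equiprobability
    return h * d

def sdl_complexity(p, alpha=1.0, beta=1.0):
    """SDL complexity: Gamma = Delta^alpha * (1 - Delta)^beta, Delta = H/H_max."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    delta = -np.sum(nz * np.log(nz)) / np.log(p.size)
    return delta ** alpha * (1.0 - delta) ** beta

# Binomial distributions of increasing length n and success probability q
for n, q in [(10, 0.5), (50, 0.5), (50, 0.1)]:
    p = binom.pmf(np.arange(n + 1), n, q)
    print(n, q, lmc_complexity(p), sdl_complexity(p))
```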

  12. Statistical measure of complexity of hard-sphere gas: applications to nuclear matter

    CERN Document Server

    Moustakidis, Ch C; Nikolaidis, N S; Panos, C P

    2010-01-01

    We apply the statistical measure of complexity, introduced by López-Ruiz, Mancini and Calbet, to a dilute hard-sphere Fermi gas whose particles interact via a repulsive hard-core potential. We employ the momentum distribution of this system to calculate the information entropy, the disequilibrium and the statistical complexity. We examine possible connections of the particle correlations and the energy of the system with those information and complexity measures. The hard-sphere model serves as a test bed for concepts about complexity.

  13. Matrix Energy as a Measure of Topological Complexity of a Graph

    CERN Document Server

    Sinha, Kaushik

    2016-01-01

    The complexity of highly interconnected systems is rooted in the interwoven architecture defined by their connectivity structure. In this paper, we develop the matrix energy of the underlying connectivity structure as a measure of topological complexity and highlight interpretations about certain global features of underlying system connectivity patterns. The proposed complexity metric is shown to satisfy the Weyuker criteria, validating it as a formal complexity metric. We also introduce the notion of the P point in the graph density space. The P point acts as a boundary between multiple connectivity regimes for finite-size graphs.
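    Matrix energy itself is straightforward to compute: for a graph it is the sum of the singular values of the adjacency matrix, which for a symmetric adjacency matrix equals the sum of the absolute eigenvalues (the classical graph energy). A minimal sketch; the Weyuker analysis and the P-point machinery of the paper are not reproduced:

```python
import numpy as np

def matrix_energy(a):
    """Matrix energy: sum of the singular values of the adjacency matrix."""
    return np.linalg.svd(np.asarray(a, dtype=float), compute_uv=False).sum()

# Energy of a 4-cycle vs. a 4-clique: denser connectivity gives higher energy
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
k4 = np.ones((4, 4)) - np.eye(4)
print(matrix_energy(c4))  # 4.0 (eigenvalues 2, 0, 0, -2)
print(matrix_energy(k4))  # 6.0 (eigenvalues 3, -1, -1, -1)
```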

  14. Measurements of student understanding on complex scientific reasoning problems

    Science.gov (United States)

    Izumi, Alisa Sau-Lin

    While there has been much discussion of cognitive processes underlying effective scientific teaching, less is known about the response nature of assessments targeting processes of scientific reasoning specific to biology content. This study used multiple-choice (m-c) and short-answer essay student responses to evaluate progress in high-order reasoning skills. In a pilot investigation of student responses on a non-content-based test of scientific thinking, it was found that some students showed a pre-post gain on the m-c test version while showing no gain on a short-answer essay version of the same questions. This result led to a subsequent research project focused on differences between alternate versions of tests of scientific reasoning. Using m-c and written responses from biology tests targeted toward the skills of (1) reasoning with a model and (2) designing controlled experiments, test score frequencies, factor analysis, and regression models were analyzed to explore test format differences. Understanding the format differences in tests is important for the development of practical ways to identify student gains in scientific reasoning. The overall results suggested test format differences. Factor analysis revealed three interpretable factors---m-c format, genetics content, and model-based reasoning. Frequency distributions on the m-c and open explanation portions of the hybrid items revealed that many students answered the m-c portion of an item correctly but gave inadequate explanations. In other instances students answered the m-c portion incorrectly yet demonstrated sufficient explanation or answered the m-c correctly and also provided poor explanations. When trying to fit test score predictors for non-associated student measures---VSAT, MSAT, high school grade point average, or final course grade---the test scores accounted for close to zero percent of the variance. Overall, these results point to the importance of using multiple methods of testing and of

  15. Examining complexity across domains: relating subjective and objective measures of affective environmental scenes, paintings and music.

    Directory of Open Access Journals (Sweden)

    Manuela M Marin

    Full Text Available Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne's collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. Measures of compressed file size and event density also showed positive correlations with complexity and arousal in

  16. Examining complexity across domains: relating subjective and objective measures of affective environmental scenes, paintings and music.

    Science.gov (United States)

    Marin, Manuela M; Leder, Helmut

    2013-01-01

    Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne's collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. Measures of compressed file size and event density also showed positive correlations with complexity and arousal in music, which is

  17. Measurement of Characteristic Self-Similarity and Self-Diversity for Complex Mechanical Systems

    Institute of Scientific and Technical Information of China (English)

    ZHOU Meili; LAI Jiangfeng

    2006-01-01

    Based on similarity science and complex system theory, a new concept of characteristic self-diversity and the corresponding relations between self-similarity and self-diversity for complex mechanical systems are presented in this paper. Methods for measuring self-similarity and self-diversity between a main system and its sub-systems are studied. Numerical calculations show that the characteristic self-similarity and self-diversity measurement method is valid. A new theory and method of self-similarity and self-diversity measurement for complex mechanical systems is presented.

  18. Measurements of complex impedance in microwave high power systems with a new Bluetooth integrated circuit.

    Science.gov (United States)

    Roussy, Georges; Dichtel, Bernard; Chaabane, Haykel

    2003-01-01

    By using a new integrated circuit, which is marketed for Bluetooth applications, it is possible to simplify the method of measuring the complex impedance, complex reflection coefficient and complex transmission coefficient in an industrial microwave setup. The Analog Devices circuit AD8302, which measures gain and phase up to 2.7 GHz, operates with variable-level input signals and is less sensitive to both amplitude and frequency fluctuations of industrial magnetrons than are mixers and AM crystal detectors. Therefore, accurate gain and phase measurements can be performed with low-stability generators. A mechanical setup with an AD8302 is described; the calibration procedure and its performance are presented.

  19. The Complex Trauma Questionnaire (ComplexTQ): Development and preliminary psychometric properties of an instrument for measuring early relational trauma

    Directory of Open Access Journals (Sweden)

    Carola eMaggiora Vergano

    2015-09-01

    Full Text Available Research on the etiology of adult psychopathology and its relationship with childhood trauma has focused primarily on specific forms of maltreatment. This study developed an instrument for the assessment of childhood and adolescence trauma that aids in identifying the role of co-occurring childhood stressors and chronic adverse conditions. The Complex Trauma Questionnaire (ComplexTQ), in both clinician and self-report versions, is a measure for the assessment of multi-type maltreatment: physical, psychological, and sexual abuse; physical and emotional neglect; and other traumatic experiences, such as rejection, role reversal, witnessing domestic violence, separations, and losses. The four-point Likert scale allows one to indicate specifically with which caregiver the traumatic experience occurred. A total of 229 participants, a sample of 79 nonclinical participants and one of 150 high-risk and clinical participants, were assessed with the ComplexTQ clinician version applied to Adult Attachment Interview (AAI) transcripts. Initial analyses indicate acceptable inter-rater reliability. A good fit to a 6-factor model for the experience with the mother and to a 5-factor model for the experience with the father was obtained; the internal consistency of the derived factors was good. Convergent validity was established with the AAI scales. ComplexTQ factors discriminated normative from high-risk and clinical samples. The findings suggest a promising, reliable, and valid measurement of reported early relational trauma; furthermore, the instrument is easy to complete and useful for both research and clinical practice.

  20. The complexity of measuring power in generalized opinion leader decision models

    OpenAIRE

    Molinero Albareda, Xavier; Serna Iglesias, María José

    2016-01-01

    We analyze the computational complexity of the power measure in models of collective decision: the generalized opinion leader-follower model and the oblivious and non-oblivious influence models. We show that computing the power measure is #P-hard in all these models, and provide two subfamilies in which the power measure can be computed in polynomial time.

  1. Measurement of the total solar energy transmittance (g-value) for complex glazings

    DEFF Research Database (Denmark)

    Duer, Karsten

    1999-01-01

    Four different complex glazings have been investigated in the Danish experimental setup METSET. The purpose of the measurements is to increase the confidence in the calorimetric measurements and to perform measurements and corrections according to a method developed in the ALTSET project...

  2. MEASURING OF COMPLEX STRUCTURE TRANSFER FUNCTION AND CALCULATING OF INNER SOUND FIELD

    Institute of Scientific and Technical Information of China (English)

    Chen Yuan; Huang Qibai; Shi Hanmin

    2005-01-01

    In order to measure the transfer function of a complex structure and calculate the inner sound field, the transfer function of integration is introduced. By establishing a virtual system, the transfer function of integration can be measured and the inner sound field calculated. In the experiment, the transfer function of integration of an automobile body is measured, and the experimental method of establishing the virtual system proves valid.

  3. Measuring quantum effects in photosynthetic light-harvesting complexes with multipartite entanglement

    Science.gov (United States)

    Smyth, Cathal

    This thesis is a compilation of studies on delocalization measures, entanglement, and the role of quantum coherence in electronic energy transfer (EET) in light-harvesting complexes. The first two chapters after the introduction provide foundational knowledge of quantum information and light-harvesting, respectively. Chapter 2 introduces concepts from quantum information such as purity, bipartite entanglement and criteria for its measurement. The peripheral light-harvesting complex LH2, isolated from the anoxygenic purple bacterium Rhodopseudomonas acidophila, is employed as the model system of interest. This light-harvesting complex is described in chapter 3, along with the process of light-harvesting, the presence of quantum coherence, and the different models used to simulate EET. In combination, these two chapters lay the foundation for chapter 4, a critical assessment of the current measures of delocalization employed in EET studies, their relationship, and overall effectiveness. The conclusion is that entanglement based measures are most effective at measuring quantum effects, and that they can be related to more conventional delocalization measures such as the inverse participation ratio (IPR) by taking into account the entropy of the system under study. All the measures within this chapter are known as bipartite measures, and only measure the strength of correlation between two sites. The fifth chapter presents the core of this thesis. Following a brief introduction to the concept of multipartite entanglement, the development of multipartite delocalization measures that give high-resolution information on quantum coherence in light-harvesting complexes is detailed. In contrast to other measures, these analytical measures can detect many-body correlations in large systems undergoing decoherence. We determine that, much like the bipartite entanglement based measures of chapter 4, these measures are also a function of system entropy, and have a

  4. Examining complexity across domains: relating subjective and objective measures of affective environmental scenes, paintings and music

    National Research Council Canada - National Science Library

    Marin, Manuela M; Leder, Helmut

    2013-01-01

    Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains...

  5. Normalized entropy of rank distribution: a novel measure of heterogeneity of complex networks

    Institute of Scientific and Technical Information of China (English)

    Wu Jun; Tan Yue-Jin; Deng Hong-Zhong; Zhu Da-Zhi

    2007-01-01

    Many unique properties of complex networks result from heterogeneity. The measurement and analysis of heterogeneity are important for research on the properties and functions of complex networks. In this paper, the rank distribution is proposed as a new statistical feature of complex networks. Based on the rank distribution, a novel measure of heterogeneity called the normalized entropy of rank distribution (NERD) is proposed. The NERD accords with the normal meaning of heterogeneity within the context of complex networks better than conventional measures. The heterogeneity of scale-free networks is studied using the NERD. It is shown that scale-free networks become more heterogeneous as the scaling exponent decreases and that the NERD of scale-free networks is independent of the number of vertices, which indicates that the NERD is a suitable and effective measure of heterogeneity for networks of different sizes.
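    As an illustration only: the abstract does not spell out the exact construction, so the sketch below assumes the rank distribution is the degree sequence sorted in decreasing order and normalized to sum to one, with the entropy normalized by its maximum ln N; the paper's definition may differ in detail:

```python
import numpy as np

def nerd(degrees):
    """Normalized entropy of the (assumed) degree-rank distribution."""
    d = np.sort(np.asarray(degrees, dtype=float))[::-1]  # rank order
    p = d / d.sum()                                      # normalize to a pmf
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz)) / np.log(d.size)     # entropy / ln N

print(nerd([3, 3, 3, 3]))     # 1.0: regular network, perfectly homogeneous
print(nerd([9, 1, 1, 1, 1]))  # < 1: hub-dominated, more heterogeneous
```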

  6. Analysis of 31.4 GHz Atmospheric Noise Temperature Measurements at Madrid Deep Space Communications Complex

    Science.gov (United States)

    Shambayati, S.; Keihm, S.

    1998-01-01

    The atmospheric noise temperature at 31.4 GHz was measured at NASA's Deep Space Communications Complex at Madrid from September 1990 to December 1996, excluding February 1991 and May 1992, using a Water Vapor Radiometer.

  7. Formation and growth of molecular clusters containing sulfuric acid, water, ammonia, and dimethylamine.

    Science.gov (United States)

    DePalma, Joseph W; Doren, Douglas J; Johnston, Murray V

    2014-07-24

    The structures and thermochemistry of molecular clusters containing sulfuric acid, water, ammonia, and/or dimethylamine ((CH3)2NH or DMA) are explored using a combination of Monte Carlo configuration sampling, semiempirical calculations, and density functional theory (DFT) calculations. Clusters are of the general form [(BH(+))n(HSO4(-))n(H2O)y], where B = NH3 or DMA, 2 ≤ n ≤ 8, and 0 ≤ y ≤ 10. Cluster formulas are written based on the computed structures, which uniformly show proton transfer from each sulfuric acid molecule to a base molecule while the water molecules remain un-ionized. Cluster formation is energetically favorable, owing to strong electrostatic attraction among the ions. Water has a minor effect on the energetics of cluster formation, lowering the free energy of formation by ∼ 10% depending on the cluster size and number of water molecules. Cluster growth (addition of one base molecule and one sulfuric acid molecule to a pre-existing cluster) and base substitution (substituting DMA for ammonia) are also energetically favorable processes for both anhydrous and hydrated clusters. However, the effect of water is different for different bases. Hydrated ammonium bisulfate clusters have a more favorable free energy for growth (i.e., incrementing n with fixed y) than anhydrous clusters, while the reverse is observed for dimethylammonium bisulfate clusters, where the free energy for growth is more favorable for anhydrous clusters. The substitution of DMA for ammonia in bisulfate clusters is favorable but exhibits a complex water dependence. Base substitution in smaller bisulfate clusters is enhanced by the presence of water, while base substitution in larger bisulfate clusters is less favorable for hydrated clusters than that for anhydrous clusters. While DMA substitution can stabilize small clusters containing one or a few sulfuric acid molecules, the free energy advantage of forming amine clusters relative to ammonia clusters becomes less

  8. How does habitat complexity affect ant foraging success? A test using functional measures on three continents.

    Science.gov (United States)

    Gibb, H; Parr, C L

    2010-12-01

    Habitat complexity can mediate key processes that structure local assemblages through effects on factors such as competition, predation and foraging behaviour. While most studies address assemblage responses to habitat complexity within one locality, a more global approach allows conclusions with greater independence from the phylogenetic constraints of the target assemblages, thus allowing greater generality. We tested the effects of natural and manipulated habitat complexities on ant assemblages from South Africa, Australia and Sweden, in order to determine if there were globally consistent responses in how functional measures of foraging success are regulated by habitat complexity. Specifically, we considered how habitat complexity affected ant foraging rates including the speed of discovery and rate of monopolisation. We also tested if habitat complexity affected the body size index, a size-related morphological trait, of ants discovering resources and occupying and monopolising the resources after 180 min. Ants were significantly slower to discover baits in the more complex treatments, consistent with predictions that they would move more slowly through more complex environments. The monopolisation index was also lower in the more complex treatments, suggesting that resources were more difficult to defend. Our index of ant body size showed trends in the predicted direction for complexity treatments. In addition, ants discovering, occupying and monopolising resources were smaller in simple than in complex natural habitats. Responses of discovering ants to resources in natural habitats were clear in only one of three regions. Consistent with our predictions, habitat complexity thus affected functional measures of the foraging success of ants in terms of measures of discovery and monopolisation rates and body size traits of successful ants. However, patterns were not always equally clear in manipulative and mensurative components of the study.

  9. Calibration procedure for plasma polarimetry based on the complex amplitude ratio measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bieg, Bohdan, E-mail: b.bieg@am.szczecin.pl [Maritime University, Szczecin (Poland); Kravtsov, Yury A.; Cieplik, Marek [Maritime University, Szczecin (Poland)

    2013-10-15

    A new methodology for plasma polarimeter calibration is suggested, based on complex amplitude ratio (CAR) measurements. This methodology reduces calibration to the determination of three complex parameters of the transfer matrix, which characterizes polarization changes in the optical system. Once the transfer matrix is obtained, the full characteristics of the optical system follow: eigenstates, phase shift, and relative attenuation of the slow and fast waves. The polarization state of the sounding electromagnetic wave after the plasma is determined from the measured complex amplitude ratio by simple inversion of the transfer matrix. The calibration procedure under discussion is simpler, more transparent and more reliable than traditional procedures using the Stokes vector technique or angular parameters of the polarization ellipse.

  10. Variances as order parameter and complexity measure for random Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Luque, Bartolo [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Ballesteros, Fernando J [Observatori Astronomic, Universitat de Valencia, Ed. Instituts d' Investigacio, Pol. La Coma s/n, E-46980 Paterna, Valencia (Spain); Fernandez, Manuel [Departamento de Matematica Aplicada y EstadIstica, Escuela Superior de Ingenieros Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain)

    2005-02-04

    Several order parameters have been considered to predict and characterize the transition between ordered and disordered phases in random Boolean networks, such as the Hamming distance between replicas or the stable core, which have been successfully used. In this work, we propose a natural and clear new order parameter: the temporal variance. We compute its value analytically and compare it with the results of numerical experiments. Finally, we propose a complexity measure based on the compromise between temporal and spatial variances. This new order parameter and its related complexity measure can be easily applied to other complex systems.
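    The proposed order parameter is easy to reproduce in simulation. Below is a sketch assuming "temporal variance" means the variance of each node's Boolean time series after a transient, averaged over nodes; the paper's analytical treatment and exact definition are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rbn(n=100, k=2, steps=200, transient=100):
    """Simulate a random Boolean network: N nodes, each with K random
    inputs and a random Boolean update table; return post-transient states."""
    inputs = rng.integers(0, n, size=(n, k))        # random wiring
    tables = rng.integers(0, 2, size=(n, 2 ** k))   # random Boolean functions
    state = rng.integers(0, 2, size=n)
    weights = 2 ** np.arange(k)
    history = []
    for t in range(steps):
        idx = (state[inputs] * weights).sum(axis=1)  # encode the K inputs
        state = tables[np.arange(n), idx]
        if t >= transient:
            history.append(state.copy())
    return np.array(history)

# Temporal variance as order parameter: ~0 for frozen (ordered) networks,
# large for chaotic ones; K controls the order-chaos transition.
for k in (1, 2, 4):
    h = simulate_rbn(k=k)
    print(k, h.var(axis=0).mean())
```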

  11. Evaporation kinetics of acetic acid-water solutions

    Science.gov (United States)

    Duffey, K.; Wong, N.; Saykally, R.; Cohen, R. C.

    2012-12-01

    The transport of water molecules across vapor-liquid interfaces in the atmosphere is a crucial step in the formation and evolution of cloud droplets. Despite decades of study, the effects of solutes on the mechanism and rate of evaporation and condensation remain poorly characterized. The present work aims to determine the effect of atmospherically-relevant solutes on the evaporation rate of water. In our experiments, we create a train of micron-sized droplets and measure their temperature via Raman thermometry as they undergo evaporation without condensation. Analysis of the cooling rate yields the evaporation coefficient (γ). Previous work has shown that inorganic salts have little effect on γ, with surface-adsorbing anions causing a slight reduction in the coefficient from that measured for pure water. Organic acids are ubiquitous in aqueous aerosol and have been shown to disrupt the surface structure of water. Here we describe measurements of the evaporation rate of acetic acid solutions, showing that acetic acid reduces γ to a larger extent than inorganic ions, and that γ decreases with increasing acetic acid concentration.

  12. Lifetimes and stabilities of familiar explosive molecular adduct complexes during ion mobility measurements.

    Science.gov (United States)

    McKenzie-Coe, Alan; DeBord, John Daniel; Ridgeway, Mark; Park, Melvin; Eiceman, Gary; Fernandez-Lima, Francisco

    2015-08-21

    Trapped ion mobility spectrometry coupled to mass spectrometry (TIMS-MS) was utilized for the separation and identification of familiar explosives in complex mixtures. For the first time, molecular adduct complex lifetimes, relative stability, binding energies and candidate structures are reported for familiar explosives. Experimental and theoretical results showed that the adduct size and reactivity, complex binding energy and the explosive structure tailor the stability of the molecular adduct complex. The flexibility of TIMS to adapt the mobility separation as a function of the molecular adduct complex stability (i.e., short or long IMS experiments/low or high IMS resolution) permits targeted measurements of explosives in complex mixtures with high confidence levels.

  13. On Generalized Stam Inequalities and Fisher–Rényi Complexity Measures

    Directory of Open Access Journals (Sweden)

    Steeve Zozor

    2017-09-01

    Full Text Available Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …), as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named (p, β, λ)-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal bound from below. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality was already proposed by Bercher and Lutwak, a particular case of the general one, where the three parameters are linked, allowing one to determine the sharp lower bound and the associated probability density with minimal complexity. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We determine as well the distribution that saturates the inequality: the (p, β, λ)-Gaussian distribution, which involves an inverse incomplete beta function. Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main

  14. A study on development of the step complexity measure for emergency operating procedures using entropy concepts

    Energy Technology Data Exchange (ETDEWEB)

    Park, J. K.; Jung, W. D.; Kim, J. W.; Ha, J. J

    2001-04-01

    In complex systems, such as nuclear power plants (NPPs) or airplane control systems, human errors play a major role in many accidents. For example, it was reported that about 70% of aviation accidents are due to human errors, and that approximately 28% of accidents in process industries are caused by human errors. According to related studies, written manuals or operating procedures have been identified as one of the most important factors in the aviation and manufacturing industries. In the case of NPPs, the importance of procedures is more salient than in other industries: over 50% of human errors were due to procedures, and about 18% of accidents were caused by failure to follow procedures. Thus, the provision of emergency operating procedures (EOPs) that are designed so that the possibility of human error is reduced is very important. To accomplish this goal, a quantitative and objective measure that can evaluate EOPs is indispensable. The purpose of this study is the development of a method that can quantify the complexity of a step included in EOPs. In this regard, the step complexity measure (SC) is developed based on three sub-measures: the SIC (step information complexity), the SLC (step logic complexity) and the SSC (step size complexity). To verify the SC measure, not only quantitative validations (such as comparing SC scores with subjective evaluation results and with averaged step performance times) but also qualitative validations clarifying the physical meaning of the SC measure are performed.

  15. Using Complexity Measure to Characterize Information Transmission of Human Brain Cortex

    Institute of Scientific and Technical Information of China (English)

    徐京华; 吴祥宝

    1994-01-01

    The information transmission among various parts of the cortex is computed with the theory of mutual information from electroencephalogram (EEG) time series of normal human subjects. The intensities of these transmissions are characterized by "complexity" measures. These measures have proved to be sensitively related to the functional conditions of human beings.
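    For readers who want to reproduce the basic quantity, a histogram-based mutual-information estimate between two signals can be sketched as follows; the binning, preprocessing and channel pairs used in the paper are assumptions here, and the toy signals stand in for real EEG channels:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of the mutual information I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# Two toy "channels": a common driver plus independent noise
t = np.linspace(0, 10, 2000)
common = np.sin(2 * np.pi * 1.5 * t)
ch1 = common + 0.3 * np.random.randn(t.size)
ch2 = common + 0.3 * np.random.randn(t.size)
print(mutual_information(ch1, ch2))                      # clearly > 0
print(mutual_information(ch1, np.random.randn(t.size)))  # near 0
```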

  16. Complexity metric as a complement to measurement based IMRT/VMAT patient-specific QA

    Science.gov (United States)

    Götstedt, J.; Karlsson Hauer, A.; Bäck, A.

    2015-01-01

    IMRT/VMAT treatment plans contain treatment fields with MLC openings of various sizes and shapes. Clinical dose calculation algorithms show limitations in calculating the correct dose in small and irregular parts of an MLC opening, which leads to differences between the planned and delivered dose distributions. The patient-specific IMRT QA is often designed to compare planned and measured dose distributions and is therefore heavily dependent on the measurement equipment and the evaluation method. The purpose of this study is to develop a complexity metric based on the shape and size of MLC openings that correlates with the dose differences between planned and delivered 3D dose distributions. Different MLC openings are measured and evaluated and used to determine a penalty function to steer the complexity metric and make the complexity scores correlate with dose-difference pass rates. Results of this initial study show a correlation between complexity scores and dose-difference pass rates for static fields with varied complexity. Preliminary results also show that the complexity metric can distinguish clinical IMRT fields of higher complexity.

  17. Measurement and documentation of complex PTSD in treatment seeking traumatized refugees

    DEFF Research Database (Denmark)

    Palic, Sabina

    and personality dysfunction following extreme traumatization. Importantly, patterns of severe traumatic exposure in refugees may represent a group vulnerable to complex PTSD. However, there are currently only a few validated psychiatric measures for the assessment of traumatized refugees, and these are limited to measuring symptoms of PTSD, anxiety, and depression. This renders documentation, measurement, and treatment of possible complex traumatic adaptations in traumatized refugees very difficult. The thesis comprises two studies using different measures and different samples. The first study investigated complex traumatization as Disorders of Extreme Stress Not Otherwise Specified (DESNOS). The first article from this study demonstrated that DESNOS in a clinical sample of refugees primarily resembled the Schizotypal and Paranoid personality disorders (PD) when compared to Axis I and Axis II syndromes on self...

  18. [Investigation of the efficacy of electrolyzed acid water on the standard strains of some pathogenic microorganisms].

    Science.gov (United States)

    Ileri, Ciğdem; Sezen, Yavuz; Dimoglo, Anatoli

    2006-10-01

    Many studies have indicated that electrolyzed acid water (EAW) has strong microbicidal activity. In this study, EAW was obtained by exposing a mixture of NaCl (10 g/L) and tap water to a direct electric current (2 A) for 15 minutes, in an instrument designed by the study group. EAW was tested for its inactivation efficacy on the standard strains of Staphylococcus aureus, Candida albicans and Pseudomonas aeruginosa at different concentrations and for different periods (0, 10, 30 and 60 seconds). The EAW dilutions were prepared using sterile deionized water at 100% (undiluted), 20%, 10%, 5%, 2% and 1%, while deionized water alone was used as the control. The oxidation-reduction potential, pH, and free chloride amounts were measured separately for the different EAW concentrations. The UNE-EN 1276 standard was used to investigate the inhibitory efficacy of EAW on S. aureus ATCC 29213, C. albicans ATCC 10231 and P. aeruginosa ATCC 9027 with the membrane filtration method. As a result, all of the microorganisms were completely inactivated by the end of the 10th second at all EAW concentrations except the 1% dilution. However, after treatment with 1% EAW for 60 seconds, average populations of 4.09 log cfu/ml, 4.56 log cfu/ml, and 3.62 log cfu/ml survived for S. aureus, C. albicans and P. aeruginosa, respectively. Our data showed that a 2% concentration of EAW had a bactericidal effect and may be used for surface disinfection in practice.

  19. Biological iron(II) oxidation as pre-treatment to limestone neutralisation of acid water

    CSIR Research Space (South Africa)

    Maree, JP

    1998-01-01

    Full Text Available Iron (II) should be oxidised to iron (III) before the neutralisation of acid water with limestone, otherwise the oxidation will occur downstream of the neutralisation plant with the formation of acid (reactions 1 and 2). This study aimed...

  20. The Measurement of Financial System Complexity

    Institute of Scientific and Technical Information of China (English)

    邱奕奎

    2014-01-01

    The measurement of financial system complexity is an intrinsic requirement of formalizing financial system complexity, and it is one of the basic problems in theoretical research on the subject. According to the basic parameters of a system, financial system complexity can be divided into four aspects: structural complexity, environmental complexity, functional complexity and dynamic complexity. Structural complexity relates to the relationships among components. Environmental complexity relates to changes in the natural environment, fluctuations in the economic environment and changes in the regulatory environment. Functional complexity concerns the uncertainty of system function and the complexity of realizing it. Dynamic complexity concerns the complexity of financial activities and the uncertainty of financial system evolution. The relationships among system components can be divided into dependence and decomposition, and through the concept of information entropy the structural complexity can be measured. Through the classification and valuation of the metrics, measurement indices for the complexity of the other three aspects can be obtained. By dividing financial system complexity into these four levels and studying the complexity measurement of each level, the difficulty of measuring the overall complexity is resolved, and a model of the overall complexity of the financial system can be constructed.

  1. Solubilities of Isophthalic Acid in Acetic Acid + Water Solvent Mixtures

    Institute of Scientific and Technical Information of China (English)

    CHENG Youwei; HUO Lei; LI Xi

    2013-01-01

    The solubilities of isophthalic acid (1) in binary acetic acid (2) + water (3) solvent mixtures were determined in a pressurized vessel. The temperature range was from 373.2 to 473.2 K and the mole fraction of acetic acid in the solvent mixtures ranged from x2 = 0 to 1. A new method of measuring the solubility was developed, which solved the problem of sampling at high temperature. The experimental results indicated that, within the temperature range studied, the solubilities of isophthalic acid in all mixtures increased with increasing temperature. The experimental solubilities were correlated by the Buchowski equation, and the calculated results showed good agreement with the experimental solubilities. Furthermore, the mixed solvent systems were found to exhibit a maximum solubility effect, which may be attributed to intermolecular association between the solute and the solvent mixture. The maximum solubility effect was well modeled by the modified Wilson equation.
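    The Buchowski (λh) correlation mentioned here, ln[1 + λ(1 − x)/x] = λh(1/T − 1/T_m), can be fitted by least squares. A sketch with hypothetical (T, x) data points and an assumed melting temperature; only the functional form is taken from the literature, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import least_squares

TM = 620.0  # assumed melting temperature of isophthalic acid, K (illustrative)

def residuals(params, x, T):
    """Residuals of the Buchowski lambda-h equation."""
    lam, h = params
    return np.log(1 + lam * (1 - x) / x) - lam * h * (1.0 / T - 1.0 / TM)

# Hypothetical solubility data (mole fraction x vs. temperature T) for
# illustration only; they are not the measured values from the paper.
T = np.array([373.2, 393.2, 413.2, 433.2, 453.2, 473.2])
x = np.array([1.2e-3, 2.9e-3, 6.5e-3, 1.4e-2, 2.8e-2, 5.3e-2])

fit = least_squares(residuals, x0=[1.0, 1000.0], args=(x, T),
                    bounds=([1e-6, 1e-6], [np.inf, np.inf]))
lam, h = fit.x
print(f"lambda = {lam:.3f}, h = {h:.1f} K")
```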

  2. Raven’s Progressive Matrices, manipulations of complexity and measures of accuracy, speed and confidence

    OpenAIRE

    LAZAR STANKOV; KARL SCHWEIZER

    2007-01-01

    This paper examines the effects of complexity-enhancing manipulations of two cognitive tasks – Swaps and Triplet Numbers tests (Stankov, 2000) – on their relationship with Raven’s Progressive Matrices test representing aspects of fluid intelligence. The complexity manipulations involved four treatment levels, each requiring an increasing number of components and relationships among these components. The accuracy, speed of processing, and confidence measures were decomposed into experimental a...

  3. The effect of electrode contact resistance and capacitive coupling on Complex Resistivity measurements

    DEFF Research Database (Denmark)

    Ingeman-Nielsen, Thomas

    2006-01-01

    The effect of electrode contact resistance and capacitive coupling on complex resistivity (CR) measurements is studied in this paper. An equivalent circuit model for the receiver is developed to describe the effects. The model shows that CR measurements are severely affected even at relatively low contact resistances. The model suggests proportionality between the error in the phase measurements and the product of the wire-to-ground capacitance, the contact resistance, the dipole size and the frequency of the measurement. The model behavior is illustrated and confirmed by field data collected with the contact resistance artificially increased by resistors. The results emphasize the importance of keeping contact resistance low in CR measurements.

  4. Lanthanide complexes as luminogenic probes to measure sulfide levels in industrial samples

    Energy Technology Data Exchange (ETDEWEB)

    Thorson, Megan K. [Department of Medicinal Chemistry, University of Utah College of Pharmacy, Salt Lake City, UT 84108 (United States); Ung, Phuc [Monash Institute of Pharmaceutical Sciences, Monash University, Victoria 3052 (Australia); Leaver, Franklin M. [Water & Energy Systems Technology, Inc., Kaysville, UT 84037 (United States); Corbin, Teresa S. [Quality Services Laboratory, Tesoro Refining and Marketing, Salt Lake City, UT 84103 (United States); Tuck, Kellie L., E-mail: kellie.tuck@monash.edu [School of Chemistry, Monash University, Victoria 3800 (Australia); Graham, Bim, E-mail: bim.graham@monash.edu [Monash Institute of Pharmaceutical Sciences, Monash University, Victoria 3052 (Australia); Barrios, Amy M., E-mail: amy.barrios@utah.edu [Department of Medicinal Chemistry, University of Utah College of Pharmacy, Salt Lake City, UT 84108 (United States)

    2015-10-08

    A series of lanthanide-based, azide-appended complexes were investigated as hydrogen sulfide-sensitive probes. Europium complex 1 and Tb complex 3 both displayed a sulfide-dependent increase in luminescence, while Tb complex 2 displayed a decrease in luminescence upon exposure to NaHS. The utility of the complexes for monitoring sulfide levels in industrial oil and water samples was investigated. Complex 3 provided a sensitive measure of sulfide levels in petrochemical water samples (detection limit ∼ 250 nM), while complex 1 was capable of monitoring μM levels of sulfide in partially refined crude oil. - Highlights: • Lanthanide–azide based sulfide sensors were synthesized and characterized. • The probes have excitation and emission profiles compatible with sulfide-contaminated samples from the petrochemical industry. • A terbium-based probe was used to measure the sulfide concentration in oil refinery wastewater. • A europium-based probe had compatibility with partially refined crude oil samples.

  5. Complex span and n-back measures of working memory: a meta-analysis.

    Science.gov (United States)

    Redick, Thomas S; Lindsey, Dakota R B

    2013-12-01

    Working memory is a construct of primary relevance to many areas of psychology. Two types of tasks have been used to measure working memory, primarily in different research areas: Complex span tasks are commonly used in behavioral studies in the cognitive and individual-differences literature, whereas n-back tasks have been used more frequently in cognitive neuroscience studies investigating the neural underpinnings of working memory. Despite both categories of tasks being labeled as "working memory" measures, previous empirical studies have provided mixed evidence regarding the shared amount of overlapping processes between complex span and n-back tasks. The present meta-analysis showed that the complex span and n-back tasks are weakly correlated, although significant heterogeneity across studies was observed. A follow-up analysis of unpublished data indicated that the sample composition affects the relationship between the complex span and n-back tasks, following the law of diminishing returns. Finally, a separate meta-analysis indicated that the simple span and n-back tasks are correlated to the same extent as are the complex span and n-back tasks. The present findings indicate that the complex span and n-back tasks cannot be used interchangeably as working memory measures in research applications.

  6. Measuring streetscape complexity based on the statistics of local contrast and spatial frequency.

    Directory of Open Access Journals (Sweden)

    André Cavalcante

    Full Text Available Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.

  7. Measuring streetscape complexity based on the statistics of local contrast and spatial frequency.

    Science.gov (United States)

    Cavalcante, André; Mansouri, Ahmed; Kacha, Lemya; Barros, Allan Kardec; Takeuchi, Yoshinori; Matsumoto, Naoji; Ohnishi, Noboru

    2014-01-01

    Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.
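    The local-contrast half of such a segmentation can be sketched simply: divide the image into patches, take the RMS contrast of each, and summarize the statistics. The patch size and summary statistics below are illustrative assumptions, not the paper's exact pipeline, and the spatial-frequency analysis is omitted:

```python
import numpy as np

def local_contrast_stats(img, patch=32):
    """RMS contrast in non-overlapping patches of a grayscale image in [0, 1],
    summarized into statistics usable as complexity cues."""
    h, w = img.shape
    contrasts = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch]
            contrasts.append(p.std())          # RMS contrast of the patch
    c = np.array(contrasts)
    return {"mean": c.mean(), "std": c.std(), "max": c.max()}

# Toy example: a flat image vs. one with fine binary texture
flat = np.full((256, 256), 0.5)
textured = 0.5 + 0.25 * np.sign(np.random.randn(256, 256))
print(local_contrast_stats(flat))      # near-zero contrast everywhere
print(local_contrast_stats(textured))  # high mean local contrast
```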

  8. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks.

    Science.gov (United States)

    Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue

    2017-04-06

    Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, but without in-between values, which causes sudden changes of entropy values even if the time series undergoes only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis.
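    The key idea, replacing the hard 0/1 template match of sample entropy with a graded similarity, can be illustrated with a Gaussian similarity kernel of the kind used in fuzzy-entropy variants; the paper's specific similarity function and parameterization are not reproduced here:

```python
import numpy as np

def fuzzy_sample_entropy(x, m=2, r=0.2):
    """Sample entropy with a smooth (Gaussian) similarity function returning
    values in [0, 1] instead of a hard Heaviside match - a sketch of the
    'flexible' similarity idea, not the paper's exact definition."""
    x = np.asarray(x, dtype=float)
    r *= x.std()                                     # tolerance scaled to signal
    def phi(m):
        templ = np.array([x[i:i + m] for i in range(len(x) - m)])
        # Chebyshev distance between every pair of template vectors
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** 2) / (2 * r ** 2))       # graded similarity in [0, 1]
        n = len(templ)
        return (sim.sum() - n) / (n * (n - 1))       # exclude self-matches
    return -np.log(phi(m + 1) / phi(m))

print(fuzzy_sample_entropy(np.random.randn(300)))             # higher: irregular
print(fuzzy_sample_entropy(np.sin(np.linspace(0, 30, 300))))  # lower: regular
```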

  9. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    Full Text Available This paper discusses the similarity of patterns in complex objects. A complex object is composed both of the attribute information of patterns and of the relational information between patterns. Bearing in mind this specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed; Algorithms 1 and 2 show the calculation procedure. This method makes full use of both the attribute information and the relational information. Finally, a synthetic example shows that the proposed similarity measurement method is valid.

  10. Multi-complexity ensemble measures for gait time series analysis: application to diagnostics, monitoring and biometrics.

    Science.gov (United States)

    Gavrishchaka, Valeriy; Senyukova, Olga; Davis, Kristina

    2015-01-01

    Previously, we have proposed to use complementary complexity measures discovered by boosting-like ensemble learning for the enhancement of quantitative indicators dealing with necessarily short physiological time series. We have confirmed robustness of such multi-complexity measures for heart rate variability analysis with the emphasis on detection of emerging and intermittent cardiac abnormalities. Recently, we presented preliminary results suggesting that such ensemble-based approach could be also effective in discovering universal meta-indicators for early detection and convenient monitoring of neurological abnormalities using gait time series. Here, we argue and demonstrate that these multi-complexity ensemble measures for gait time series analysis could have significantly wider application scope ranging from diagnostics and early detection of physiological regime change to gait-based biometrics applications.

  11. Statistical complexity measures as telltale of relevant scales in emergent dynamics of spatial systems

    Science.gov (United States)

    Arbona, A.; Bona, C.; Miñano, B.; Plastino, A.

    2014-09-01

    The definition of complexity through Statistical Complexity Measures (SCM) has recently seen major improvements. Mostly, the effort is concentrated in measures on time series. We propose a SCM definition for spatial dynamical systems. Our definition is in line with the trend to combine entropy with measures of structure (such as disequilibrium). We study the behaviour of our definition against the vectorial noise model of Collective Motion. From a global perspective, we show how our SCM is minimal at both the microscale and macroscale, while it reaches a maximum at the ranges that define the mesoscale in this model. From a local perspective, the SCM is minimum both in highly ordered and disordered areas, while it reaches a maximum at the edges between such areas. These characteristics suggest this is a good candidate for detecting the mesoscale of arbitrary dynamical systems as well as regions where the complexity is maximal in such systems.

  12. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E P

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

  13. An attractor-based complexity measurement for Boolean recurrent neural networks.

    Directory of Open Access Journals (Sweden)

    Jérémie Cabessa

    Full Text Available We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

  14. Classification of periodic, chaotic and random sequences using approximate entropy and Lempel–Ziv complexity measures

    Indian Academy of Sciences (India)

    Karthi Balasubramanian; Silpa S Nair; Nithin Nagaraj

    2015-03-01

    ‘Complexity’ has several definitions in diverse fields. These measures are indicators of some aspects of the nature of a signal. Such measures are used to analyse and classify signals and as a signal diagnostics tool to distinguish between periodic, quasiperiodic, chaotic and random signals. Lempel–Ziv (LZ) complexity and approximate entropy (ApEn) are two such popular complexity measures, widely used for characterizing biological signals as well. In this paper, we compare the utility of ApEn, LZ complexity and Shannon’s entropy in characterizing data from a nonlinear chaotic map (the logistic map). We show that the LZ and ApEn complexity measures can characterize the data complexities correctly for data sequences as short as 20 in length, while Shannon’s entropy fails for lengths less than 50. In the case of noisy sequences with 10% uniform noise, Shannon’s entropy works only for lengths greater than 200, while LZ and ApEn are successful with sequences of lengths greater than 30 and 20, respectively.
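
    A minimal sketch of one of the measures compared here, the Lempel–Ziv (LZ76) phrase-counting complexity, applied to a median-binarized logistic-map series. Binarization about the median is one common convention and an assumption here; ApEn is omitted for brevity.

```python
import numpy as np

def lz76_complexity(s):
    """Number of distinct phrases in the LZ76 parsing of string s: each
    new phrase is the shortest extension not already present in the
    preceding text (Kaspar-Schuster style counting)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def binarize(x):
    """Binarize a real-valued series about its median."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    return ''.join('1' if v > m else '0' for v in x)

# Chaotic logistic map (r = 4) vs. a strictly periodic signal
x, xs = 0.1, []
for _ in range(200):
    x = 4.0 * x * (1.0 - x)
    xs.append(x)
print(lz76_complexity(binarize(xs)))            # chaotic: high phrase count
print(lz76_complexity(binarize([0, 1] * 100)))  # periodic: low phrase count
```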

  15. Complexity

    CERN Document Server

    Gershenson, Carlos

    2011-01-01

    The term complexity derives etymologically from the Latin plexus, which means interwoven. Intuitively, this implies that something complex is composed of elements that are difficult to separate. This difficulty arises from the relevant interactions that take place between components. This lack of separability is at odds with the classical scientific method - which has been used since the times of Galileo, Newton, Descartes, and Laplace - and has also influenced philosophy and engineering. In recent decades, the scientific study of complexity and complex systems has argued for a paradigm shift in science and philosophy, proposing novel methods that take relevant interactions into account.

  16. Direct measurement and modulation of single-molecule coordinative bonding forces in a transition metal complex

    DEFF Research Database (Denmark)

    Hao, Xian; Zhu, Nan; Gschneidtner, Tina

    2013-01-01

    Coordination chemistry has been a consistently active branch of chemistry since Werner's seminal theory of coordination compounds inaugurated in 1893, with the central focus on transition metal complexes. However, control and measurement of metal-ligand interactions at the single-molecule level...... remain a daunting challenge. Here we demonstrate an interdisciplinary and systematic approach that enables measurement and modulation of the coordinative bonding forces in a transition metal complex. Terpyridine is derived with a thiol linker, facilitating covalent attachment of this ligand on both gold...

  17. Three-dimensional quantification of structures in trabecular bone using measures of complexity.

    Science.gov (United States)

    Marwan, Norbert; Kurths, Jürgen; Thomsen, Jesper Skovhus; Felsenberg, Dieter; Saparin, Peter

    2009-02-01

    The study of pathological changes of bone is an important task in diagnostic procedures for patients with metabolic bone diseases such as osteoporosis, as well as in monitoring the health state of astronauts during long-term space flights. The recent availability of high-resolution three-dimensional (3D) imaging of bone calls for the development of data analysis techniques able to assess changes of the 3D microarchitecture of trabecular bone. We introduce an approach based on spatial geometrical properties and define structural measures of complexity for 3D image analysis. These measures evaluate different aspects of the organization and complexity of 3D structures, such as the complexity of its surface or shape variability. We apply these measures to 3D data acquired by high-resolution microcomputed tomography (microCT) from human proximal tibiae and lumbar vertebrae at different stages of osteoporotic bone loss. The outcome is compared with the results of conventional static histomorphometry and exhibits clear relationships between the analyzed geometrical features of trabecular bone and loss of bone density, but also indicates that the measures reveal additional information about the structural composition of bone that is not captured by static histomorphometry. Finally, we have studied the dependency of the developed measures of complexity on the spatial resolution of the microCT data sets.

  18. Direct measurement of interaction forces between a platinum dichloride complex and DNA molecules.

    Science.gov (United States)

    Muramatsu, Hiroshi; Shimada, Shogo; Okada, Tomoko

    2017-06-29

    The interaction forces between a platinum dichloride complex and DNA molecules have been studied using atomic force microscopy (AFM). The platinum dichloride complex, di-dimethylsulfoxide-dichloroplatinum (II) (Pt(DMSO)2Cl2), was immobilized on an AFM probe by coordinating the platinum to two amino groups to form a complex similar to Pt(en)Cl2, which is structurally similar to cisplatin. The retraction forces were measured between the platinum complex and DNA molecules immobilized on mica plates using force curve measurements. The histogram of the retraction force for λ-DNA showed several peaks; the unit retraction force was estimated to be 130 pN for a pulling rate of 60 nm/s. The retraction forces were also measured separately for four single-base DNA oligomers (adenine, guanine, thymine, and cytosine). Retraction forces were frequently observed in the force curves for the DNA oligomers of guanine and adenine. For the guanine DNA oligomer, the most frequent retraction force was slightly lower than but very similar to the retraction force for λ-DNA. A higher retraction force was obtained for the adenine DNA oligomer than for the guanine oligomer. This result is consistent with the retraction activation energy of adenine with the Pt complex being higher than that of guanine, because the kinetic rate constant for retraction scales as exp((FΔx - ΔE)/kBT), where ΔE is the activation energy, F is the applied force, and Δx is a displacement.

  19. Comparing entropy with tests for randomness as a measure of complexity in time series

    CERN Document Server

    Gan, Chee Chun

    2015-01-01

    Entropy measures have become increasingly popular as an evaluation metric for complexity in the analysis of time series data, especially in physiology and medicine. Entropy measures the rate of information gain, or degree of regularity, in a time series, e.g. a heartbeat. Ideally, entropy should be able to quantify the complexity of any underlying structure in the series, as well as determine whether the variation arises from a random process. Unfortunately, most current entropy measures are unable to perform the latter differentiation. Thus, a high entropy score indicates a random or chaotic series, whereas a low score indicates a high degree of regularity. This leads to the observation that current entropy measures are equivalent to evaluating how random a series is, or conversely the degree of regularity in a time series. This raises the possibility that existing tests for randomness, such as the runs test or permutation test, may have similar utility in diagnosing certain conditions. This paper compares various t...
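
    One of the classical randomness tests the abstract refers to, the Wald–Wolfowitz runs test, can be sketched as follows. Dichotomizing about the median and dropping ties are conventional choices assumed here.

```python
import math
import random

def runs_test(x):
    """Wald-Wolfowitz runs test on a sequence dichotomized about its
    median; returns the z statistic (|z| >> 2 suggests non-randomness)."""
    med = sorted(x)[len(x) // 2]
    signs = [v > med for v in x if v != med]   # drop ties with the median
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    n = n1 + n2
    mu = 2.0 * n1 * n2 / n + 1.0                        # expected runs
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0))
    return (runs - mu) / math.sqrt(var)

print(runs_test(list(range(100))))   # monotone trend: few runs, z << 0
random.seed(0)
print(runs_test([random.random() for _ in range(100)]))  # random: z near 0
```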

  20. Effects of lability of metal complex on free ion measurement using DMT.

    Science.gov (United States)

    Weng, Liping; Van Riemsdijk, Willem H; Temminghoff, Erwin J M

    2010-04-01

    Very low concentrations of free metal ion in natural samples can be measured using the Donnan membrane technique (DMT) based on ion transport kinetics. In this paper, the possible effects of slow dissociation of metal complexes on the interpretation of kinetic DMT are investigated both theoretically and experimentally. Expressions for the lability parameter, L, were derived for DMT. Analysis of new experimental studies using a synthetic solution containing NTA as the ligand and Cu(2+) ions shows that when the ionic strength is low, slow dissociation of the complexes can influence the DMT measurement. In natural waters, dissolved organic matter (DOM) is the most important source of ligands that complex metals. By comparing the fraction of labile species measured using other dynamic sensors (DGT, GIME) in several freshwaters, it is concluded that in most waters ion transport in DMT is controlled by diffusion in the membrane. Only in very soft waters does slow dissociation of metal complexes limit ion transport in DMT. In this case, neglecting this effect may lead to an underestimation of the free metal ion concentration measured.

  1. Counterions release from electrostatic complexes of polyelectrolytes and proteins of opposite charge : a direct measurement

    CERN Document Server

    Gummel, Jérémie; Boué, François

    2009-01-01

    Though often considered one of the main driving processes of the complexation of species of opposite charge, the release of counterions has never been directly measured experimentally on polyelectrolyte/protein complexes. We present here the first structural determination of such a release, by Small Angle Neutron Scattering, in complexes made of lysozyme, a positively charged protein, and PSS, a negatively charged polyelectrolyte. Both components have the same neutron scattering length density, so their scattering can be switched off simultaneously in an appropriate "matching" solvent; this enables determination of the spatial distribution of the single counterions within the complexes. The counterions (including those subjected to Manning condensation) are expelled from the cores, where the species are at electrostatic stoichiometry.

  2. Node-weighted interacting network measures improve the representation of real-world complex systems

    CERN Document Server

    Wiedermann, Marc; Heitzig, Jobst; Kurths, Jürgen

    2013-01-01

    Network theory provides a rich toolbox consisting of methods, measures, and models for studying the structure and dynamics of complex systems found in nature, society, or technology. Recently, it has been pointed out that many real-world complex systems are more adequately mapped by networks of interacting or interdependent networks, e.g., a power grid showing interdependency with a communication network. Additionally, in many real-world situations it is reasonable to include node weights in complex network statistics to reflect the varying size or importance of the subsystems that are represented by nodes in the network of interest. For example, nodes can represent vastly different surface areas in climate networks, volumes in brain networks or economic capacities in trade networks. In this letter, combining both ideas, we derive a novel class of statistical measures for analysing the structure of networks of interacting networks with heterogeneous node weights. Using a prototypical spatial network model, we show that th...
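
    The letter's exact definitions are not reproduced here, but the underlying idea of node-weighted statistics can be illustrated with a node-weighted analogue of degree, in which each node sums the weights of its neighbours plus its own weight (a "node-splitting invariant" style construction). The example matrix and weights are assumptions.

```python
import numpy as np

def node_weighted_degree(adj, w):
    """Node-weighted degree: each node sums the weights of its
    neighbours plus its own weight, so that splitting a node into two
    half-weight copies leaves the measure unchanged.
    adj: (n, n) symmetric 0/1 adjacency matrix; w: (n,) node weights."""
    adj = np.asarray(adj, dtype=float)
    w = np.asarray(w, dtype=float)
    return adj @ w + w

# Three nodes; node 2 stands for a much larger subsystem (weight 10)
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
w = np.array([1.0, 1.0, 10.0])
print(node_weighted_degree(adj, w))  # node 0 is boosted by its heavy neighbour
```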

  3. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.

  4. The Word Complexity Measure: Description and Application to Developmental Phonology and Disorders

    Science.gov (United States)

    Stoel-Gammon, Carol

    2010-01-01

    Miccio's work included a number of articles on the assessment of phonology in children with phonological disorders, typically using measures of correct articulation, using the PCC, or analyses of errors, using the framework of phonological processes. This paper introduces an approach to assessing phonology by examining the phonetic complexity of…

  5. Using measures of information content and complexity of time series as hydrologic metrics

    Science.gov (United States)

    Information theory has previously been used to develop metrics that allow the characterization of temporal patterns in soil moisture dynamics and the evaluation and comparison of soil water flow model performance. The objective of this study was to apply information and complexity measures to characte...

  6. Quantification of spatial structure of human proximal tibial bone biopsies using 3D measures of complexity

    DEFF Research Database (Denmark)

    Saparin, Peter I.; Thomsen, Jesper Skovhus; Prohaska, Steffen

    2005-01-01

    Changes in trabecular bone composition during development of osteoporosis are used as a model for bone loss in microgravity conditions during a space flight. Symbolic dynamics and measures of complexity are proposed and applied to assess quantitatively the structural composition of bone tissue fr...

  7. The Word Complexity Measure: Description and Application to Developmental Phonology and Disorders

    Science.gov (United States)

    Stoel-Gammon, Carol

    2010-01-01

    Miccio's work included a number of articles on the assessment of phonology in children with phonological disorders, typically using measures of correct articulation, using the PCC, or analyses of errors, using the framework of phonological processes. This paper introduces an approach to assessing phonology by examining the phonetic complexity of…

  8. Measuring the pollutant transport capacity of dissolved organic matter in complex matrixes

    DEFF Research Database (Denmark)

    Persson, L.; Alsberg, T.; Odham, G.

    2003-01-01

    were used and evaluated, head-space solid-phase micro-extraction (HS-SPME), enhanced solubility (ES) and fluorescence quenching (FQ). It was concluded that for samples with complex matrixes it was possible to measure the net effect of the DOM binding capacity and the salting out effect of the matrix...

  9. Effects of Lability of Metal Complex on Free Ion Measurement Using DMT

    NARCIS (Netherlands)

    Weng, L.P.; Riemsdijk, van W.H.; Temminghoff, E.J.M.

    2010-01-01

    Very low concentrations of free metal ion in natural samples can be measured using the Donnan membrane technique (DMT) based on ion transport kinetics. In this paper, the possible effects of slow dissociation of metal complexes on the interpretation of kinetic DMT are investigated both theoretically

  10. Peer-Mediated vs. Individual Writing: Measuring Fluency, Complexity, and Accuracy in Writing

    Science.gov (United States)

    Soleimani, Maryam; Modirkhamene, Sima; Sadeghi, Karim

    2017-01-01

    Drawing upon Vygotsky's Sociocultural Theory (SCT), this study aimed at investigating the effect of two writing modes, namely, peer-mediated/collaborative vs. individual writing on measures of fluency, accuracy, and complexity of female EFL learners' writing. Based on an in-house placement test and the First Certificate in English writing paper, a…

  11. Complex decay patterns in atomic core photoionization disentangled by ion-recoil measurements

    Energy Technology Data Exchange (ETDEWEB)

    Guillemin, Renaud; Bomme, Cedric; Marin, Thierry; Journel, Loic; Marchenko, Tatiana; Kushawaha, Rajesh K.; Piancastelli, Maria Novella; Simon, Marc [Universite Pierre et Marie Curie, Universite Paris 06, Laboratoire de Chimie Physique Matiere et Rayonement, 11 rue Pierre et Marie Curie, FR-75231 Paris Cedex 05 (France); Centre National de la Recherche Scientifique, Laboratoire de Chimie Physique Matiere et Rayonement (UMR7614), 11 rue Pierre et Marie Curie, FR-75231 Paris Cedex 05 (France); Trcera, Nicolas [Synchrotron SOLEIL, l' Orme des Merisiers, Saint-Aubin, BP 48, FR-91192 Gif-sur-Yvette Cedex (France)

    2011-12-15

    Following core 1s ionization and resonant excitation of argon atoms, we measure the recoil energy of the ions due to momentum conservation during the emission of Auger electrons. We show that such ion momentum spectroscopy can be used to disentangle to some degree complex decay patterns, involving both radiative and nonradiative decays.

  12. Complex

    African Journals Online (AJOL)

    CLEMENT O BEWAJI

    Schiff bases and their complex compounds have been studied for their ... establishing coordination of the N-(2-hydroxybenzyl)-L-α-valine Schiff base ... (1967); “Spectrophotometric Identification of Organic Compounds”, Wiley, New.

  13. Solid/liquid phase diagram of the ammonium sulfate/succinic acid/water system.

    Science.gov (United States)

    Pearson, Christian S; Beyer, Keith D

    2015-05-14

    We have studied the low-temperature phase diagram and water activities of the ammonium sulfate/succinic acid/water system using differential scanning calorimetry and infrared spectroscopy of thin films. Using the results from our experiments, we have mapped the solid/liquid ternary phase diagram, determined the water activities based on the freezing point depression, and determined the ice/succinic acid phase boundary as well as the ternary eutectic composition and temperature. We also compared our results to the predictions of the extended AIM aerosol thermodynamics model (E-AIM) and found good agreement for the ice melting points in the ice primary phase field of this system; however, differences were found with respect to succinic acid solubility temperatures. We also compared the results of this study with those of previous studies that we have published on ammonium sulfate/dicarboxylic acid/water systems.

  14. Complex permittivity measurements and mixing laws of ceramic materials and application to microwave processing

    Science.gov (United States)

    Gershon, David Louis

    Interestingly, the imaginary part of the complex permittivity of alumina/silicon carbide did depend on the heating method. The electrostatic simulations were found to be of limited value in predicting the permittivity when data on the volume fraction or permittivity of minor constituents, which contribute significantly to the overall effective permittivity, are lacking. Several dielectric measurement techniques were developed specifically for this research. A stainless steel open-ended coaxial probe accurately measured the complex permittivity of solid dielectric materials up to 1000 °C and over a broad frequency range of 0.3 to 6 GHz. The probe's insensitivity to low-loss materials restricted accurate dielectric measurements to materials with a loss tangent greater than 0.05. A nondestructive resonant cavity was developed to measure the dielectric properties of low-loss materials with variable dimensions.

  15. Application of optical shape measurement for the nondestructive evaluation of complex objects

    Science.gov (United States)

    Osten, Wolfgang

    2000-01-01

    Holographic interferometry makes it possible to measure high-precision displacement data in the range of the wavelength of the laser light used. However, for a precise determination of 3D displacement vectors of complex objects, the 3D shape of the surface is required. Modern optical shape measurement technologies enable a very effective approach to finding the Cartesian coordinates of complex surfaces. These data are used to calculate the spatially variable sensitivity vectors for the displacement measurement. The shape data are measured with white-light fringe projection using a multiwavelength technique to acquire absolute phase values. To make the shape data available for 3D displacement measurement they have to be transferred into a reference coordinate system of the interferometric setup, where the deformation of the object caused by operational load is measured precisely. For this purpose a registration procedure is applied. For engineering applications it is useful to make the data available for computer-aided engineering systems. The object surface has to be approximated analytically from the measured point cloud to generate a surface mesh. The displacement vectors can be assigned to the nodes of this surface mesh for visualization of the deformation of the object under test. They can also be compared with the results of finite-element calculations or be used as boundary conditions for further numerical investigations. The described procedure is demonstrated on an automotive component. Thus more accurate and effective measurement techniques make it possible to bring experimental and numerical displacement analysis closer together.

  16. Measuring economic complexity of countries and products: which metric to use?

    Science.gov (United States)

    Mariani, Manuel Sebastian; Vidmer, Alexandre; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-01

    Evaluating the economies of countries and their relations with products in the global market is a central problem in economics, with far-reaching implications to our theoretical understanding of the international trade as well as to practical applications, such as policy making and financial investment planning. The recent Economic Complexity approach aims to quantify the competitiveness of countries and the quality of the exported products based on the empirical observation that the most competitive countries have diversified exports, whereas developing countries only export few low quality products - typically those exported by many other countries. Two different metrics, Fitness-Complexity and the Method of Reflections, have been proposed to measure country and product score in the Economic Complexity framework. We use international trade data and a recent ranking evaluation measure to quantitatively compare the ability of the two metrics to rank countries and products according to their importance in the network. The results show that the Fitness-Complexity metric outperforms the Method of Reflections in both the ranking of products and the ranking of countries. We also investigate a generalization of the Fitness-Complexity metric and show that it can produce improved rankings provided that the input data are reliable.

  17. Measuring Complexity, Development Time and Understandability of a Program: A Cognitive Approach

    Directory of Open Access Journals (Sweden)

    Amit Kumar Jakhar

    2014-11-01

    Full Text Available One of the central problems in software engineering is inherent complexity. Since software is the result of human creative activity, cognitive informatics plays an important role in understanding its fundamental characteristics. This paper models one of the fundamental characteristics of software complexity by examining the cognitive weights of basic software control structures. Cognitive weights express the degree of difficulty, or the relative time and effort, required to comprehend a given piece of software, which satisfies the definition of complexity. Based on this approach, a new concept of New Weighted Method Complexity (NWMC) of software is developed. Twenty programs were distributed among 5 postgraduate students; the development time of each was noted, with the mean taken as the actual time needed to develop the programs, and Understandability (UA), the time needed to understand the code, was also measured for all programs. This paper considers Jingqiu Shao et al.'s Cognitive Functional Size (CFS) of software for the study. In order to validate the new complexity metric, we calculated the correlation between the proposed metric and CFS with respect to actual development time and analysed NWMC against CFS using Mean Relative Error (MRE) and Standard Deviation (Std.). Finally, the authors found that the accuracy of estimating the development time with the proposed measure is far better than with CFS.

  18. Sequential Washing with Electrolyzed Alkaline and Acidic Water Effectively Removes Pathogens from Metal Surfaces

    Science.gov (United States)

    Nakano, Yuichiro; Akamatsu, Norihiko; Mori, Tsuyoshi; Sano, Kazunori; Satoh, Katsuya; Nagayasu, Takeshi; Miyoshi, Yoshiaki; Sugio, Tomomi; Sakai, Hideyuki; Sakae, Eiji; Ichimiya, Kazuko; Hamada, Masahisa; Nakayama, Takehisa; Fujita, Yuhzo; Yanagihara, Katsunori; Nishida, Noriyuki

    2016-01-01

    Removal of pathogenic organisms from reprocessed surgical instruments is essential to prevent iatrogenic infections. Some bacteria can make persistent biofilms on medical devices. Contamination of non-disposable equipment with prions also represents a serious risk to surgical patients. Efficient disinfection of prions from endoscopes and other instruments such as high-resolution cameras remains problematic because these instruments do not tolerate aggressive chemical or heat treatments. Herein, we develop a new washing system that uses both the alkaline and acidic water produced by electrolysis. Electrolyzed acidic water, containing HCl and HOCl as active substances, has been reported to be an effective disinfectant. A 0.15% NaCl solution was electrolyzed and used immediately to wash bio-contaminated stainless steel model systems with alkaline water (pH 11.9) with sonication, and then with acidic water (pH 2.7) without sonication. Two bacterial species (Staphylococcus aureus and Pseudomonas aeruginosa) and a fungus (Candida albicans) were effectively removed or inactivated by the washing process. In addition, this process effectively removed or inactivated prions from the stainless steel surfaces. This washing system will be potentially useful for the disinfection of clinical devices such as neuroendoscopes because electrolyzed water is gentle to both patients and equipment and is environmentally sound. PMID:27223116

  19. Sequential Washing with Electrolyzed Alkaline and Acidic Water Effectively Removes Pathogens from Metal Surfaces.

    Science.gov (United States)

    Nakano, Yuichiro; Akamatsu, Norihiko; Mori, Tsuyoshi; Sano, Kazunori; Satoh, Katsuya; Nagayasu, Takeshi; Miyoshi, Yoshiaki; Sugio, Tomomi; Sakai, Hideyuki; Sakae, Eiji; Ichimiya, Kazuko; Hamada, Masahisa; Nakayama, Takehisa; Fujita, Yuhzo; Yanagihara, Katsunori; Nishida, Noriyuki

    2016-01-01

    Removal of pathogenic organisms from reprocessed surgical instruments is essential to prevent iatrogenic infections. Some bacteria can make persistent biofilms on medical devices. Contamination of non-disposable equipment with prions also represents a serious risk to surgical patients. Efficient disinfection of prions from endoscopes and other instruments such as high-resolution cameras remains problematic because these instruments do not tolerate aggressive chemical or heat treatments. Herein, we develop a new washing system that uses both the alkaline and acidic water produced by electrolysis. Electrolyzed acidic water, containing HCl and HOCl as active substances, has been reported to be an effective disinfectant. A 0.15% NaCl solution was electrolyzed and used immediately to wash bio-contaminated stainless steel model systems with alkaline water (pH 11.9) with sonication, and then with acidic water (pH 2.7) without sonication. Two bacterial species (Staphylococcus aureus and Pseudomonas aeruginosa) and a fungus (Candida albicans) were effectively removed or inactivated by the washing process. In addition, this process effectively removed or inactivated prions from the stainless steel surfaces. This washing system will be potentially useful for the disinfection of clinical devices such as neuroendoscopes because electrolyzed water is gentle to both patients and equipment and is environmentally sound.

  20. Sequential Washing with Electrolyzed Alkaline and Acidic Water Effectively Removes Pathogens from Metal Surfaces.

    Directory of Open Access Journals (Sweden)

    Yuichiro Nakano

    Full Text Available Removal of pathogenic organisms from reprocessed surgical instruments is essential to prevent iatrogenic infections. Some bacteria can make persistent biofilms on medical devices. Contamination of non-disposable equipment with prions also represents a serious risk to surgical patients. Efficient disinfection of prions from endoscopes and other instruments such as high-resolution cameras remains problematic because these instruments do not tolerate aggressive chemical or heat treatments. Herein, we develop a new washing system that uses both the alkaline and acidic water produced by electrolysis. Electrolyzed acidic water, containing HCl and HOCl as active substances, has been reported to be an effective disinfectant. A 0.15% NaCl solution was electrolyzed and used immediately to wash bio-contaminated stainless steel model systems with alkaline water (pH 11.9) with sonication, and then with acidic water (pH 2.7) without sonication. Two bacterial species (Staphylococcus aureus and Pseudomonas aeruginosa) and a fungus (Candida albicans) were effectively removed or inactivated by the washing process. In addition, this process effectively removed or inactivated prions from the stainless steel surfaces. This washing system will be potentially useful for the disinfection of clinical devices such as neuroendoscopes because electrolyzed water is gentle to both patients and equipment and is environmentally sound.

  1. Measuring self-complexity: a critical analysis of Linville's H statistic.

    Science.gov (United States)

    Luo, Wenshu; Watkins, David; Lam, Raymond Y H

    2008-01-01

    The paper argues that the most commonly used measure of self-complexity, Linville's H statistic, cannot measure this construct appropriately. It first examines the mathematical properties of H and its relationships with five related indices: the number of self-aspects, the overlap among self-aspects, the average inter-aspect correlation, the ratio of endorsement, and the HICLAS attribute class number. Then, a demonstration study using simulations is reported. Three conclusions are drawn. H and the HICLAS attribute class number are similar in the way they are calculated. Both indices are highly related to the number of self-aspects, while their relationship to overlap is not monotonic. Overlap is affected by the ratio of endorsement and the average inter-aspect correlation but cannot represent the notion of redundancy among traits which directly determines Linville's H statistic. These conclusions are employed to explain the inconsistent findings relating self-complexity and adaptation and an alternative measurement approach is proposed.
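
    For reference, Linville's H statistic discussed in this paper is commonly computed as H = log2(n) - (sum_i n_i log2 n_i)/n, where n is the number of traits and n_i the number of traits sharing the i-th distinct combination of self-aspect assignments. The sketch below implements this under that reading; the trait sort is hypothetical.

```python
import math
from collections import Counter

def linville_h(trait_aspects, n_traits):
    """H = log2(n) - (sum_i n_i * log2(n_i)) / n, where n is the total
    number of traits and n_i the number of traits sharing the i-th
    distinct combination of self-aspect assignments."""
    groups = Counter(frozenset(a) for a in trait_aspects.values())
    return math.log2(n_traits) - sum(
        k * math.log2(k) for k in groups.values()) / n_traits

# Hypothetical sort of 4 traits into 2 self-aspects
sort = {"kind":   {"friend"},
        "witty":  {"friend"},
        "driven": {"student"},
        "tense":  {"student", "friend"}}
print(linville_h(sort, 4))   # higher H = more 'independent' self-aspects
```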

  2. Analysis of complexity measures and information planes of selected molecules in position and momentum spaces.

    Science.gov (United States)

    Esquivel, Rodolfo O; Angulo, Juan Carlos; Antolín, Juan; Dehesa, Jesús S; López-Rosa, Sheila; Flores-Gallegos, Nelson

    2010-07-14

    The Fisher-Shannon and LMC shape complexities and the Shannon-disequilibrium, Fisher-Shannon and Fisher-disequilibrium information planes, which consist of two localization-delocalization factors, are computed in both position and momentum spaces for the one-particle densities of 90 selected molecules of various chemical types, at the CISD/6-311++G(3df,2p) level of theory. We found that while the two measures of complexity show general trends only, the localization-delocalization planes clearly exhibit chemically significant patterns. Several molecular properties (energy, ionization potential, total dipole moment, hardness, electrophilicity) are analyzed and used to interpret and understand the chemical nature of the composite information-theoretic measures mentioned above. Our results show that these measures detect not only randomness or localization but also pattern and organization.

  3. Upgrade of the Fast Beam Intensity Measurement System for the CERN PS Complex

    CERN Document Server

    Allica, JC; Andreazza, W; Belohrad, D; Favre, G; Favre, N; Jensen, L; Lenardon, F; Vollenberg, W

    2014-01-01

    The CERN Proton Synchrotron complex (CPS) has been operational for over 50 years. During this time the Fast Beam Current Transformers (FBCTs) have only been repaired when they ceased to function, or individually modified to cope with new requests. This strategy resulted in a large variation of designs, making their maintenance difficult and limiting the precision with which comparisons could be made between transformers for the measurement of beam intensity transmission. During the first long shutdown of the CERN LHC and its injectors (LS1) these systems have undergone a major consolidation, with detectors and acquisition electronics upgraded to provide a uniform measurement system throughout the PS complex. This paper discusses the solutions used and analyses the first beam measurement results.

  4. Recurrence Plot Based Measures of Complexity and its Application to Heart Rate Variability Data

    CERN Document Server

    Marwan, N; Meyerfeldt, U; Schirdewan, A; Kurths, J

    2002-01-01

    In complex systems the knowledge of transitions between regular, laminar or chaotic behavior is essential to understand the processes going on there. Linear approaches are often not sufficient to describe these processes and several nonlinear methods require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart rate variability data. For the logistic map these measures enable us to detect transitions between chaotic and periodic states, as well as to identify additional laminar states, i.e. chaos-chaos transitions. Traditional recurrence quantification analysis fails to detect these latter transitions. Applying our new measures to the heart rate variability data, we are able to detect and quantify laminar phases before a life-threatening cardiac arrhythmia and, thus, to enable a prediction of such an event. Our findings could be of importance for the therapy of mal...
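
    A minimal sketch of the vertical-structure idea: build a recurrence plot from a scalar series and compute laminarity, the fraction of recurrence points lying in vertical lines of at least a minimum length. The parameters (threshold eps, vmin) and the logistic-map test signals are illustrative assumptions.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def laminarity(R, vmin=2):
    """Fraction of recurrence points forming vertical lines of length
    >= vmin -- a vertical-structure measure sensitive to laminar phases."""
    n = R.shape[0]
    in_lines = 0
    for j in range(n):
        run = 0
        for v in np.append(R[:, j], 0):   # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= vmin:
                    in_lines += run
                run = 0
    return in_lines / max(R.sum(), 1)

def logistic(r, n=300, x=0.6):
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

# Compare LAM in two regimes of the logistic map
print(laminarity(recurrence_plot(logistic(3.679), 0.03)))  # near band merging
print(laminarity(recurrence_plot(logistic(4.0), 0.03)))    # developed chaos
```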

  5. Measuring the intangibles: a metrics for the economic complexity of countries and products.

    Science.gov (United States)

    Cristelli, Matthieu; Gabrielli, Andrea; Tacchella, Andrea; Caldarelli, Guido; Pietronero, Luciano

    2013-01-01

    We investigate a recent methodology we have proposed to extract valuable information on the competitiveness of countries and the complexity of products from trade data. Standard economic theories predict a high level of specialization of countries in specific industrial sectors. However, a direct analysis of the official databases of exported products by all countries shows that the actual situation is very different. Countries commonly considered developed are extremely diversified, exporting a large variety of products from very simple to very complex. At the same time, countries generally considered less developed export only the products also exported by the majority of countries. This situation calls for the introduction of a non-monetary and non-income-based measure of the complexity of a country's economy which uncovers the hidden potential for development and growth. The statistical approach we present here consists of coupled non-linear maps relating the competitiveness/fitness of countries to the complexity of their products. The fixed point of this transformation defines a metrics for the fitness of countries and the complexity of products. We argue that the key point to properly extract the economic information is the non-linearity of the map, which is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We present a detailed comparison of the results of this approach directly with those of the Method of Reflections by Hidalgo and Hausmann, showing the better performance of our method and its more solid and consistent economic and scientific foundation.
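
    A sketch of the coupled non-linear maps described above, in the commonly cited Fitness-Complexity form in which product complexity is bounded by the fitness of the least-fit exporters. The iteration count, the normalization by the mean, and the toy export matrix are assumptions.

```python
import numpy as np

def fitness_complexity(M, iters=100):
    """Iterate the coupled non-linear maps: country fitness F grows with
    the complexity of exported products, while product complexity Q is
    bounded by the fitness of the least-fit countries exporting it.
    M: binary country x product export matrix."""
    M = np.asarray(M, dtype=float)
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(iters):
        F_new = M @ Q                        # diversified exporters gain
        Q_new = 1.0 / (M.T @ (1.0 / F))      # weak exporters penalize products
        F, Q = F_new / F_new.mean(), Q_new / Q_new.mean()
    return F, Q

# Toy nested matrix: country 0 exports everything, country 2 one product
M = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])
F, Q = fitness_complexity(M)
print(F)   # country 0 is fittest (most diversified)
print(Q)   # product 0 is most complex (exported only by the fittest)
```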

  6. Intracellular distribution of fluorescent copper and zinc bis(thiosemicarbazonato) complexes measured with fluorescence lifetime spectroscopy.

    Science.gov (United States)

    Hickey, James L; James, Janine L; Henderson, Clare A; Price, Katherine A; Mot, Alexandra I; Buncic, Gojko; Crouch, Peter J; White, Jonathan M; White, Anthony R; Smith, Trevor A; Donnelly, Paul S

    2015-10-05

    The intracellular distribution of fluorescently labeled copper and zinc bis(thiosemicarbazonato) complexes was investigated in M17 neuroblastoma cells and primary cortical neurons with a view to providing insights into the neuroprotective activity of a copper bis(thiosemicarbazonato) complex known as Cu(II)(atsm). Time-resolved fluorescence measurements allowed the identification of the Cu(II) and Zn(II) complexes as well as the free ligand inside the cells by virtue of the distinct fluorescence lifetime of each species. Confocal fluorescent microscopy of cells treated with the fluorescent copper(II)bis(thiosemicarbazonato) complex revealed significant fluorescence associated with cytoplasmic puncta that were identified to be lysosomes in primary cortical neurons and both lipid droplets and lysosomes in M17 neuroblastoma cells. Fluorescence lifetime imaging microscopy confirmed that the fluorescence signal emanating from the lipid droplets could be attributed to the copper(II) complex but also that some degree of loss of the metal ion led to diffuse cytosolic fluorescence that could be attributed to the metal-free ligand. The accumulation of the copper(II) complex in lipid droplets could be relevant to the neuroprotective activity of Cu(II)(atsm) in models of amyotrophic lateral sclerosis and Parkinson's disease.

  7. Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance

    Directory of Open Access Journals (Sweden)

    Cheryl Diane Metcalf

    2014-05-01

    Full Text Available Complex hand dexterity is fundamental to our interactions with the physical, social and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, due to the versatility of movement combinations available to complete any given task. This has historically limited progress of the field because of difficulties in measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods give us insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning. They also allow investigation into the key factors that contribute to injury, recovery and functional compensation. The application of such analytical techniques within musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance. These highly skilled manual and cognitive tasks present the ultimate achievement in complex hand dexterity. This paper will review methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.

  8. Aeolian sediment fluxes measured over various plant/soil complexes in the Chihuahuan desert

    Science.gov (United States)

    Bergametti, G.; Gillette, D. A.

    2010-09-01

    Measurements of horizontal flux of sediment were performed over the period 1998-2005 at different vegetated areas within the Jornada Long Term Ecological Research site. Sediment trap samples were collected during successive nominal 3-month periods at 15 sites: three independent sites at each of the five dominant plant/soil complexes encountered in this part of the Chihuahuan desert (mesquite, creosote, tarbush, grama grass, and playa grass). Mesquite vegetated areas have significantly higher sediment fluxes than the four other plant/soil complexes. The other types of vegetation complexes yield sediment fluxes that cannot be statistically distinguished from each other. An analysis of the temporal variability of the sediment fluxes indicates that only the annual sediment fluxes from mesquite sites are correlated with the annual occurrence of high wind speeds. Examination of the vertical profile of the fluxes of sediment and the fast response Sensit measurements confirms that a local saltation mechanism is responsible for sediment fluxes measured at mesquite sites. However, the local saltation mechanism cannot explain sediment fluxes measured on nonmesquite sites. Sediment fluxes at nonmesquite sites are only rarely carried in from upwind sources. Additionally, our data for sediment flux showed that off-site (drifting in) flux of sediment cannot explain the differences of mesquite and nonmesquite sediment fluxes. We suggest dust devils to be the mechanism that causes sediment emissions at both nonmesquite and mesquite lands, but their effect is trivial compared to the fluxes caused by mesoscale meteorological winds at the mesquite sites.

  9. Measurement and documentation of complex PTSD in treatment seeking traumatized refugees

    DEFF Research Database (Denmark)

    Palic, Sabina

    The aim of the thesis is to study complex traumatization and its measurement in treatment seeking traumatized refugees. Historically there have been repeated attempts to create a diagnosis for complex posttraumatic stress disorder (complex PTSD) to capture the more diverse, trauma related symptoms...... traumatization as Disorders of Extreme Stress Not Otherwise Specified (DESNOS). The first article from this study demonstrated that DESNOS in a clinical sample of refugees, primarily resembled the Schizotypal, and Paranoid personality disorders (PD), when compared to Axis I and Axis II syndromes on self...... is considered a predominant risk factor for DESNOS and PD). However, there was also overlap between DESNOS and Axis I syndromes – specifically, depression, dissociation, somatization and PTSD. It was therefore concluded, that categorization of DESNOS in refugees under either Axis I or Axis II depends...

  10. Methods and instrumental techniques for the study of acidic water systems; Metodologias y tecnicas instrumentales para el estudio de sistemas de aguas acidas

    Energy Technology Data Exchange (ETDEWEB)

    Acero Salazar, P.; Asta Andres, M. P.; Torrento Aguerri, C.; Gimeno Serrano, M. J.; Auque Sanz, L. F.; Gomez Jimenez, J. B.

    2011-07-01

    From a geochemical point of view acidic waters are very complex systems in which many interaction processes take place between surface and ground waters, gases (particularly atmospheric oxygen), acid-generating minerals, solid phases responsible for the natural attenuation of elements in solution and also many types of biological activity. Owing to this high complexity, the quality and reliability of any geochemical study focusing on this type of system will depend largely upon the use of appropriate methods of sampling, preservation and analysis of waters, minerals, gases and biological samples. We describe here the main methods and techniques used in geochemical studies of acid waters associated with sulphide mineral environments, taking into account not only the various sample types but also the features of the main types of system (open pits, tailings ponds, acid streams etc.). We also explain the main applications and limitations of each method or technique and provide references to earlier technical and scientific studies in which further information can be obtained. (Author) 97 refs.

  11. Eddy-Covariance Flux Measurements in the Complex Terrain of an Alpine Valley in Switzerland

    Science.gov (United States)

    Hiller, Rebecca; Zeeman, Matthias J.; Eugster, Werner

    2008-06-01

    We measured the surface energy budget of an Alpine grassland in highly complex terrain to explore possibilities and limitations for application of the eddy-covariance technique, also for CO2 flux measurements, at such non-ideal locations. This paper focuses on the influence of complex terrain on the turbulent energy measurements of a characteristic high Alpine grassland on Crap Alv (Alp Weissenstein) in the Swiss Alps during the growing season 2006. Measurements were carried out on a topographic terrace with a slope of 25° inclination. Flux data quality is assessed via the closure of the energy budget and the quality flag method used within the CarboEurope project. During 93% of the time the wind direction was along the main valley axis (43% upvalley and 50% downvalley directions). During the transition times of the typical twice daily wind direction changes in a mountain valley the fraction of high and good quality flux data reached a minimum of ≈50%, whereas during the early afternoon ≈70% of all records yielded good to highest quality (CarboEurope flags 0 and 1). The overall energy budget closure was 74 ± 2%. An angular correction for the shortwave energy input to the slope improved the energy budget closure slightly to 82 ± 2% for afternoon conditions. In the daily total, the measured turbulent energy fluxes are only underestimated by around 8% of net radiation. In summary, our results suggest that it is possible to yield realistic energy flux measurements under such conditions. We thus argue that the Crap Alv site and similar topographically complex locations with short-statured vegetation should be well suited also for CO2 flux measurements.

  12. Measurement and Hemodialysis Effect of Complex Relative Permittivity for Blood of Kidney Patients Using Open-Ended Coaxial Measurement Probe

    Science.gov (United States)

    Takeda, Akira; Takata, Kazuyuki; Nagao, Hirotomo; Wang, Jianqing; Fujiwara, Osamu

    Before evaluating the quality of hemodialysis from the limited volume of human blood using a commercially available open-ended coaxial probe, we previously measured the complex relative permittivity of pure water from 200 MHz to 6 GHz with respect to its measured liquid volume, and revealed that 1.9 ml of water in a beaker with a diameter of 24 mm and a depth of 2 mm gives a variation within ±0.5% for the real part and ±7% for the imaginary part. Based on the above finding, we measured the dielectric properties of 2.5 ml of whole blood at 25°C for 10 normal healthy subjects and 9 hemodialysis patients. The measured results on healthy subjects show good agreement with the data reported by Gabriel for human blood at 37°C, while their Cole-Cole plots show straight-line dispersion characteristics that differ. The measured results on the patients give further different dispersion characteristics in comparison with the healthy subjects. To investigate these differences statistically, Student's t-test was conducted; it revealed that the permittivity at infinite frequency derived from the Cole-Cole plots differs significantly, at the 1% level, between the averaged values for normal healthy subjects and for patients before dialysis.

  13. Multi-attribute integrated measurement of node importance in complex networks

    Science.gov (United States)

    Wang, Shibo; Zhao, Jinlou

    2015-11-01

    The measure of node importance in complex networks is very important to research on network stability and robustness; it can also help ensure the security of the whole network. Most researchers have used a single indicator to measure node importance, so the obtained results reflect only certain aspects of the network, with a loss of information. Meanwhile, because network topologies differ, node importance should be described in a way that incorporates the character of the network topology. Most existing evaluation algorithms cannot completely reflect the circumstances of complex networks, so this paper takes into account degree centrality, relative closeness centrality, the clustering coefficient, and topology potential, and proposes an integrated method for measuring node importance. This method reflects nodes' internal and external attributes and eliminates the influence of network structure on node importance. Experiments on the karate network and the dolphin network show that the integrated topology-based measure has a smaller range of measurement results than a single indicator and is more universal. Experiments also show that attacking the North American power grid and the Internet network with this method achieves a faster convergence speed than other methods.
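
    A generic sketch of the integration idea (not the paper's exact weighting scheme): min-max normalize several node-level indicators and combine them with a weighted sum. Betweenness stands in here for topology potential, and the equal weights are an assumption.

```python
import numpy as np
import networkx as nx

def integrated_importance(G, weights=(0.25, 0.25, 0.25, 0.25)):
    """Min-max normalize several node-level indicators and combine them
    with a weighted sum into a single importance score."""
    indicators = [
        nx.degree_centrality(G),
        nx.closeness_centrality(G),
        nx.clustering(G),
        nx.betweenness_centrality(G),   # stand-in for topology potential
    ]
    nodes = list(G)
    score = {v: 0.0 for v in nodes}
    for w, ind in zip(weights, indicators):
        vals = np.array([ind[v] for v in nodes], dtype=float)
        rng = vals.max() - vals.min()
        norm = (vals - vals.min()) / rng if rng > 0 else np.zeros_like(vals)
        for v, s in zip(nodes, norm):
            score[v] += w * s
    return score

G = nx.karate_club_graph()   # the karate network analysed in the paper
scores = integrated_importance(G)
print(sorted(G, key=scores.get, reverse=True)[:5])   # five most important nodes
```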

  14. Measuring the evolution of ontology complexity: the gene ontology case study.

    Science.gov (United States)

    Dameron, Olivier; Bettembourg, Charles; Le Meur, Nolwenn

    2013-01-01

    Ontologies support automatic sharing, combination and analysis of life sciences data. They undergo regular curation and enrichment. We studied the impact of an ontology evolution on its structural complexity. As a case study we used the sixty monthly releases between January 2008 and December 2012 of the Gene Ontology and its three independent branches, i.e. biological processes (BP), cellular components (CC) and molecular functions (MF). For each case, we measured complexity by computing metrics related to the size, the nodes connectivity and the hierarchical structure. The number of classes and relations increased monotonously for each branch, with different growth rates. BP and CC had similar connectivity, superior to that of MF. Connectivity increased monotonously for BP, decreased for CC and remained stable for MF, with a marked increase for the three branches in November and December 2012. Hierarchy-related measures showed that CC and MF had similar proportions of leaves, average depths and average heights. BP had a lower proportion of leaves, and a higher average depth and average height. For BP and MF, the late 2012 increase of connectivity resulted in an increase of the average depth and average height and a decrease of the proportion of leaves, indicating that a major enrichment effort of the intermediate-level hierarchy occurred. The variation of the number of classes and relations in an ontology does not provide enough information about the evolution of its complexity. However, connectivity and hierarchy-related metrics revealed different patterns of values as well as of evolution for the three branches of the Gene Ontology. CC was similar to BP in terms of connectivity, and similar to MF in terms of hierarchy. Overall, BP complexity increased, CC was refined with the addition of leaves providing a finer level of annotations but decreasing slightly its complexity, and MF complexity remained stable.

  15. Crater size-frequency distribution measurements and age of the Compton-Belkovich Volcanic Complex

    Science.gov (United States)

    Shirley, K. A.; Zanetti, M.; Jolliff, B.; van der Bogert, C. H.; Hiesinger, H.

    2016-07-01

    The Compton-Belkovich Volcanic Complex (CBVC) is a 25 × 35 km feature on the lunar farside marked by elevated topography, high albedo, high thorium concentration, and high silica content. Morphologies indicate that the complex is volcanic in origin and compositions indicate that it represents rare silicic volcanism on the Moon. Constraining the timing of silicic volcanism at the complex is necessary to better understand the development of evolved magmas and when they were active on the lunar surface. We employ image analysis and crater size-frequency distribution (CSFD) measurements on several locations within the complex and at surrounding impact craters, Hayn (87 km diameter), and Compton (160 km diameter), to determine relative and absolute model ages of regional events. Using CSFD measurements, we establish a chronology dating regional resurfacing events and the earliest possible onset of CBVC volcanism at ∼3.8 Ga, the formation of Compton Crater at 3.6 Ga, likely resurfacing by volcanism at the CBVC at ∼3.5 Ga, and the formation of Hayn Crater at ∼1 Ga. For the CBVC, we find the most consistent results are obtained using craters larger than 300 m in diameter; the small crater population is affected by their approach to an equilibrium condition and by the physical properties of regolith at the CBVC.

  16. Dynamical influence: how to measure individual contributions to collective dynamics in complex networks

    CERN Document Server

    Klemm, Konstantin; Eguiluz, Victor M; Miguel, Maxi San

    2010-01-01

    Identifying key players in complex networks remains a challenge affecting a great variety of research fields, from the efficient dissemination of ideas to drug target discovery in biomedical problems. The difficulty lies at several levels: how to single out the role of individual elements in such intermingled systems, or which is the best way to quantify their importance. Centrality measures aim at capturing the influence of a node from its position in a network. The key issue usually overlooked is that the contribution of a node to the collective behaviour is not uniquely determined by the structure of the system but is a result of both dynamics and network structure. Here we define dynamical influence as an explicit measure of how strongly a node's dynamical state affects collective behavior. Influence is derived analytically for dissipative processes in complex networks, directed or undirected. We show that it quantifies precisely how efficiently real systems may be driven by manipulating the state of single nodes. It ...
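
    For linear dissipative dynamics on a network, a node's long-run contribution to the collective mode is often captured by the leading left eigenvector of the coupling matrix. The power-iteration sketch below illustrates that idea under this assumption; it is not the paper's exact derivation.

```python
import numpy as np

def leading_left_eigenvector(A, iters=1000, tol=1e-12):
    """Approximate the leading left eigenvector of A by power iteration
    on A.T; its entries serve as a dynamical-influence style score for
    linear dynamics running on the network."""
    A = np.asarray(A, dtype=float)
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        v_new = A.T @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return v

# Undirected star graph: the hub dominates the leading eigenvector
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(leading_left_eigenvector(A))   # largest entry at the hub (node 0)
```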

  17. In vivo and in situ measurement and modelling of intra-body effective complex permittivity

    DEFF Research Database (Denmark)

    Nadimi, Esmaeil S; Blanes-Vidal, Victoria; Harslund, Jakob L F

    2015-01-01

    Radio frequency tracking of medical micro-robots in minimally invasive medicine is usually investigated upon the assumption that the human body is a homogeneous propagation medium. In this Letter, the authors conducted various trial programs to measure and model the effective complex permittivity ε...... contractions and simulated peristaltic movements of the GI tract organs inside the abdominal cavity and in the presence of the abdominal wall on the measurements and variations of ε' and ε''. They advanced the previous models of effective complex permittivity of a multilayer inhomogeneous medium, by estimating...... an analytical model that accounts for reflections between the layers and calculates the attenuation that the wave encounters as it traverses the GI tract and the abdominal wall. They observed that deviation from the specified nominal layer thicknesses due to non-geometric boundaries of GI tract morphometric...

  18. Theoretical Study on Measure of Hydrogen Bonding Strength: R-C≡N…pyrrole Complexes

    Institute of Scientific and Technical Information of China (English)

    史福强; 安静仪; 俞稼镛

    2005-01-01

    The R-C≡N…pyrrole (R=H, CH3, CH2F, CHF2, CF3, NH2, BH2, OH, F, CH2Cl, CHCl2, CCl3, Li, Na) complexes were used as simple model systems for measuring hydrogen-bonding strength. Density functional theory at the B3LYP/6-311++G** level was applied to optimize the geometries of the complexes and monomers. Measures of hydrogen-bonding strength based on geometrical and topological parameters, derived from AIM theory, were analyzed. Additionally, natural bond orbital (NBO) analysis and frequency calculations were performed. From the computational results it was found that the electron density at the N-H bond critical points was also strictly correlated with the hydrogen-bonding strength.

  19. Information entropy to measure the spatial and temporal complexity of solute transport in heterogeneous porous media

    Science.gov (United States)

    Li, Weiyao; Huang, Guanhua; Xiong, Yunwu

    2016-04-01

    The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and porous media make solute transport in the medium more complicated still. An appropriate method to describe this complexity is essential when studying solute transport and conversion in porous media. Information entropy can measure uncertainty and disorder; we therefore used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point sources, the one-dimensional entropy of solute concentration increased at first and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increased, which resulted in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the solute entropy obtained from continuous input was higher than that from instantaneous input. Due to the increase of the average length of lithofacies, media continuity increased, flow and
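
    As a minimal illustration of the core quantity (a sketch in Python, not the authors' code; grid and field values are invented), the concentration field is normalized to a discrete probability distribution and its Shannon entropy is computed; the entropy grows as the plume spreads over more cells:

        import numpy as np

        def concentration_entropy(c, eps=1e-12):
            # Normalize the solute concentration field to p_i = c_i / sum(c),
            # then return its Shannon entropy (higher = more spread / complex).
            c = np.asarray(c, dtype=float).ravel()
            p = c / (c.sum() + eps)
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        # Toy plumes on a 50 x 50 grid: compact vs. dispersed Gaussian
        x, y = np.meshgrid(np.arange(50), np.arange(50))
        compact = np.exp(-((x - 25)**2 + (y - 25)**2) / (2 * 2.0**2))
        dispersed = np.exp(-((x - 25)**2 + (y - 25)**2) / (2 * 10.0**2))
        print(concentration_entropy(compact))    # lower entropy
        print(concentration_entropy(dispersed))  # higher entropy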

  20. FALCON or how to compute measures time efficiently on dynamically evolving dense complex networks?

    Science.gov (United States)

    Franke, R; Ivanova, G

    2014-02-01

    A large number of topics in biology, medicine, neuroscience, psychology and sociology can be generally described via complex networks in order to investigate fundamental questions of structure, connectivity, information exchange and causality. Especially, research on biological networks like functional spatiotemporal brain activations and changes, caused by neuropsychiatric pathologies, is promising. When analyzing such complex networks, the calculation of meaningful measures can be very time-consuming, depending on their size and structure. Even worse, in many labs only standard desktop computers are available to perform those calculations. Numerous investigations on complex networks concern huge but sparsely connected network structures, where most network nodes are connected to only a few others. Currently, there are several libraries available to tackle this kind of network. A problem arises when not just a few big and sparse networks have to be analyzed, but hundreds or thousands of smaller and conceivably dense networks (e.g. when measuring brain activation over time). Then every minute per network is crucial. For these cases there are several possibilities to use standard hardware more efficiently, and it is not sufficient to simply apply algorithms designed for sparse graphs to dense ones. This article introduces the new library FALCON, developed especially for the exploration of dense complex networks. Currently, it offers 12 different measures (such as clustering coefficients), each for undirected-unweighted, undirected-weighted and directed-unweighted networks. It uses a multi-core approach in combination with comprehensive code and hardware optimizations. There is an alternative massively parallel GPU implementation for the most time-consuming measures, too. Finally, a comparative benchmark is integrated to support the choice of the most suitable library for a particular network issue. Copyright © 2013 Elsevier Inc. All rights reserved.
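
    FALCON's internals are not given in the abstract; the sketch below (function name and data invented) only illustrates why dense networks favour matrix-based algorithms: the per-node clustering coefficient is computed from the adjacency matrix with BLAS-backed products instead of neighbour-list iteration:

        import numpy as np

        def clustering_coefficients_dense(A):
            # Clustering coefficients of an undirected, unweighted graph given
            # as a dense adjacency matrix: triangles / possible triangles.
            A = np.asarray(A, dtype=float)
            deg = A.sum(axis=1)
            triangles = np.diag(A @ A @ A) / 2.0   # closed triplets per node
            possible = deg * (deg - 1) / 2.0       # open triplets per node
            with np.errstate(divide="ignore", invalid="ignore"):
                return np.where(possible > 0, triangles / possible, 0.0)

        rng = np.random.default_rng(0)
        A = (rng.random((200, 200)) < 0.4).astype(float)  # dense random graph
        A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
        print(clustering_coefficients_dense(A).mean())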

  1. Quantifying the improvement of surrogate indices of hepatic insulin resistance using complex measurement techniques.

    Directory of Open Access Journals (Sweden)

    John G Hattersley

    Full Text Available We evaluated the ability of simple and complex surrogate-indices to identify individuals from an overweight/obese cohort with hepatic insulin-resistance (HEP-IR). Five indices, one previously defined and four newly generated through step-wise linear regression, were created against a single-cohort sample of 77 extensively characterised participants with the metabolic syndrome (age 55.6 ± 1.0 years, BMI 31.5 ± 0.4 kg/m(2); 30 males). HEP-IR was defined by measuring endogenous-glucose-production (EGP) with [6-6(2)H(2)] glucose during fasting and euglycemic-hyperinsulinemic clamps and expressed as EGP*fasting plasma insulin. Complex measures were incorporated into the model, including various non-standard biomarkers and the measurement of body-fat distribution and liver-fat, to further improve the predictive capability of the index. Validation was performed against a data set of the same subjects after an isoenergetic dietary intervention (4 arms, diets varying in protein and fiber content versus control). All five indices produced comparable prediction of HEP-IR, explaining 39-56% of the variance, depending on regression variable combination. The validation of the regression equations showed little variation between the different proposed indices (r(2) = 27-32%) on a matched dataset. New complex indices encompassing advanced measurement techniques offered an improved correlation (r = 0.75, P<0.001). However, when validated against the alternative dataset all indices performed comparably with the standard homeostasis model assessment for insulin resistance (HOMA-IR) (r = 0.54, P<0.001). Thus, simple estimates of HEP-IR performed comparably to more complex indices and could be an efficient and cost effective approach in large epidemiological investigations.

  2. Randomness Representation of Turbulence in Canopy Flows Using Kolmogorov Complexity Measures

    Directory of Open Access Journals (Sweden)

    Dragutin Mihailović

    2017-09-01

    Full Text Available Turbulence is often expressed in terms of either irregular or random fluid flows, without quantification. In this paper, a methodology to evaluate the randomness of turbulence using measures based on the Kolmogorov complexity (KC) is proposed. This methodology is applied to experimental data from a turbulent flow developing in a laboratory channel with a canopy of three different densities. The methodology is also compared with the traditional approach based on classical turbulence statistics.
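
    A common way to estimate Kolmogorov complexity in practice (and a plausible reading of KC-based measures; this sketch is not the authors' exact procedure) is to binarize the velocity record around its median and normalize the Lempel-Ziv phrase count:

        import numpy as np

        def lempel_ziv_complexity(bits):
            # LZ76 phrase count: scan left to right, starting a new phrase
            # whenever the current substring has not occurred before.
            s = "".join(map(str, bits))
            i, c, n = 0, 0, len(s)
            while i < n:
                l = 1
                while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                    l += 1
                c += 1
                i += l
            return c

        def normalized_kc(x):
            # Binarize around the median; normalize by a random sequence's
            # asymptotic phrase count n / log2(n), so random series give ~1.
            x = np.asarray(x, dtype=float)
            b = (x > np.median(x)).astype(int)
            n = len(b)
            return lempel_ziv_complexity(b) * np.log2(n) / n

        rng = np.random.default_rng(1)
        print(normalized_kc(np.sin(np.linspace(0, 20, 2000))))  # regular: low
        print(normalized_kc(rng.random(2000)))                  # random: ~1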

  3. An Assessment of Wind Plant Complex Flows Using Advanced Doppler Radar Measurements

    Science.gov (United States)

    Gunter, W. S.; Schroeder, J.; Hirth, B.; Duncan, J.; Guynes, J.

    2015-12-01

    As installed wind energy capacity continues to steadily increase, the need for comprehensive measurements of wind plant complex flows to further reduce the cost of wind energy has been well advertised by the industry as a whole. Such measurements serve diverse perspectives including resource assessment, turbine inflow and power curve validation, wake and wind plant layout model verification, operations and maintenance, and the development of future advanced wind plant control schemes. While various measurement devices have been matured for wind energy applications (e.g. meteorological towers, LIDAR, SODAR), this presentation will focus on the use of advanced Doppler radar systems to observe the complex wind flows within and surrounding wind plants. Advanced Doppler radars can provide the combined advantage of a large analysis footprint (tens of square kilometers) with rapid data analysis updates (a few seconds to one minute) using both single- and dual-Doppler data collection methods. This presentation demonstrates the utility of measurements collected by the Texas Tech University Ka-band (TTUKa) radars to identify complex wind flows occurring within and nearby operational wind plants, and provide reliable forecasts of wind speeds and directions at given locations (i.e. turbine or instrumented tower sites) 45+ seconds in advance. Radar-derived wind maps reveal commonly observed features such as turbine wakes and turbine-to-turbine interaction, high momentum wind speed channels between turbine wakes, turbine array edge effects, transient boundary layer flow structures (such as wind streaks, frontal boundaries, etc.), and the impact of local terrain. Operational turbine or instrumented tower data are merged with the radar analysis to link the observed complex flow features to turbine and wind plant performance.

  4. A Quasi-Optical Method for Measuring the Complex Permittivity of Materials.

    Science.gov (United States)

    1984-09-01

    millimeter wavelengths, waveguide, cavity, and various forms of quasi-optical methods are utilized to measure the complex permittivity of materials...conjunction with an interferometer, Fourier transform spectrometry can be utilized to derive the permittivity of materials (Ref. 17). Breeden and...pp. 75-84, 1971. [17] J. E. Chamberlain, J. E. Gibbs, and H. A. Gebbie, "Refractometry in the far infra-red using a two-beam interferometer," Nature

  5. Quantifying the improvement of surrogate indices of hepatic insulin resistance using complex measurement techniques.

    Science.gov (United States)

    Hattersley, John G; Möhlig, Matthias; Roden, Michael; Arafat, Ayman M; Loeffelholz, Christian V; Nowotny, Peter; Machann, Jürgen; Hierholzer, Johannes; Osterhoff, Martin; Khan, Michael; Pfeiffer, Andreas F H; Weickert, Martin O

    2012-01-01

    We evaluated the ability of simple and complex surrogate-indices to identify individuals from an overweight/obese cohort with hepatic insulin-resistance (HEP-IR). Five indices, one previously defined and four newly generated through step-wise linear regression, were created against a single-cohort sample of 77 extensively characterised participants with the metabolic syndrome (age 55.6 ± 1.0 years, BMI 31.5 ± 0.4 kg/m(2); 30 males). HEP-IR was defined by measuring endogenous-glucose-production (EGP) with [6-6(2)H(2)] glucose during fasting and euglycemic-hyperinsulinemic clamps and expressed as EGP*fasting plasma insulin. Complex measures were incorporated into the model, including various non-standard biomarkers and the measurement of body-fat distribution and liver-fat, to further improve the predictive capability of the index. Validation was performed against a data set of the same subjects after an isoenergetic dietary intervention (4 arms, diets varying in protein and fiber content versus control). All five indices produced comparable prediction of HEP-IR, explaining 39-56% of the variance, depending on regression variable combination. The validation of the regression equations showed little variation between the different proposed indices (r(2) = 27-32%) on a matched dataset. New complex indices encompassing advanced measurement techniques offered an improved correlation (r = 0.75, P<0.001). However, when validated against the alternative dataset all indices performed comparably with the standard homeostasis model assessment for insulin resistance (HOMA-IR) (r = 0.54, P<0.001). Thus, simple estimates of HEP-IR performed comparably to more complex indices and could be an efficient and cost effective approach in large epidemiological investigations.
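
    As a schematic of how such a surrogate index is built (synthetic data and predictor names are invented; the study's step-wise selection and clinical variables are not reproduced), the target HEP-IR = EGP * fasting plasma insulin is regressed on candidate predictors by least squares:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 77
        egp = rng.lognormal(1.0, 0.3, n)   # endogenous glucose production (toy)
        fpi = rng.lognormal(2.0, 0.5, n)   # fasting plasma insulin (toy)
        hep_ir = np.log(egp * fpi)         # HEP-IR target on a log scale

        # Candidate predictors: a noisy insulin proxy and a BMI-like score
        X = np.column_stack([
            np.log(fpi) + rng.normal(0, 0.2, n),
            rng.normal(31.5, 4.0, n),
        ])

        # One regression step; a step-wise procedure would add or drop
        # predictor columns according to the gain in explained variance.
        A = np.column_stack([np.ones(n), X])
        coef, *_ = np.linalg.lstsq(A, hep_ir, rcond=None)
        resid = hep_ir - A @ coef
        print(f"variance explained: {1 - resid.var() / hep_ir.var():.0%}")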

  6. The Born rule from a consistency requirement on hidden measurements in complex Hilbert space

    CERN Document Server

    Aerts, S

    2002-01-01

    We formalize the hidden measurement approach within the very general notion of an interactive probability model. We narrow down the model by assuming the state space of a physical entity is a complex Hilbert space and introduce the principle of consistent interaction which effectively partitions the space of apparatus states. The normalized measure of the set of apparatus states that interact with a pure state giving rise to a fixed outcome is shown to be in accordance with the probability obtained using the Born rule.

  7. An improvement on measure methods of the complexity theory and its applications

    Institute of Scientific and Technical Information of China (English)

    Wang Fu-Lai; Yang Hui-Huang

    2009-01-01

    A new method is proposed to transform the time series gained from a dynamic system into a symbolic series which extracts both overall and local information of the time series. Based on the transformation, two measures are defined to characterize the complexity of the symbolic series. The measures reflect the sensitive dependence of chaotic systems on initial conditions and the randomness of a time series, and thus can distinguish periodic or completely random series from chaotic time series even when the time series are not long. Finally, the logistic map and the two-parameter Hénon map are studied and the results are satisfactory.
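
    The paper's exact transformation is not given in the abstract; the sketch below implements the same idea with invented details: each value is mapped to a symbol combining its global quantile bin (overall information) with the local direction of change (local information), and the Shannon entropy of short symbol words serves as a complexity measure:

        import numpy as np
        from collections import Counter

        def symbolize(x, n_bins=4):
            # Symbol = global quantile bin of the value + local up/down move.
            x = np.asarray(x, dtype=float)
            edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
            bins = np.digitize(x, edges)
            direction = np.where(np.diff(x) >= 0, "u", "d")
            return [f"{b}{d}" for b, d in zip(bins[1:], direction)]

        def word_entropy(symbols, word_len=3):
            # Normalized Shannon entropy of overlapping symbol words.
            words = ["".join(symbols[i:i + word_len])
                     for i in range(len(symbols) - word_len + 1)]
            counts = np.array(list(Counter(words).values()), dtype=float)
            p = counts / counts.sum()
            return -(p * np.log(p)).sum() / np.log(len(words))

        def logistic(r, n=2000, x0=0.3):
            x = np.empty(n); x[0] = x0
            for i in range(n - 1):
                x[i + 1] = r * x[i] * (1 - x[i])
            return x

        print(word_entropy(symbolize(logistic(4.0))))  # chaotic: high
        print(word_entropy(symbolize(logistic(3.2))))  # periodic: low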

  8. RNACompress: Grammar-based compression and informational complexity measurement of RNA secondary structure

    Directory of Open Access Journals (Sweden)

    Chen Chun

    2008-03-01

    Full Text Available Abstract Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences together with their secondary structures. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) to present a robust and effective way for RNA structural data compression; (2) to design a suitable model to represent RNA secondary structure as well as to derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool
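
    The complexity measure is essentially a compression ratio. The sketch below conveys the idea with zlib as a generic stand-in (RNACompress itself uses a grammar-based model specific to RNA sequence/structure pairs, which is not reproduced here):

        import random
        import zlib

        def compression_complexity(text):
            # Informational complexity as compressed size / original size.
            raw = text.encode("ascii")
            return len(zlib.compress(raw, level=9)) / len(raw)

        # A repetitive hairpin structure vs. a random string over the
        # dot-bracket alphabet (toy inputs, not real RNA data)
        hairpin = "((((((((....))))))))" * 20
        random.seed(0)
        scrambled = "".join(random.choice("().") for _ in range(400))
        print(compression_complexity(hairpin))    # low: regular, compressible
        print(compression_complexity(scrambled))  # higher: less compressible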

  9. MEASURING OBJECT-ORIENTED SYSTEMS BASED ON THE EXPERIMENTAL ANALYSIS OF THE COMPLEXITY METRICS

    Directory of Open Access Journals (Sweden)

    J.S.V.R.S.SASTRY,

    2011-05-01

    Full Text Available Metrics are used to help a software engineer in quantitative analysis to assess the quality of the design before a system is built. The focus of object-oriented metrics is on the class, which is the fundamental building block of the object-oriented architecture. These metrics are focused on internal and external object structure. Internal object structure reflects the complexity of each individual entity, such as methods and classes. External complexity measures the interactions among entities, such as coupling and inheritance. This paper mainly focuses on a set of object-oriented metrics that can be used to measure the quality of an object-oriented design, covering two families of complexity metrics in the object-oriented paradigm: the MOOD metrics and the Lorenz & Kidd metrics. The MOOD metrics consist of method inheritance factor (MIF), coupling factor (CF), attribute inheritance factor (AIF), method hiding factor (MHF), attribute hiding factor (AHF), and polymorphism factor (PF). The Lorenz & Kidd metrics consist of number of operations overridden (NOO), number of operations added (NOA), and specialization index (SI). MOOD and Lorenz & Kidd metrics measurements are used mainly by designers and testers. Designers use these metrics to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the design. Testers use them to test the software and to gauge its complexity, the performance of the system, and the quality of the software. This paper reviews how the MOOD and Lorenz & Kidd metrics are validated, theoretically and empirically. In this paper, work has been done to explore the quality of design of software components using the object-oriented paradigm. A number of object-oriented metrics have been proposed in the literature for measuring design attributes such as inheritance, coupling, polymorphism etc. In this paper, metrics have been used to analyze various features of software components. Complexity of methods
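
    As a small worked example of one MOOD metric (the class layout is invented), the method inheritance factor is the fraction of available methods that are inherited rather than locally defined:

        def method_inheritance_factor(classes):
            # MIF = sum of inherited methods / sum of all available methods,
            # where `classes` maps a class name to (defined, inherited) sets.
            inherited = sum(len(inh) for _defined, inh in classes.values())
            total = sum(len(d) + len(inh) for d, inh in classes.values())
            return inherited / total if total else 0.0

        classes = {
            "Shape":  ({"area", "draw"}, set()),
            "Circle": ({"radius"}, {"area", "draw"}),
            "Square": ({"side", "draw"}, {"area"}),  # draw overridden locally
        }
        print(f"MIF = {method_inheritance_factor(classes):.2f}")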

  10. An information complexity index for probability measures on ℝ with all moments

    Science.gov (United States)

    Accardi, Luigi; Barhoumi, Abdessatar; Rhaima, Mohamed

    2016-08-01

    We prove that each probability measure on ℝ with all moments is canonically associated with (i) a ∗-Lie algebra and (ii) a complexity index labeled by pairs of natural integers. The measures with complexity index (0,K) consist of two disjoint classes: that of all measures with finite support and the semi-circle-arcsine class (the discussion in Sec. 4.1 motivates this name). The class C(μ) = (0, 0) coincides with the δ-measures in the finite support case and includes the semi-circle laws in the infinite support case. In the infinite support case, the class C(μ) = (0, 1) includes the arcsine laws, and the class C(μ) = (0, 2) appeared in central limit theorems of quantum random walks in the sense of Konno. The classes C(μ) = (0,K), with K ≥ 3, do not seem to be present in the literature. The class (1, 0) includes the Gaussian and Poisson measures, and the associated ∗-Lie algebra is the Heisenberg algebra. The class (2, 0) includes the non-standard (i.e. neither Gaussian nor Poisson) Meixner distributions, and the associated ∗-Lie algebra is a central extension of sl(2, ℝ). Starting from n = 3, the ∗-Lie algebra associated to the class (n, 0) is infinite dimensional and the corresponding classes include the higher powers of the standard Gaussian.

  11. Influence of olanzapine on QT variability and complexity measures of heart rate in patients with schizophrenia.

    Science.gov (United States)

    Bär, Karl-Jürgen; Koschke, Mandy; Berger, Sandy; Schulz, Steffen; Tancer, Manuel; Voss, Andreas; Yeragani, Vikram K

    2008-12-01

    Previous studies have shown that untreated patients with acute schizophrenia present with reduced heart rate variability and complexity as well as increased QT variability. This autonomic dysregulation might contribute to increased cardiac morbidity and mortality in these patients. However, the additional effects of newer antipsychotics on autonomic dysfunction have not been investigated, applying these new cardiac parameters to gain information about the regulation at sinus node level as well as the susceptibility to arrhythmias. We have investigated 15 patients with acute schizophrenia before and after established olanzapine treatment and compared them with matched controls. New nonlinear parameters (approximate entropy, compression entropy, fractal dimension) of heart rate variability and also the QT-variability index were calculated. In accordance with previous results, we have observed reduced complexity of heart rate regulation in untreated patients. Furthermore, the QT-variability index was significantly increased in unmedicated patients, indicating increased repolarization lability. Reduction of the heart rate regulation complexity after olanzapine treatment was seen, as measured by compression entropy of heart rate. No change in QT variability was observed after treatment. This study shows that unmedicated patients with acute schizophrenia experience autonomic dysfunction. Olanzapine treatment seems to have very little additional impact in regard to the QT variability. However, the decrease in heart rate complexity after olanzapine treatment suggests decreased cardiac vagal function, which may increase the risk for cardiac mortality. Further studies are warranted to gain more insight into cardiac regulation in schizophrenia and the effect of novel antipsychotics.
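
    Of the entropies named above, approximate entropy is the most standard; a compact reference implementation (parameter choices m = 2, r = 0.2 SD are common defaults, not necessarily the study's) looks like this:

        import numpy as np

        def approximate_entropy(x, m=2, r_frac=0.2):
            # ApEn (Pincus): lower values indicate a more regular signal.
            x = np.asarray(x, dtype=float)
            r = r_frac * x.std()

            def phi(m):
                n = len(x) - m + 1
                emb = np.array([x[i:i + m] for i in range(n)])
                # Chebyshev distances between all pairs of embedded vectors
                d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                c = (d <= r).mean(axis=1)   # includes self-match, so c > 0
                return np.mean(np.log(c))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(3)
        rr_regular = 800 + 20 * np.sin(np.linspace(0, 30, 500))  # R-R series, ms
        rr_irregular = 800 + 20 * rng.standard_normal(500)
        print(approximate_entropy(rr_regular))    # low: regular rhythm
        print(approximate_entropy(rr_irregular))  # high: complex rhythm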

  12. Glycinin-gum arabic complex formation: Turbidity measurement and charge neutralization analysis.

    Science.gov (United States)

    Dong, Die; Hua, Yufei

    2016-11-01

    The interaction between glycinin and anionic polysaccharides has gained considerable attention recently because of its scientific impact on the stability of acid soymilk systems. In this study, the formation of glycinin/gum arabic complexes driven by electrostatic interactions was investigated. Turbidity titrations at different glycinin/gum arabic ratios were conducted, and the critical pH values (pHφ1) at which insoluble complexes began forming were first determined. The corresponding pHφ1 values at glycinin/gum arabic ratios of 1:4, 1:2, 1:1, 2:1, 4:1 and 8:1 were 2.85, 3.25, 3.70, 4.40, 4.85 and 5.35, respectively. Afterwards, electromobilities of glycinin and gum arabic at pH values between 4.1 and 2.6 were measured, and charge densities (ZN) for glycinin and gum arabic were calculated based on soft particle analysis theory. Further analysis indicated that the product of the glycinin/gum arabic ratio (ρ) and the glycinin/gum arabic ZN ratio was approximately 1 at all pHφ1 values, revealing that charge neutralization is achieved when glycinin/gum arabic insoluble complexes begin forming. NaCl displayed multiple effects on glycinin/gum arabic complex formation according to turbidity and compositional analysis. The present study could provide basic guidance for the design of acid soymilk products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A Measure for Brain Complexity: Relating Functional Segregation and Integration in the Nervous System

    Science.gov (United States)

    Tononi, Giulio; Sporns, Olaf; Edelman, Gerald M.

    1994-05-01

    In brains of higher vertebrates, the functional segregation of local areas that differ in their anatomy and physiology contrasts sharply with their global integration during perception and behavior. In this paper, we introduce a measure, called neural complexity (C_N), that captures the interplay between these two fundamental aspects of brain organization. We express functional segregation within a neural system in terms of the relative statistical independence of small subsets of the system and functional integration in terms of significant deviations from independence of large subsets. C_N is then obtained from estimates of the average deviation from statistical independence for subsets of increasing size. C_N is shown to be high when functional segregation coexists with integration and to be low when the components of a system are either completely independent (segregated) or completely dependent (integrated). We apply this complexity measure in computer simulations of cortical areas to examine how some basic principles of neuroanatomical organization constrain brain dynamics. We show that the connectivity patterns of the cerebral cortex, such as a high density of connections, strong local connectivity organizing cells into neuronal groups, patchiness in the connectivity among neuronal groups, and prevalent reciprocal connections, are associated with high values of C_N. The approach outlined here may prove useful in analyzing complexity in other biological domains such as gene regulation and embryogenesis.
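
    For reference, the definition can be sketched as follows (in the paper's spirit; H denotes entropy, I integration, X_j^k the j-th subset of size k, and the angle brackets an average over subsets of size k; consult the paper for the exact conventions):

        I(X) = \sum_{i=1}^{n} H(x_i) - H(X)

        C_N(X) = \sum_{k=1}^{n} \left[ \frac{k}{n}\, I(X) - \left\langle I\!\left(X_j^k\right) \right\rangle_j \right]

    With this form, C_N vanishes when all components are independent (I(X) = 0), stays small when the system is uniformly integrated, and peaks when small subsets behave nearly independently while large subsets deviate strongly from independence.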

  14. Measures of metabolism and complexity in the brain of patients with disorders of consciousness.

    Science.gov (United States)

    Bodart, Olivier; Gosseries, Olivia; Wannez, Sarah; Thibaut, Aurore; Annen, Jitka; Boly, Melanie; Rosanova, Mario; Casali, Adenauer G; Casarotto, Silvia; Tononi, Giulio; Massimini, Marcello; Laureys, Steven

    2017-01-01

    Making an accurate diagnosis in patients with disorders of consciousness remains challenging. (18)F-fluorodeoxyglucose (FDG)-PET has been validated as a diagnostic tool in this population, and allows identifying unresponsive patients with a capacity for consciousness. In parallel, the perturbational complexity index (PCI), a new measure based on the analysis of the electroencephalographic response to transcranial magnetic stimulation, has also been suggested as a tool to distinguish between unconscious and conscious states. The aim of the study was to cross-validate FDG-PET and PCI, and to identify signs of consciousness in otherwise unresponsive patients. We jointly applied the Coma Recovery Scale-Revised, FDG-PET and PCI to assess 24 patients with non-acute disorders of consciousness or locked-in syndrome (13 male; 19-54 years old; 12 traumatic; 9 unresponsive wakefulness syndrome, 11 minimally conscious state; 2 emergence from the minimally conscious state, and 2 locked-in syndrome). FDG-PET and PCI provided congruent results in 22 patients, regardless of their behavioural diagnosis. Notably, FDG-PET and PCI revealed preserved metabolic rates and high complexity levels in four patients who were behaviourally unresponsive. We propose that jointly measuring the metabolic activity and the electrophysiological complexity of cortical circuits is a useful complement to the diagnosis and stratification of patients with disorders of consciousness.

  15. Application of a Dual-Arm Robot in Complex Sample Preparation and Measurement Processes.

    Science.gov (United States)

    Fleischer, Heidi; Drews, Robert Ralf; Janson, Jessica; Chinna Patlolla, Bharath Reddy; Chu, Xianghua; Klos, Michael; Thurow, Kerstin

    2016-10-01

    Automation systems with applied robotics have already been established in industrial applications for many years. In the field of life sciences, a comparable high level of automation can be found in the areas of bioscreening and high-throughput screening. Strong deficits still exist in the development of flexible and universal fully automated systems in the field of analytical measurement. Reasons are the heterogeneous processes with complex structures, which include sample preparation and transport, analytical measurements using complex sensor systems, and suitable data analysis and evaluation. Furthermore, the use of nonstandard sample vessels with various shapes and volumes results in an increased complexity. The direct use of existing automation solutions from bioscreening applications is not possible. A flexible automation system for sample preparation, analysis, and data evaluation is presented in this article. It is applied for the determination of cholesterol in biliary endoprosthesis using gas chromatography-mass spectrometry (GC-MS). A dual-arm robot performs both transport and active manipulation tasks to ensure human-like operation. This general robotic concept also enables the use of manual laboratory devices and equipment and is thus suitable in areas with a high standardization grade.

  16. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    Science.gov (United States)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell-time method with constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. Based on continuously scanning the normal spacing between the workpiece and the range finder with a laser range finder, a novel method is put forward to measure normal contour errors along the machining track while polishing a complex surface. The normal contour errors were measured dynamically, whereby the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine were verified and safety-checked for the MRF process. A unit for measuring the normal contour errors of a complex surface on-machine was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo-control method is presented to compensate for the normal contour errors. An experiment polishing a 180mm × 180mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was controlled to less than 10 μm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology described in this paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics for engineering application since 2014, where it is being used in a major national optical-engineering program for processing ultra-precision optical parts.

  17. Growing complex network of citations of scientific papers -- measurements and modeling

    CERN Document Server

    Golosovsky, M

    2016-01-01

    To quantify the mechanism of complex network growth we focus on the network of citations of scientific papers and use a combination of theoretical and experimental tools to uncover microscopic details of this network's growth. Namely, we develop a stochastic model of citation dynamics based on a copying/redirection/triadic-closure mechanism. In a complementary and coherent way, the model accounts both for the statistics of references of scientific papers and for their citation dynamics. Originating in empirical measurements, the model is cast in such a way that it can be verified quantitatively in every aspect. Such verification is performed by measuring the citation dynamics of Physics papers. The measurements revealed nonlinear citation dynamics, the nonlinearity being intricately related to network topology. The nonlinearity has far-reaching consequences including non-stationary citation distributions, diverging citation trajectories of similar papers, and runaways or "immortal papers" with infinite citation lifetime ...
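
    A toy growth step in the spirit of the copying/redirection mechanism (explicitly not the authors' calibrated model; all parameters are invented) can be simulated as follows:

        import numpy as np

        def grow_citation_network(n_papers, n_refs=5, p_copy=0.5, seed=0):
            # Each new paper cites up to n_refs earlier papers: either a
            # uniformly chosen paper, or (with probability p_copy) a paper
            # copied from the reference list of one it already cites.
            rng = np.random.default_rng(seed)
            refs = {0: set()}
            for new in range(1, n_papers):
                cited = set()
                while len(cited) < min(n_refs, new):
                    if cited and rng.random() < p_copy:
                        base = rng.choice(list(cited))
                        pool = refs[base] - cited
                        if pool:
                            cited.add(rng.choice(list(pool)))
                            continue
                    cited.add(int(rng.integers(new)))  # uniform fallback
                refs[new] = cited
            return refs

        refs = grow_citation_network(3000)
        indeg = np.zeros(3000, dtype=int)
        for cs in refs.values():
            for c in cs:
                indeg[c] += 1
        print(indeg.max(), int(np.median(indeg)))  # fat-tailed citation counts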

  18. Comparison of Different Measurement Techniques and a CFD Simulation in Complex Terrain

    Science.gov (United States)

    Schulz, Christoph; Hofsäß, Martin; Anger, Jan; Rautenberg, Alexander; Lutz, Thorsten; Cheng, Po Wen; Bange, Jens

    2016-09-01

    This paper deals with a comparison of data collected by measurements and by simulation for a complex-terrain test site in southern Germany. Lidar, met-mast and unmanned aerial vehicle (UAV) measurements of wind speed and direction, and Computational Fluid Dynamics (CFD) data, are compared to each other. The site is characterised regarding its flow features and its suitability as a wind turbine test field. A Delayed Detached-Eddy Simulation (DES) was employed, using measurement data to generate generic turbulent inflow. Good agreement of the wind profiles between the different approaches was reached. The terrain slope leads to a speed-up and a change of turbulence intensity, as well as to flow angle variations.

  19. Isolation, analytical measurements, and cell line studies of the iron-bryostatin-1 complex.

    Science.gov (United States)

    Plummer, Sydney; Manning, Thomas; Baker, Tess; McGreggor, Tysheon; Patel, Mehulkumar; Wylie, Greg; Phillips, Dennis

    2016-05-15

    Bryostatin-1 is a marine natural product that has demonstrated medicinal activity in pre-clinical and clinical trials for the treatment of cancer, Alzheimer's disease, the effects of stroke, and HIV. In this study, iron-bryostatin-1 was obtained using a pharmaceutical aquaculture technique developed by our lab that cultivates marine bacteria for marine natural product extraction. Analytical measurements ((1)H and (13)C NMR, mass spectrometry, and flame atomic absorption) were utilized to confirm the presence of an iron-bryostatin-1 complex. The iron-bryostatin-1 complex produced was then tested against the National Cancer Institute's 60 cell line panel. Adding iron to bryostatin-1 lowered the anti-cancer efficacy of the compound. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Measuring working memory in aphasia: Comparing performance on complex span and N-back tasks

    Directory of Open Access Journals (Sweden)

    Maria Ivanova

    2014-04-01

    No significant correlations were observed between performance on the complex span task and the N-back tasks. Furthermore, performance on the modified listening span was related to performance on the comprehension subtest of the QASA, while no relationship was found for the 2-back and 0-back tasks. Our results mirror studies in healthy controls that demonstrated no relationship between performance on the two tasks (Jaeggi et al., 2010; Kane et al., 2007). Thus, although N-back tasks seem similar to traditional complex span measures and may also index abilities related to cognitive processing, the evidence to date does not warrant their direct association with the construct of WM. Implications for future investigation of cognitive deficits in aphasia will be discussed.

  1. Estimation of Defect proneness Using Design complexity Measurements in Object- Oriented Software

    CERN Document Server

    Selvarani, R; Prasad, V Kamakshi

    2010-01-01

    Software engineering is continuously facing the challenges of growing complexity of software packages and increased levels of data on defects and drawbacks from the software production process. This makes a clarion call for inventions and methods which can enable more reusable, reliable, easily maintainable and high-quality software systems with deeper control of the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process. Implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process, such as better planning, assessment of improvements, resource allocation and reduction of unpredictability. Processes involving early detection of potential problems, productivity evaluation and the evaluation of external quality factors such as reusability, maintainability, defect proneness and complexity are of utmost importance. Here we d...

  2. Positron life time and annihilation Doppler broadening measurements on transition metal complexes

    Energy Technology Data Exchange (ETDEWEB)

    Levay, B. (Eoetvoes Lorand Tudomanyegyetem, Budapest (Hungary). Fizikai Kemiai es Radiologiai Tanszek); Varhelyi, Cs. (Babes-Bolyai Univ., Cluj (Romania)); Burger, K. (Eoetvoes Lorand Tudomanyegyetem, Budapest (Hungary). Szervetlen es Analitikai Kemiai Intezet)

    1982-01-01

    Positron lifetime and annihilation Doppler broadening measurements have been carried out on 44 solid coordination compounds. Several correlations have been found between the annihilation lifetime (τ1) and line-shape parameters (L) and the chemical structure of the compounds. Halide ligands were the most active towards positrons. This fact supports the assumption of the possible formation of an (e+X−) positron-halide bound state. The lifetime decreased and the annihilation energy spectra broadened with the increasing negative character of the halides. The aromatic base ligands affected the positron-halide interaction according to their basicity and space requirement, and thus they indirectly affected the annihilation parameters, too. In the planar and tetrahedral complexes the electron density on the central metal ion affected the annihilation parameters directly, while in the octahedral mixed complexes it had only an indirect effect through the polarization of the halide ligands.

  3. Simulation of complex glazing products; from optical data measurements to model based predictive controls

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-04-01

    Complex glazing systems such as venetian blinds, fritted glass and woven shades require more detailed optical and thermal input data for their components than specular non light-redirecting glazing systems. Various methods for measuring these data sets are described in this paper. These data sets are used in multiple simulation tools to model the thermal and optical properties of complex glazing systems. The output from these tools can be used to generate simplified rating values or as an input to other simulation tools such as whole building annual energy programs, or lighting analysis tools. I also describe some of the challenges of creating a rating system for these products and which factors affect this rating. A potential future direction of simulation and building operations is model based predictive controls, where detailed computer models are run in real-time, receiving data for an actual building and providing control input to building elements such as shades.

  4. Evolution in functional complexity of heart rate dynamics: a measure of cardiac allograft adaptability.

    Science.gov (United States)

    Kresh, J Y; Izrailtyan, I

    1998-09-01

    The capacity of self-organized systems to adapt is embodied in the functional organization of intrinsic control mechanisms. Evolution in functional complexity of heart rate variability (HRV) was used as measure of the capacity of the transplanted heart to express newly emergent regulatory order. In a cross-sectional study of 100 patients after (0-10 yr) heart transplantation (HTX), heart rate dynamics were assessed using pointwise correlation dimension (PD2) analysis. A new observation is that, commencing with the acute event of allograft transplantation, the dynamics of rhythm formation proceed through complex phase transitions. At implantation, the donor heart manifested metronome-like chronotropic behavior (PD2 approximately 1.0). At 11-100 days, dimensional complexity of HRV reached a peak (PD2 approximately 2.0) associated with resurgence in the high-frequency component (0.15-0.5 Hz) of the power spectral density. Subsequent dimensional loss to PD2 approximately 1.0 at 20-30 mo after HTX was followed by a progressive near-linear gain in system complexity, reaching PD2 approximately 3.0 7-10 yr after HTX. The "dynamic reorganization" in the allograft rhythm-generating system, seen in the first 100 days, is a manifestation of the adaptive capacity of intrinsic control mechanisms. The loss of HRV 2 yr after HTX implies a withdrawal of intrinsic autonomic control and/or development of an entrained dynamic pattern characteristic of extrinsic sympathetic input. The subsequent long-term progressive rise in dimensional complexity of HRV can be attributed to the restoration of a functional order patterning parasympathetic control. The recognition that the decentralized heart can restitute the multidimensional state space of HR generator dynamics independent of external autonomic signaling may provide a new perspective on principles that constitute homeodynamic regulation.

  5. Combining measurements to estimate properties and characterization extent of complex biochemical mixtures; applications to Heparan Sulfate

    Science.gov (United States)

    Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan

    2016-04-01

    Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.

  6. Recurrence-plot-based measures of complexity and their application to heart-rate-variability data.

    Science.gov (United States)

    Marwan, Norbert; Wessel, Niels; Meyerfeldt, Udo; Schirdewan, Alexander; Kurths, Jürgen

    2002-08-01

    The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.
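
    Among the vertical-structure measures, laminarity is the share of recurrent points that lie in vertical lines of at least a minimum length. A bare-bones sketch (threshold and map parameters chosen only for illustration, not taken from the paper):

        import numpy as np

        def recurrence_matrix(x, eps):
            # Thresholded recurrence plot of a scalar series.
            x = np.asarray(x, dtype=float)
            return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

        def laminarity(R, vmin=2):
            # Fraction of recurrent points forming vertical runs >= vmin.
            lengths = []
            for col in R.T:
                run = 0
                for v in col:
                    if v:
                        run += 1
                    elif run:
                        lengths.append(run); run = 0
                if run:
                    lengths.append(run)
            lengths = np.array(lengths)
            return lengths[lengths >= vmin].sum() / lengths.sum()

        def logistic(r, n=1000, x0=0.4):
            x = np.empty(n); x[0] = x0
            for i in range(n - 1):
                x[i + 1] = r * x[i] * (1 - x[i])
            return x

        # Near the band-merging point the dynamics show laminar (intermittent)
        # phases; fully developed chaos at r = 4.0 serves as a contrast.
        print(laminarity(recurrence_matrix(logistic(3.679), eps=0.01)))
        print(laminarity(recurrence_matrix(logistic(4.0), eps=0.01)))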

  7. The biparametric Fisher-Rényi complexity measure and its application to the multidimensional blackbody radiation

    Science.gov (United States)

    Puertas-Centeno, D.; Toranzo, I. V.; Dehesa, J. S.

    2017-04-01

    We introduce a biparametric Fisher-Rényi complexity measure for general probability distributions and we discuss its properties. This notion, which is composed of two entropy-like components (the Rényi entropy and the biparametric Fisher information), generalizes the basic Fisher-Shannon measure and the previous complexity quantifiers of Fisher-Rényi type. Then, we illustrate the usefulness of this notion by carrying out an information-theoretical analysis of the spectral energy density of a d-dimensional blackbody at temperature T. It is shown that the biparametric Fisher-Rényi measure of this quantum system has a universal character in the sense that it depends neither on temperature nor on any physical constant (e.g. Planck constant, speed of light, Boltzmann constant), but only on the space dimensionality d. Moreover, it decreases as d increases, but exhibits a nontrivial behavior for fixed d and a varying parameter, which points to a nonstandard structure of the blackbody's d-dimensional density distribution.
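
    For orientation, the basic Fisher-Shannon measure that this notion generalizes, and the Rényi entropy entering the generalization, can be sketched as follows (the authors' biparametric deformation itself is not reproduced here; d is the space dimension):

        F[\rho] = \int \frac{|\nabla \rho(\vec{x})|^{2}}{\rho(\vec{x})}\, d\vec{x}, \qquad
        R_{\lambda}[\rho] = \frac{1}{1-\lambda} \ln \int \rho(\vec{x})^{\lambda}\, d\vec{x}

        C_{FS}[\rho] = F[\rho] \cdot \frac{1}{2\pi e}\, e^{\frac{2}{d} S[\rho]}, \qquad
        S[\rho] = -\int \rho \ln \rho \, d\vec{x}

    Fisher-Rényi-type quantifiers replace the Shannon entropy power factor by the corresponding Rényi entropy power.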

  8. Operational Complexity of Supplier-Customer Systems Measured by Entropy—Case Studies

    Directory of Open Access Journals (Sweden)

    Ladislav Lukáš

    2016-04-01

    Full Text Available This paper discusses a unified entropy-based approach for the quantitative measurement of the operational complexity of company supplier-customer relations. Classical Shannon entropy is utilized. Besides this quantification tool, we also explore the relations between Shannon entropy and (c,d)-entropy in more detail. An analytic description of so-called iso-quant curves is given, too. We present five case studies, albeit in an anonymous setting, describing various details of general procedures for measuring the operational complexity of supplier-customer systems. In general, we assume a problem-oriented database exists, which contains detailed records of all product forecasts, orders and deliveries, both in quantity and time, scheduled and realized. Data processing detects important flow variations both in volumes and times, e.g., order vs. forecast, delivery vs. order, and actual vs. scheduled production. The unifying quantity used for entropy computation is the time gap between the actual delivery time and the order issue time, which is simply the lead time of inventory control models. After data consistency checks, histograms and empirical distribution functions are constructed, and finally the entropy, an information-theoretic measure of supplier-customer operational complexity, is calculated. Basic steps of the algorithm are mentioned briefly, too. Results of supplier-customer system analysis from selected Czech small and medium-sized enterprises (SMEs) are presented in various computational and managerial decision-making details. An enterprise is ranked as an SME if it has at most 250 employees and either its turnover does not exceed 50 million USD per year or its balance sheet total does not exceed 43 million USD per year.
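
    A minimal sketch of the unifying computation (field names and the synthetic order data are invented; real runs would start from the problem-oriented database described above):

        import numpy as np

        def lead_time_entropy(order_times, delivery_times, bin_width=1.0):
            # Shannon entropy (bits) of the empirical lead-time distribution,
            # lead time = actual delivery time - order issue time (days).
            lead = np.asarray(delivery_times) - np.asarray(order_times)
            bins = np.arange(lead.min(), lead.max() + bin_width, bin_width)
            counts, _ = np.histogram(lead, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(7)
        orders = np.cumsum(rng.exponential(2.0, 500))    # order issue times
        reliable = orders + rng.normal(10.0, 0.5, 500)   # tight lead times
        erratic = orders + rng.gamma(2.0, 5.0, 500)      # scattered lead times
        print(lead_time_entropy(orders, reliable))  # low: simple relation
        print(lead_time_entropy(orders, erratic))   # high: complex relation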

  9. Using continuous underway isotope measurements to map water residence time in hydrodynamically complex tidal environments

    Science.gov (United States)

    Downing, Bryan D.; Bergamaschi, Brian; Kendall, Carol; Kraus, Tamara; Dennis, Kate J.; Carter, Jeffery A.; von Dessonneck, Travis

    2016-01-01

    Stable isotopes present in water (δ2H, δ18O) have been used extensively to evaluate hydrological processes on the basis of parameters such as evaporation, precipitation, mixing, and residence time. In estuarine aquatic habitats, residence time (τ) is a major driver of biogeochemical processes, affecting trophic subsidies and conditions in fish-spawning habitats. But τ is highly variable in estuaries, owing to constant changes in river inflows, tides, wind, and water height, all of which combine to affect τ in unpredictable ways. It recently became feasible to measure δ2H and δ18O continuously, at a high sampling frequency (1 Hz), using diffusion sample introduction into a cavity ring-down spectrometer. To better understand the relationship of τ to biogeochemical processes in a dynamic estuarine system, we continuously measured δ2H and δ18O, nitrate and water quality parameters, on board a small, high-speed boat (5 to >10 m s–1) fitted with a hull-mounted underwater intake. We then calculated τ as is classically done using the isotopic signals of evaporation. The result was high-resolution (∼10 m) maps of residence time, nitrate, and other parameters that showed strong spatial gradients corresponding to geomorphic attributes of the different channels in the area. The mean measured value of τ was 30.5 d, with a range of 0–50 d. We used the measured spatial gradients in both τ and nitrate to calculate whole-ecosystem uptake rates, and the values ranged from 0.006 to 0.039 d–1. The capability to measure residence time over single tidal cycles in estuaries will be useful for evaluating and further understanding drivers of phytoplankton abundance, resolving differences attributable to mixing and water sources, explicitly calculating biogeochemical rates, and exploring the complex linkages among time-dependent biogeochemical processes in hydrodynamically complex environments such as estuaries.
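
    The uptake-rate calculation can be illustrated with a worked example (numbers invented, though within the reported range): assuming first-order nitrate loss C = C0 exp(-k tau) along the residence-time gradient, k is the slope of -ln(C) against tau:

        import numpy as np

        tau = np.array([5.0, 12.0, 20.0, 31.0, 44.0])   # residence time, d
        no3 = np.array([18.0, 15.6, 13.4, 10.5, 7.9])   # nitrate, umol/L
        k, _ = np.polyfit(tau, -np.log(no3), 1)
        print(f"uptake rate k = {k:.3f} per day")       # ~0.02, within 0.006-0.039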

  10. A Thorax Simulator for Complex Dynamic Bioimpedance Measurements With Textile Electrodes.

    Science.gov (United States)

    Ulbrich, Mark; Muhlsteff, Jens; Teichmann, Daniel; Leonhardt, Steffen; Walter, Marian

    2015-06-01

    Bioimpedance measurements on the human thorax are suitable for the assessment of body composition or hemodynamic parameters, such as stroke volume; they are non-invasive, easy to apply and inexpensive. When targeting personal healthcare scenarios, the technology can be integrated into textiles to increase ease, comfort and coverage of measurements. Bioimpedance is generally measured using two electrodes injecting low alternating currents (0.5-10 mA) and two additional electrodes to measure the corresponding voltage drop. The impedance is measured either spectroscopically (bioimpedance spectroscopy, BIS) between 5 kHz and 1 MHz, or continuously at a fixed frequency around 100 kHz (impedance cardiography, ICG). A thorax simulator is being developed for the testing and calibration of bioimpedance devices and other new developments. For the first time, it is possible to mimic the complete time-variant properties of the thorax during an impedance measurement. This includes the dynamic real part and dynamic imaginary part of the impedance with a peak-to-peak value of 0.2 Ω and an adjustable base impedance (24.6 Ω ≤ Z0 ≤ 51.6 Ω). Another novelty is adjustable complex electrode-skin contact impedances for up to 8 electrodes, to evaluate bioimpedance devices in combination with textile electrodes. In addition, an electrocardiographic signal is provided for cardiographic measurements, as used in ICG devices. This provides the possibility to generate physiologic impedance changes, and in combination with an ECG, all parameters of interest such as stroke volume (SV), pre-ejection period (PEP) or extracellular resistance (Re) can be simulated. The speed of all dynamic signals can be altered. The simulator was successfully tested with commercially available BIS and ICG devices, and the preset signals are measured with high correlation (r = 0.996).

  11. Measuring trauma: considerations for assessing complex and non-PTSD Criterion A childhood trauma.

    Science.gov (United States)

    McDonald, Molly K; Borntrager, Cameo F; Rostad, Whitney

    2014-01-01

    The current definition of a traumatic event in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5; American Psychiatric Association, 2013) may be too narrow to describe the myriad of difficult childhood experiences. Furthermore, youth may develop a distinct pattern of symptoms in relation to complex or multiple childhood trauma experiences, the proposed developmental trauma disorder (DTD; B. A. van der Kolk, 2005). We developed and utilized a new measure, the Potentially Traumatic Experiences Questionnaire (PTEQ), to assess patterns in childhood trauma exposure. We used two item formats (open ended vs. closed ended) in order to explore potential differences in reporting. Furthermore, we assessed for symptoms associated with DTD following exposure to complex childhood trauma in a sample of adolescents. Participants were 186 adolescents aged 18 and 19 years who were asked to report retrospectively on their difficult childhood experiences. The results showed that participants reported multiple events that would not be considered traumatic according to the DSM-5 Posttraumatic Stress Disorder Criterion A, and those who completed the PTEQ with closed-ended items reported more differentiated trauma types than participants who completed the open-ended questionnaire. Also, participants who reported multiple or chronic events were more likely to endorse symptoms associated with DTD. This study has implications for the diagnosis and treatment of complex trauma experiences in youth.

  12. Three-dimensional quantification of structures in trabecular bone using measures of complexity

    DEFF Research Database (Denmark)

    Marwan, Norbert; Kurths, Jürgen; Thomsen, Jesper Skovhus

    2009-01-01

    The study of pathological changes of bone is an important task in diagnostic procedures of patients with metabolic bone diseases such as osteoporosis as well as in monitoring the health state of astronauts during long-term space flights. The recent availability of high-resolution three-dimensional (3D) imaging of bone challenges the development of data analysis techniques able to assess changes of the 3D microarchitecture of trabecular bone. We introduce an approach based on spatial geometrical properties and define structural measures of complexity for 3D image analysis. These measures...

  13. Measurements of complex coupling coefficients in a ring resonator of a laser gyroscope

    Science.gov (United States)

    Bessonov, A. S.; Makeev, A. P.; Petrukhin, E. A.

    2017-07-01

    A method is proposed for measuring complex coupling coefficients in a ring optical resonator in the absence of an active gas mixture. A setup is described on which measurements are performed in the ring resonators of ring He-Ne lasers with a wavelength of 632.8 nm. A model of backscattering-field interference between conservative and dissipative sources is presented. Within the framework of this model, the unusual behaviour of backscattering fields observed experimentally in ring resonators is explained: a significant difference in the moduli of the coupling coefficients of counterpropagating waves and variation of the magnitude of the total phase shift over a wide range. It is proposed to use this approach as a metrological method when assembling and aligning the ring resonator of a laser gyroscope.

  15. Electrical Conductivity of Synthetic Quartz Crystals at High Temperature and Pressure from Complex Impedance Measurements

    Institute of Scientific and Technical Information of China (English)

    王多君; 李和平; 刘丛强; 易丽; 丁东业; 苏根利; 张卫刚

    2002-01-01

    An electrical conductivity measurement system for high-pressure conditions, using a multi-anvil high-pressure apparatus and an ac complex impedance method, was set up. With this system, we have successfully measured the electrical conductivity of synthetic quartz at pressures up to approximately 1.0 GPa in the temperature range 661-987 K. The values of electrical conductivity decrease with increasing pressure and increase with increasing temperature. The activation enthalpies for the α-quartz crystals are 1.10-1.28 eV. The electrical conduction of α-quartz is ionic, with Na ions moving in channels parallel to the c-axis being the predominant current carriers.
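
    The activation enthalpies quoted above follow from an Arrhenius analysis of the conductivity-temperature data. As a minimal illustration (not the authors' code; the data below are synthetic), the enthalpy can be recovered from the slope of ln σ versus 1/T:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_enthalpy(T_kelvin, sigma):
    """Fit ln(sigma) = ln(sigma0) - Ea / (k_B * T) and return Ea in eV."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(sigma), 1)
    return -slope * K_B

# Hypothetical conductivity values following an Arrhenius law with Ea = 1.2 eV
T = np.array([661.0, 750.0, 850.0, 987.0])       # K, the reported range
sigma = 1e2 * np.exp(-1.2 / (K_B * T))           # S/m, synthetic
print(f"Ea = {activation_enthalpy(T, sigma):.2f} eV")  # -> Ea = 1.20 eV
```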

  16. On complex Langevin dynamics and zeroes of the measure II: Fermionic determinant

    CERN Document Server

    Aarts, G; Sexty, D; Stamatescu, I -O

    2016-01-01

    Lattice QCD at non-vanishing chemical potential is studied using the complex Langevin equation (CLE). One of the conditions for the correctness of the results of the CLE is that the zeroes of the measure coming from the fermionic determinant are outside of the distribution of the configurations, or at least in a region where support for the distribution is very much suppressed. We investigate this issue for Heavy Dense QCD (HDQCD) and full QCD at high temperatures. In HDQCD it is found that the configurations move closest to the zeroes of the measure around the critical chemical potential of the onset transition, where the sign problem is diminished, but results remain largely unaffected. In full QCD at high temperatures the investigation of the spectrum of the Dirac operator yields a similar observation: the results are unaffected by the issue of the poles.

  17. Using Complexity Metrics With R-R Intervals and BPM Heart Rate Measures

    DEFF Research Database (Denmark)

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics - fractal (DFA) and recurrence (RQA) analyses - reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate ...

  18. Fine-grained permutation entropy as a measure of natural complexity for time series

    Institute of Scientific and Technical Information of China (English)

    Liu Xiao-Feng; Wang Yue

    2009-01-01

    In a recent paper [2002 Phys. Rev. Lett. 88 174102], Bandt and Pompe propose permutation entropy (PE) as a natural complexity measure for arbitrary time series which may be stationary or nonstationary, deterministic or stochastic. Their method is based on a comparison of neighbouring values. This paper further develops PE, and proposes the concept of fine-grained PE (FGPE) defined by the order pattern and magnitude of the difference between neighbouring values. This measure excludes the case where vectors with a distinct appearance are mistakenly mapped onto the same permutation type, and consequently FGPE becomes more sensitive to the dynamical change of time series than does PE, according to our simulation and experimental results.
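
    For reference, the original (coarse) Bandt-Pompe PE can be computed in a few lines; the sketch below is a generic implementation, not the authors' code, and the FGPE extension would additionally distinguish windows by the magnitude of their neighbouring differences:

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, order=3, normalize=True):
    """Bandt-Pompe permutation entropy from ordinal patterns of length `order`."""
    x = np.asarray(x)
    n = len(x) - order + 1
    counts = Counter(tuple(np.argsort(x[i:i + order])) for i in range(n))
    p = np.array(list(counts.values()), dtype=float) / n
    h = -np.sum(p * np.log(p))
    return h / math.log(math.factorial(order)) if normalize else h

# White noise visits all order patterns evenly, so normalized PE is close to 1
print(permutation_entropy(np.random.default_rng(0).normal(size=2000)))
```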

  19. Single-step stereolithography of complex anatomical models for optical flow measurements.

    Science.gov (United States)

    de Zélicourt, Diane; Pekkan, Kerem; Kitajima, Hiroumi; Frakes, David; Yoganathan, Ajit P

    2005-02-01

    Transparent stereolithographic rapid prototyping (RP) technology has already demonstrated in literature to be a practical model construction tool for optical flow measurements such as digital particle image velocimetry (DPIV), laser doppler velocimetry (LDV), and flow visualization. Here, we employ recently available transparent RP resins and eliminate time-consuming casting and chemical curing steps from the traditional approach. This note details our methodology with relevant material properties and highlights its advantages. Stereolithographic model printing with our procedure is now a direct single-step process, enabling faster geometric replication of complex computational fluid dynamics (CFD) models for exact experimental validation studies. This methodology is specifically applied to the in vitro flow modeling of patient-specific total cavopulmonary connection (TCPC) morphologies. The effect of RP machining grooves, surface quality, and hydrodynamic performance measurements as compared with the smooth glass models are also quantified.

  20. The discrete strategy improvement algorithm for parity games and complexity measures for directed graphs

    Directory of Open Access Journals (Sweden)

    Felix Canavoi

    2012-10-01

    For some time the discrete strategy improvement algorithm due to Jurdziński and Vöge had been considered a candidate for solving parity games in polynomial time. However, it has recently been proved by Oliver Friedmann that the strategy improvement algorithm requires super-polynomially many iteration steps for all popular local improvement rules, including switch-all (also with Fearnley's snare memorisation), switch-best, random-facet, random-edge, switch-half, least-recently-considered, and Zadeh's pivoting rule. We analyse the examples provided by Friedmann in terms of complexity measures for directed graphs such as treewidth, DAG-width, Kelly-width, entanglement, directed pathwidth, and cliquewidth. It is known that for every class of parity games on which one of these parameters is bounded, the winning regions can be efficiently computed. It turns out that with respect to almost all of these measures, the complexity of Friedmann's counterexamples is bounded, and indeed in most cases by very small numbers. This analysis strengthens in some sense Friedmann's results and shows that the discrete strategy improvement algorithm is even more limited than one might have thought. Not only does it require super-polynomial running time in the general case, where the problem of polynomial-time solvability is open, it even has super-polynomial lower time bounds on natural classes of parity games on which efficient algorithms are known.

  1. Complexity-Measure-Based Sequential Hypothesis Testing for Real-Time Detection of Lethal Cardiac Arrhythmias

    Directory of Open Access Journals (Sweden)

    Chen Szi-Wen

    2007-01-01

    A novel approach that employs a complexity-based sequential hypothesis testing (SHT) technique for real-time detection of ventricular fibrillation (VF) and ventricular tachycardia (VT) is presented. A dataset consisting of a number of VF and VT electrocardiogram (ECG) recordings drawn from the MIT-BIH database was adopted for such an analysis. It was split into two smaller datasets for algorithm training and testing, respectively. Each ECG recording was measured in a 10-second interval. For each recording, a number of overlapping windowed ECG data segments were obtained by shifting a 5-second window by a step of 1 second. During the windowing process, the complexity measure (CM) value was calculated for each windowed segment and the task of pattern recognition was then sequentially performed by the SHT procedure. A preliminary test conducted using the database produced optimal overall predictive accuracy of . The algorithm was also implemented on a commercial embedded DSP controller, permitting a hardware realization of real-time ventricular arrhythmia detection.
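
    The abstract does not spell out the complexity measure; in the VF/VT-detection literature the CM is commonly a Lempel-Ziv complexity of a binarized ECG window, so the following sketch (hypothetical, with a mean-threshold binarization) illustrates the windowed CM computation that would feed the SHT stage:

```python
import numpy as np

def lz_complexity(bits):
    """Lempel-Ziv (1976) production complexity, Kaspar-Schuster algorithm."""
    n = len(bits)
    i, k, l, c, k_max = 0, 1, 1, 1, 1
    while True:
        if bits[i + k - 1] != bits[l + k - 1]:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:          # a new phrase has been found
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
        else:
            k += 1
            if l + k > n:
                c += 1
                break
    return c

def windowed_cm(ecg, fs, win_s=5, step_s=1):
    """CM value per 5-s window shifted by 1 s, as in the described paradigm."""
    w, s = int(win_s * fs), int(step_s * fs)
    for start in range(0, len(ecg) - w + 1, s):
        seg = ecg[start:start + w]
        yield lz_complexity((seg > seg.mean()).astype(int).tolist())
```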

  2. The complex ion structure of warm dense carbon measured by spectrally resolved x-ray scattering

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, D.; Barbrel, B.; Falcone, R. W. [Department of Physics, University of California, Berkeley, California 94720 (United States); Vorberger, J. [Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Straße 38, 01187 Dresden (Germany); Helfrich, J.; Frydrych, S.; Ortner, A.; Otten, A.; Roth, F.; Schaumann, G.; Schumacher, D.; Siegenthaler, K.; Wagner, F.; Roth, M. [Institut für Kernphysik, Technische Universität Darmstadt, Schlossgartenstraße 9, 64289 Darmstadt (Germany); Gericke, D. O.; Wünsch, K. [Centre for Fusion, Space and Astrophysics, Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Bachmann, B.; Döppner, T. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Bagnoud, V.; Blažević, A. [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); and others

    2015-05-15

    We present measurements of the complex ion structure of warm dense carbon close to the melting line at pressures around 100 GPa. High-pressure samples were created by laser-driven shock compression of graphite and probed by intense laser-generated x-ray sources with photon energies of 4.75 keV and 4.95 keV. High-efficiency crystal spectrometers allow for spectrally resolving the scattered radiation. Comparing the ratio of elastically and inelastically scattered radiation, we find evidence for a complex bonded liquid that is predicted by ab-initio quantum simulations showing the influence of chemical bonds under these conditions. Using graphite samples of different initial densities we demonstrate the capability of spectrally resolved x-ray scattering to monitor the carbon solid-liquid transition at relatively constant pressure of 150 GPa. Showing first single-pulse scattering spectra from cold graphite of unprecedented quality recorded at the Linac Coherent Light Source, we demonstrate the outstanding possibilities for future high-precision measurements at 4th Generation Light Sources.

  3. [Sample pretreatment for the measurement of phthalate esters in complex matrices].

    Science.gov (United States)

    Liang, Jing; Zhuang, Wan'e; Lin, Fang; Yao, Wensong

    2014-11-01

    Sample pretreatment methods for the measurement of phthalate esters (PAEs) by gas chromatography-mass spectrometry (GC-MS) in various complex matrices, including sediment, soil, suspended particle matter, urban surface dust, Sinonovacula constricta, cosmetics, leather, plastic and coastal/estuarine seawater, were proposed. The pretreatment appropriate for GC-MS detection focused on the investigation and optimization of operating parameters for extraction and purification, such as the extraction solvent, the eluent and the adsorbent for solid-phase extraction. The results for the various complex matrices showed that methylene chloride was the best solvent for ultrasonic extraction when solid-liquid extraction was used; silica gel was the economical and practical adsorbent for solid-phase extraction purification; C18 was the most common adsorbent for preconcentration of PAEs in coastal/estuarine seawater samples; and a mixture of n-hexane and ethyl acetate in a certain proportion was the suitable SPE eluent. Under the optimized conditions, the spiked recoveries were above 58% and the relative standard deviations (RSDs) were less than 10.5% (n = 6). The detection limits (DL, 3σ) were in the range of 0.3 μg/kg (dibutyl phthalate) to 5.2 μg/kg (diisononyl phthalate) for sediment, and 6 ng/L (dipropyl phthalate) to 67 ng/L (diisodecyl phthalate) for coastal/estuarine seawater. The pretreatment method for the various complex matrices is well suited to the measurement of the 16 PAEs by GC-MS.

  4. Is absolute noninvasive temperature measurement by the Pr[MOE-DO3A] complex feasible.

    Science.gov (United States)

    Hentschel, M; Findeisen, M; Schmidt, W; Frenzel, T; Wlodarczyk, W; Wust, P; Felix, R

    2000-02-01

    Recently, the feasibility of the praseodymium complex of 10-(2-methoxyethyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triacetate (Pr[MOE-DO3A]) for non-invasive temperature measurement via 1H spectroscopy has been demonstrated. In particular, the suitability of the complex for non-invasive temperature measurements, including in vivo spectroscopy without spatial resolution as well as first spectroscopic imaging measurements at low temporal resolution (> or = 4 min) and high temporal resolution (breath hold, approximately 20 s), has been shown. As of today, calibration curves according to the particular experimental conditions are necessary. This work aims to clarify whether the Pr[MOE-DO3A] probe in conjunction with 1H-NMR spectroscopy allows non-invasive absolute temperature measurements with high accuracy. The measurement results from two different representative media, distilled water and human plasma, show a slight but significant dependence of the calibration curves on the surrounding medium. Calibration curves in water and plasma were derived for the temperature dependence of the chemical shift difference (F) between Pr[MOE-DO3A]'s OCH3 and water, with F = -(27.53 +/- 0.04) + (0.125 +/- 0.001) x T and F = -(27.61 +/- 0.02) + (0.129 +/- 0.001) x T, respectively, with F in ppm and T in degrees C. However, the differences are minuscule even for the highest spectral resolution of 0.001 ppm/pt, so that they are indistinguishable under practical conditions. The estimated temperature errors are +/- 0.18 degrees C for water and +/- 0.14 degrees C for plasma, and thus only slightly worse than the measurement accuracy of the fiber-optical temperature probe (+/- 0.1 degrees C). It can be concluded that the results obtained indicate the feasibility of the 1H spectroscopy method in conjunction with the Pr[MOE-DO3A] probe for absolute temperature measurements, with a maximum accuracy of +/- 0.2 degrees C.
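
    Given the water calibration quoted above, absolute temperature follows from a simple inversion of the linear relation; a worked example (the calibration constants are taken from the abstract, the shift value is hypothetical):

```python
def temperature_from_shift(F_ppm, a=-27.53, b=0.125):
    """Invert the water calibration F = a + b*T (F in ppm, T in deg C)."""
    return (F_ppm - a) / b

# A measured OCH3-water shift of -22.905 ppm corresponds to body temperature:
print(temperature_from_shift(-22.905))  # -> 37.0 deg C
```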

  5. Measurement of unsteady convection in a complex fenestration using laser interferometry

    Energy Technology Data Exchange (ETDEWEB)

    Poulad, M.E.; Naylor, D. [Ryerson Univ., Toronto, ON (Canada). Dept. of Mechanical and Industrial Engineering]; Oosthuizen, P.H. [Queen's Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering]

    2009-06-15

    Complex fenestration involving windows with between-panes louvered blinds is gaining interest as a means to control solar gains in buildings. However, the heat transfer performance of this type of shading system is not well understood, especially at high Rayleigh numbers. A Mach-Zehnder interferometer was used in this study to measure the unsteady convective heat transfer in a tall enclosure with between-panes blind that was heated to simulate absorbed solar radiation. Digital cinematography was combined with laser interferometry to make time-averaged measurements of unsteady and turbulent free convective heat transfer. This paper described the procedures used to measure the time-average local heat flux. Under strongly turbulent conditions, the average Nusselt number for the enclosure was found to compare well with empirical correlations. A total sampling time of about ten seconds was needed in this experiment to obtain a stationary time-average heat flux. The time-average heat flux was found to be relatively insensitive to the camera frame rate. The local heat flux was found to be unsteady and periodic. Heating of the blind made the flow more unstable, producing a higher amplitude heat flux variation than for the unheated blind condition. This paper reported on only a small set of preliminary measurements. This study is being extended to other blind angles and glazing spacings. The next phase will focus on flow visualization studies to characterize the nature of the flow. 8 refs., 2 tabs., 7 figs.

  6. What Are Complex eHealth Innovations and How Do You Measure Them? Position Paper.

    Science.gov (United States)

    Hübner, U

    2015-01-01

    eHealth and innovation are often regarded as synonyms - not least because eHealth technologies and applications are new to their users. This position paper challenges this view and aims at exploring the nature of eHealth innovation against the background of common definitions of innovation and facts from the biomedical and health informatics literature. A good understanding of what constitutes innovative eHealth developments allows the degree of innovation to be measured and interpreted. To this end, relevant biomedical and health informatics literature was searched mainly in Medline and the ACM digital library. This paper presents seven facts about implementing and applying new eHealth developments, drawing on the experience published in the literature. The facts are: 1. eHealth innovation is relative. 2. Advanced clinical practice is the yardstick. 3. Only used and usable eHealth technology can give birth to eHealth innovation. 4. One new single eHealth function does not make a complex eHealth innovation. 5. eHealth innovation is more evolution than revolution. 6. eHealth innovation is often triggered behind the scenes; and 7. There is no eHealth innovation without sociocultural change. The main conclusion of the seven facts is that eHealth innovations have many ingredients: newness, availability, advanced clinical practice with proven outcomes, use and usability, the supporting environment, other context factors and the stakeholder perspectives. Measuring eHealth innovation is thus a complex matter. To this end we propose the development of a composite score that expresses comprehensively the nature of eHealth innovation and that breaks down its complexity into the three dimensions: i) eHealth adoption, ii) partnership with advanced clinical practice, and iii) use and usability of eHealth. In order to better understand the momentum and mechanisms behind eHealth innovation the fourth dimension, iv) eHealth supporting services and means, needs to be studied.

  7. SOCIAL MEASUREMENT OF YOUTH’S HEALTH: DESIGNING OF INDICATORS OF COMPLEX SOCIOLOGICAL RESEARCH

    Directory of Open Access Journals (Sweden)

    Vitalii Valeriyevich Kulish

    2017-06-01

    Purpose. The article addresses the problem of the social measurement of modern youth's health. The subject of the analysis is the content of the concept, characteristics and indicators of the social health of young people, which make it possible to measure this status of the younger generation in contemporary Russian society with sociological research methods. The purpose of this work is to define the theoretical and methodological foundations of the sociological analysis of young people's social health and to substantiate its main indicators in the tools of complex sociological research. Methodology of the study. The research is based on the system approach, the complex approach, the logical-conceptual method and general scientific methods of research: comparative analysis, system analysis, construction of social indicators, and modeling. Results. The social health of young people is defined through the category "status" and is considered as an integrated indicator of the social quality of the younger generation. It is substantiated that the social health of youth is a status of a socio-demographic community in which it is able not only to adapt to the changing conditions of the social environment but is also ready to actively transform the surrounding reality, having the potential to resist destructive social phenomena and processes. The main indicators that allow measuring the social health of young people by sociological methods are determined: adaptability in the social environment, social activity in all spheres of public life, social orientation and significance of activity, regulation of behavior by social norms and universal values, creativity of thinking and behavior, and readiness for social integration and self-development. A system of social indicators for conducting a sociological study of social health in the historical memory, value orientations and everyday practices of young people has been developed.

  8. Local atomic structure modulations activate metal oxide as electrocatalyst for hydrogen evolution in acidic water.

    Science.gov (United States)

    Li, Yu Hang; Liu, Peng Fei; Pan, Lin Feng; Wang, Hai Feng; Yang, Zhen Zhong; Zheng, Li Rong; Hu, P; Zhao, Hui Jun; Gu, Lin; Yang, Hua Gui

    2015-08-19

    Modifications of local structure at atomic level could precisely and effectively tune the capacity of materials, enabling enhancement in the catalytic activity. Here we modulate the local atomic structure of a classical but inert transition metal oxide, tungsten trioxide, to be an efficient electrocatalyst for hydrogen evolution in acidic water, which has shown promise as an alternative to platinum. Structural analyses and theoretical calculations together indicate that the origin of the enhanced activity could be attributed to the tailored electronic structure by means of the local atomic structure modulations. We anticipate that suitable structure modulations might be applied on other transition metal oxides to meet the optimal thermodynamic and kinetic requirements, which may pave the way to unlock the potential of other promising candidates as cost-effective electrocatalysts for hydrogen evolution in industry.

  9. Measuring spatial patterns in floodplains: A step towards understanding the complexity of floodplain ecosystems: Chapter 6

    Science.gov (United States)

    Murray Scown,; Martin Thoms,; DeJager, Nathan R.; Gilvear, David J.; Greenwood, Malcolm T.; Thoms, Martin C.; Wood, Paul J.

    2016-01-01

    Floodplains can be viewed as complex adaptive systems (Levin, 1998) because they are comprised of many different biophysical components, such as morphological features, soil groups and vegetation communities, as well as being sites of key biogeochemical processing (Stanford et al., 2005). Interactions and feedbacks among the biophysical components often result in additional phenomena occurring over a range of scales, often in the absence of any controlling factors (sensu Hallet, 1990). This emergence of new biophysical features and rates of processing can lead to alternative stable states which feed back into floodplain adaptive cycles (cf. Hughes, 1997; Stanford et al., 2005). Interactions between different biophysical components, feedbacks, self emergence and scale are all key properties of complex adaptive systems (Levin, 1998; Phillips, 2003; Murray et al., 2014) and therefore will influence the manner in which we study and view spatial patterns. Measuring the spatial patterns of floodplain biophysical components is a prerequisite to examining and understanding these ecosystems as complex adaptive systems. Elucidating relationships between pattern and process, which are intrinsically linked within floodplains (Ward et al., 2002), is dependent upon an understanding of spatial pattern. This knowledge can help river scientists determine the major drivers, controllers and responses of floodplain structure and function, as well as the consequences of altering those drivers and controllers (Hughes and Cass, 1997; Whited et al., 2007). Interactions and feedbacks between physical, chemical and biological components of floodplain ecosystems create and maintain a structurally diverse and dynamic template (Stanford et al., 2005). This template influences subsequent interactions between components that consequently affect system trajectories within floodplains (sensu Bak et al., 1988). Constructing and evaluating models used to predict floodplain ecosystem responses to ...

  10. A physicochemical study of Al(+3) interactions with edible seaweed biomass in acidic waters.

    Science.gov (United States)

    Lodeiro, Pablo; López-García, Marta; Herrero, Luz; Barriada, José L; Herrero, Roberto; Cremades, Javier; Bárbara, Ignacio; Sastre de Vicente, Manuel E

    2012-09-01

    In this article, a study of the Al(+3) interactions in acidic waters with biomass of different edible seaweeds: brown (Fucus vesiculosus, Saccorhiza polyschides), red (Mastocarpus stellatus, Gelidium sesquipedale, Chondrus crispus), and green (Ulva rigida, Codium tomentosum), has been performed. The influence of both, the initial concentration of metal and the solution pH, on the Al-uptake capacity of the biomass has been analyzed. From preliminary tests, species Fucus vesiculosus and Gelidium sesquipedale have been selected for a more exhaustive analysis. Sorption kinetic studies demonstrated that 60 min are enough to reach equilibrium. The intraparticle diffusion model has been used to describe kinetic data. Equilibrium studies have been carried out at pH values of 1, 2.5, and 4. Langmuir isotherms showed that the best uptake values, obtained at pH 4, were 33 mg/g for F. vesiculosus and 9.2 mg/g for G. sesquipedale. These edible seaweeds have been found particularly effective in binding aluminum metal ions for most of the conditions tested. Physicochemical data reported at these low pH values could be of interest, not only in modeling aluminum-containing antacids-food pharmacokinetic processes produced in the stomach (pH values 1 to 3) but in remediation studies in acidic waters. Aluminum is thought to be linked to neurological disruptions such as Alzheimer's disease. In this article, the adsorption ability of different types of edible seaweeds toward aluminum has been studied. The choice of low pH values is due to the fact that stomach region is acidic with a pH value between 1 and 3 as a consequence of hydrochloric secretion; so physicochemical data reported in this study could be of interest in modeling drug-food interactions, in particular those referring to aluminum-containing antacids-food pharmacokinetic processes produced in the gastrointestinal tract.
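
    The uptake values quoted above come from Langmuir fits; as an illustration (synthetic data, not the study's measurements), the isotherm parameters can be recovered by non-linear least squares, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, b):
    """Langmuir isotherm: qe = q_max * b * Ce / (1 + b * Ce)."""
    return q_max * b * Ce / (1.0 + b * Ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) at pH 4
Ce = np.array([5.0, 20.0, 50.0, 100.0, 200.0])
qe = langmuir(Ce, 33.0, 0.05) + np.random.default_rng(1).normal(0.0, 0.3, Ce.size)

(q_max, b), _ = curve_fit(langmuir, Ce, qe, p0=(30.0, 0.01))
print(f"q_max = {q_max:.1f} mg/g, b = {b:.3f} L/mg")  # q_max near the quoted 33 mg/g
```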

  11. A fluorescence anisotropy method for measuring protein concentration in complex cell culture media.

    Science.gov (United States)

    Groza, Radu Constantin; Calvet, Amandine; Ryder, Alan G

    2014-04-22

    The rapid, quantitative analysis of the complex cell culture media used in biopharmaceutical manufacturing is of critical importance. Requirements for cell culture media composition profiling, or changes in specific analyte concentrations (e.g. amino acids in the media or product protein in the bioprocess broth) often necessitate the use of complicated analytical methods and extensive sample handling. Rapid spectroscopic methods like multi-dimensional fluorescence (MDF) spectroscopy have been successfully applied for the routine determination of compositional changes in cell culture media and bioprocess broths. Quantifying macromolecules in cell culture media is a specific challenge as there is a need to implement measurements rapidly on the prepared media. However, the use of standard fluorescence spectroscopy is complicated by the emission overlap from many media components. Here, we demonstrate how combining anisotropy measurements with standard total synchronous fluorescence spectroscopy (TSFS) provides a rapid, accurate quantitation method for cell culture media. Anisotropy provides emission resolution between large and small fluorophores while TSFS provides a robust measurement space. Model cell culture media was prepared using yeastolate (2.5 mg mL(-1)) spiked with bovine serum albumin (0 to 5 mg mL(-1)). Using this method, protein emission is clearly discriminated from background yeastolate emission, allowing for accurate bovine serum albumin (BSA) quantification over a 0.1 to 4.0 mg mL(-1) range with a limit of detection (LOD) of 13.8 μg mL(-1).
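
    An LOD like the one quoted typically follows the 3-sigma convention (three times the blank standard deviation divided by the calibration slope). A hypothetical numerical check, with calibration points and a blank standard deviation chosen here only to reproduce a value of that order:

```python
import numpy as np

# Hypothetical anisotropy-derived calibration: signal vs. BSA concentration (mg/mL)
conc = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
signal = np.array([0.020, 0.100, 0.200, 0.400, 0.800])
slope, intercept = np.polyfit(conc, signal, 1)

sd_blank = 0.00092                      # assumed blank standard deviation
lod = 3.0 * sd_blank / slope            # 3-sigma limit of detection, mg/mL
print(f"LOD = {lod * 1000:.1f} ug/mL")  # -> ~13.8 ug/mL
```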

  12. Vertical profiles of urban aerosol complex refractive index in the frame of ESQUIF airborne measurements

    Directory of Open Access Journals (Sweden)

    J.-C. Raut

    2007-07-01

    A synergy between lidar, sunphotometer and in situ measurements has been applied to airborne observations performed during the Etude et Simulation de la QUalité de l'air en Ile-de-France (ESQUIF) project, enabling the retrieval of vertical profiles of the aerosol complex refractive index (ACRI) and single-scattering albedo with a vertical resolution of 200 m over the Paris area. The average value over the entire planetary boundary layer (PBL) for the ACRI is close to 1.51(±0.02) - i0.017(±0.003) at 532 nm. The single-scattering albedo of the corresponding aerosols is found to be ~0.9 at the same wavelength. A good agreement is found with previous studies for urban aerosols. A comparison of vertical profiles of ACRI with simulations combining in situ measurements and relative humidity (RH) profiles has highlighted a modification in aerosol optical properties linked to their history and the origin of the air mass. The determination of the ACRI in the atmospheric column enabled us to retrieve vertical profiles of the extinction coefficient in accordance with lidar profile measurements.

  13. Growing complex network of citations of scientific papers: Modeling and measurements.

    Science.gov (United States)

    Golosovsky, Michael; Solomon, Sorin

    2017-01-01

    We consider the network of citations of scientific papers and use a combination of the theoretical and experimental tools to uncover microscopic details of this network growth. Namely, we develop a stochastic model of citation dynamics based on the copying-redirection-triadic closure mechanism. In a complementary and coherent way, the model accounts both for statistics of references of scientific papers and for their citation dynamics. Originating in empirical measurements, the model is cast in such a way that it can be verified quantitatively in every aspect. Such validation is performed by measuring citation dynamics of physics papers. The measurements revealed nonlinear citation dynamics, the nonlinearity being intricately related to network topology. The nonlinearity has far-reaching consequences including nonstationary citation distributions, diverging citation trajectories of similar papers, runaways or "immortal papers" with infinite citation lifetime, etc. Thus nonlinearity in complex network growth is our most important finding. In a more specific context, our results can be a basis for quantitative probabilistic prediction of citation dynamics of individual papers and of the journal impact factor.
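
    A toy version of the copying-redirection-triadic closure mechanism (a sketch of the idea, not the authors' calibrated model) already produces the heavy-tailed, "runaway" citation counts described:

```python
import random

def grow(n_papers=3000, refs_per_paper=5, p_copy=0.5, seed=0):
    """Each new paper cites random earlier papers and, with probability
    p_copy, also copies references of a cited paper (triadic closure)."""
    rng = random.Random(seed)
    refs, cites = [[]], [0]
    for new in range(1, n_papers):
        targets = set()
        while len(targets) < min(refs_per_paper, new):
            t = rng.randrange(new)              # direct citation
            targets.add(t)
            for r in refs[t]:                   # redirection to t's references
                if len(targets) < refs_per_paper and rng.random() < p_copy:
                    targets.add(r)
        refs.append(sorted(targets))
        cites.append(0)
        for t in targets:
            cites[t] += 1
    return cites

cites = grow()
print(max(cites), sorted(cites)[len(cites) // 2])  # heavy tail: max >> median
```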

  14. Estimating the Effects of Sensor Spacing on Peak Wind Measurements at Launch Complex 39

    Science.gov (United States)

    Merceret, Francis J.

    1999-01-01

    This paper presents results of an empirical study to estimate the measurement error in the peak wind speed at Shuttle Launch Complex 39 (LC-39) which results from the measurement being made by sensors 1,300 feet away. Quality controlled data taken at a height of 30 feet from an array of sensors at the Shuttle Landing Facility (SLF) were used to model differences of peak winds as a function of separation distance and time interval. The SLF data covered wind speeds from less than ten to more than 25 knots. Winds measured at the standard LC-39 site at the normal height of 60 feet were used to verify the applicability of the model to the LC-39 situation. The error in the peak wind speed resulting from separation of the sensor from the target site obeys a power law as a function of separation distance and varies linearly with mean wind speed. At large separation distances, the error becomes a constant fraction of the mean wind speed as the separation function reaches an asymptotic value. The asymptotic average of the mean of the absolute difference in the peak wind speed between the two locations is about twelve percent of the mean wind speed. The distribution of the normalized absolute differences is half-Gaussian.
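
    The functional form described (a power law in separation distance, linear in mean speed, saturating at roughly 12% of the mean speed) can be captured by a small helper; the coefficients below are hypothetical placeholders, since the fitted constants are not given in the abstract:

```python
import numpy as np

def peak_wind_error(distance_ft, mean_speed_kt, a=0.012, p=0.35, asym=0.12):
    """Power-law error model, capped at the asymptotic fraction of mean speed."""
    return np.minimum(a * distance_ft**p, asym) * mean_speed_kt

# Expected peak-speed error for a sensor 1,300 ft from the target site:
print(peak_wind_error(1300.0, 20.0))  # -> ~2.4 kt (12% of a 20 kt mean wind)
```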

  15. Evaluating the feasibility of complex interventions in mental health services: standardised measure and reporting guidelines.

    Science.gov (United States)

    Bird, Victoria J; Le Boutillier, Clair; Leamy, Mary; Williams, Julie; Bradstreet, Simon; Slade, Mike

    2014-01-01

    The feasibility of implementation is insufficiently considered in clinical guideline development, leading to human and financial resource wastage. To develop (a) an empirically based standardised measure of the feasibility of complex interventions for use within mental health services and (b) reporting guidelines to facilitate feasibility assessment. A focused narrative review of studies assessing implementation blocks and enablers was conducted with thematic analysis and vote counting used to determine candidate items for the measure. Twenty purposively sampled studies (15 trial reports, 5 protocols) were included in the psychometric evaluation, spanning different interventions types. Cohen's kappa (κ) was calculated for interrater reliability and test-retest reliability. In total, 95 influences on implementation were identified from 299 references. The final measure - Structured Assessment of FEasibility (SAFE) - comprises 16 items rated on a Likert scale. There was excellent interrater (κ = 0.84, 95% CI 0.79-0.89) and test-retest reliability (κ = 0.89, 95% CI 0.85-0.93). Cost information and training time were the two influences least likely to be reported in intervention papers. The SAFE reporting guidelines include 16 items organised into three categories (intervention, resource consequences, evaluation). A novel approach to evaluating interventions, SAFE, supplements efficacy and health economic evidence. The SAFE reporting guidelines will allow feasibility of an intervention to be systematically assessed.

  17. PAFit: A Statistical Method for Measuring Preferential Attachment in Temporal Complex Networks.

    Directory of Open Access Journals (Sweden)

    Thong Pham

    Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum likelihood based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman's original method which had evidently gone unnoticed since its publication over a decade ago.
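
    For contrast with PAFit's likelihood approach, the simpler Jeong-style histogram estimator mentioned above divides, per time window, the share of new edges received by degree-k nodes by the share of nodes having degree k. A generic sketch (hypothetical data structure, not the PAFit package API):

```python
from collections import Counter

def attachment_kernel(snapshots):
    """Histogram estimate of the attachment kernel A_k.

    `snapshots` is an iterable of (degrees, new_edge_targets) pairs, where
    `degrees` maps node -> degree just before the new edges arrive."""
    gained, exposed = Counter(), Counter()
    for degrees, targets in snapshots:
        for node in targets:
            gained[degrees[node]] += 1
        exposed.update(degrees.values())
    return {k: gained[k] / exposed[k] for k in sorted(gained)}

# Two toy time steps: node degrees before arrival, then who got the new edges
print(attachment_kernel([({0: 1, 1: 2}, [1]), ({0: 1, 1: 3}, [1, 0])]))
```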

  18. Evaluation of indirect impedance for measuring microbial growth in complex food matrices.

    Science.gov (United States)

    Johnson, N; Chang, Z; Bravo Almeida, C; Michel, M; Iversen, C; Callanan, M

    2014-09-01

    The suitability of indirect impedance to accurately measure microbial growth in real food matrices was investigated. A variety of semi-solid and liquid food products were inoculated with Bacillus cereus, Listeria monocytogenes, Staphylococcus aureus, Lactobacillus plantarum, Pseudomonas aeruginosa, Escherichia coli, Salmonella Enteritidis, Candida tropicalis or Zygosaccharomyces rouxii, and CO2 production was monitored using a conductimetric (Don Whitley R.A.B.I.T.) system. The majority (80%) of food and microbe combinations produced a detectable growth signal. The linearity of conductance responses in selected food products was investigated and a good correlation (R(2) ≥ 0.84) was observed between inoculum levels and times to detection. Specific growth rate estimations from the data were sufficiently accurate for predictive modeling in some cases. This initial evaluation of the suitability of indirect impedance to generate microbial growth data in complex food matrices indicates significant potential for the technology as an alternative to plating methods.

  19. Entropy-based complexity measures for gait data of patients with Parkinson's disease

    Science.gov (United States)

    Afsar, Ozgur; Tirnakli, Ugur; Kurths, Juergen

    2016-02-01

    Shannon, Kullback-Leibler, and Klimontovich's renormalized entropies are applied as three different complexity measures on gait data of patients with Parkinson's disease (PD) and a healthy control group. We show that the renormalized entropy of variability of total reaction force of gait is a very efficient tool to compare patients with respect to disease severity. Moreover, it is a good risk predictor such that the sensitivity, i.e., the percentage of patients with PD who are correctly identified as having PD, increases from 25% to 67% while the Hoehn-Yahr stage increases from 2.5 to 3.0 (this stage goes from 0 to 5 as the disease severity increases). The renormalized entropy method for stride time variability of gait is found to correctly identify patients with a sensitivity of 80%, while the Shannon entropy and the Kullback-Leibler relative entropy can do this with a sensitivity of only 26.7% and 13.3%, respectively.
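
    Of the three measures, the Shannon and Kullback-Leibler entropies are straightforward to compute from histograms of the gait signal (Klimontovich's renormalized entropy additionally requires rescaling the reference distribution, which is omitted here). A generic sketch on synthetic stride-time data, not the study's code:

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def kullback_leibler(p, q):
    m = (p > 0) & (q > 0)
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

# Synthetic stride-time histograms (patient vs. reference) on shared bins
edges = np.linspace(0.8, 1.4, 31)
rng = np.random.default_rng(0)
p, _ = np.histogram(rng.normal(1.10, 0.06, 500), bins=edges)
q, _ = np.histogram(rng.normal(1.05, 0.03, 500), bins=edges)
p, q = p / p.sum(), q / q.sum()
print(shannon(p), kullback_leibler(p, q))
```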

  20. COMPLEX LONG TIME MEASUREMENTS CARRIED OUT ON LIGNITE HEAP ON SOKOLOVSKÉ UHELNÉ A.S.

    Directory of Open Access Journals (Sweden)

    Vlastimil MONI

    2013-04-01

    Long-term in situ measurements carried out on selected lignite (brown coal) heaps in the summer and winter seasons of 2012 and 2013 are described in this paper. The study was prepared in the frame of research project TAČR No. TA01020351 "Research of prediction possibilities of infusion occurrence and following brown coal fuel self-ignition", supported under the ALFA programme. The main goal is to report on the progress of the project, which is focused on the research, development, and verification of a complex methodology for early identification of the onset of an irreversible infusion state of lignite (brown coal) mass tending towards ignition (fire).

  1. Analysis of the existing Standard on Power performance measurement and its application in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Cuerva, A.

    1997-10-01

    There are several groups working on the improvement of the existing standard and recommendations on WECS power performance measurement and analysis. One of them, besides the one working in this project, is the MEASNET expert group, which is trying to adapt the main reference, IEC 1400-12 (Ref. [9]), to current requirements on technical quality and trueness. Within this group and the MEASNET one, many deficiencies have been detected in the procedure followed up to now. Several of them concern general aspects of the method (calculations, assumptions, etc.), but the most critical ones regard the inherent characteristics of complex terrain and, specifically, the issue of site calibration and the uncertainties due to it. (Author)

  3. Examining Complexity across Domains: Relating Subjective and Objective Measures of Affective Environmental Scenes, Paintings and Music: e72412

    National Research Council Canada - National Science Library

    Manuela M Marin; Helmut Leder

    2013-01-01

      Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains...

  4. Response to Disturbance and Abundance of Final State: a Measure for Complexity?

    Institute of Scientific and Technical Information of China (English)

    SHEN Dan; WANG Wen-Xiu; JIANG Yu-Mei; HE Yue; HE Da-Ren

    2007-01-01

    We propose a new definition of complexity. The definition shows that when a system evolves to a final state via a transient state, its complexity depends on the abundance of both the final state and transient state. The abundance of the transient state may be described by the diversity of the response to disturbance. We hope that this definition can describe a clear boundary between simple systems and complex systems by showing that all the simple systems have zero complexity, and all the complex systems have positive complexity. Some examples of the complexity calculations are presented, which supports our hope.

  5. Separation of copper, iron, and zinc from complex aqueous solutions for isotopic measurement

    Science.gov (United States)

    Borrok, D.M.; Wanty, R.B.; Ridley, W.I.; Wolf, R.; Lamothe, P.J.; Adams, M.

    2007-01-01

    The measurement of Cu, Fe, and Zn isotopes in natural samples may provide valuable information about biogeochemical processes in the environment. However, the widespread application of stable Cu, Fe, and Zn isotope chemistry to natural water systems remains limited by our ability to efficiently separate these trace elements from the greater concentrations of matrix elements. In this study, we present a new method for the isolation of Cu, Fe, and Zn from complex aqueous solutions using a single anion-exchange column with hydrochloric acid media. Using this method we are able to quantitatively separate Cu, Fe, and Zn from each other and from matrix elements in a single column elution. Elution of the elements of interest, as well as all other elements, through the anion-exchange column is a function of the speciation of each element in the various concentrations of HCl. We highlight the column chemistry by comparing our observations with published studies that have investigated the speciation of Cu, Fe, and Zn in chloride solutions. The functionality of the column procedure was tested by measuring Cu, Fe, and Zn isotopes in a variety of stream water samples impacted by acid mine drainage. The accuracy and precision of Zn isotopic measurements was tested by doping Zn-free stream water with the Zn isotopic standard. The reproducibility of the entire column separation process and the overall precision of the isotopic measurements were also evaluated. The isotopic results demonstrate that the Cu, Fe, and Zn column separates from the tested stream waters are of sufficient purity to be analyzed directly using a multicollector inductively coupled plasma mass spectrometer (MC-ICP-MS), and that the measurements are fully reproducible, accurate, and precise. Although limited in scope, these isotopic measurements reveal significant variations in δ65Cu (-1.41 to +0.30‰), δ56Fe (-0.56 to +0.34‰), and δ66Zn (0.31 to 0.49‰) among samples collected from different ...
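
    The δ values quoted above use the standard per-mil delta notation, i.e. the sample isotope ratio expressed relative to a reference standard (in ‰); for zinc, for example:

```latex
\delta^{66}\mathrm{Zn} =
  \left(
    \frac{\left({}^{66}\mathrm{Zn}/{}^{64}\mathrm{Zn}\right)_{\mathrm{sample}}}
         {\left({}^{66}\mathrm{Zn}/{}^{64}\mathrm{Zn}\right)_{\mathrm{standard}}}
    - 1
  \right) \times 1000
```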

  6. Characterization of a complex near-surface structure using well logging and passive seismic measurements

    Science.gov (United States)

    Benjumea, Beatriz; Macau, Albert; Gabàs, Anna; Figueras, Sara

    2016-04-01

    We combine geophysical well logging and passive seismic measurements to characterize the near-surface geology of an area located in Hontomin, Burgos (Spain). This area presents some near-surface challenges for a geophysical study. The irregular topography is characterized by limestone outcrops and areas of unconsolidated sediments. Additionally, the near-surface geology includes an upper layer of pure limestones overlying marly limestones and marls (Upper Cretaceous). These materials lie on top of Lower Cretaceous siliciclastic sediments (sandstones, clays, gravels). In any case, a layer with reduced velocity is expected. The geophysical data sets used in this study include sonic and gamma-ray logs at two boreholes and passive seismic measurements: three arrays and 224 seismic stations for applying the horizontal-to-vertical amplitude spectral ratio method (H/V). Well-logging data define two significant changes in the P-wave-velocity log within the Upper Cretaceous layer and one more at the Upper to Lower Cretaceous contact. This technique has also been used for refining the geological interpretation. The passive seismic measurements provide a map of sediment thickness with a maximum of around 40 m, and shear-wave velocity profiles from the array technique. A comparison between seismic velocities from well logging and array measurements defines the resolution limits of the passive seismic techniques and aids their interpretation. This study shows how these low-cost techniques can provide useful information about near-surface complexity that could be used for designing a geophysical field survey or for seismic processing steps such as statics or imaging.
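
    The H/V method itself reduces to comparing smoothed horizontal and vertical amplitude spectra of ambient noise. A bare-bones sketch (hypothetical, without the segment windowing and averaging that real processing requires):

```python
import numpy as np

def hv_ratio(ns, ew, v, fs):
    """Horizontal-to-vertical spectral ratio of one three-component record."""
    def amp(x):
        return np.abs(np.fft.rfft(x * np.hanning(len(x))))
    h = np.sqrt((amp(ns)**2 + amp(ew)**2) / 2.0)   # quadratic-mean horizontal
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    return freqs, h / (amp(v) + 1e-12)             # epsilon avoids divide-by-zero

# The H/V peak frequency f0 constrains sediment thickness via h ~ Vs / (4 * f0)
```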

  7. The Magnetic Acoustic Change Complex and Mismatch Field: A Comparison of Neurophysiological Measures of Auditory Discrimination

    Directory of Open Access Journals (Sweden)

    Shu Hui Yau

    2017-02-01

    The Acoustic Change Complex (ACC), a P1-N1-P2-like event-related response to changes in a continuous sound, has been suggested as a reliable, objective, and efficient test of auditory discrimination. We used magnetoencephalography to compare the magnetic ACC (mACC) to the more widely used mismatch field (MMF). Brain responses of 14 adults were recorded during mACC and MMF paradigms involving the same pitch and vowel changes in a synthetic vowel sound. Analyses of peak amplitudes revealed a significant interaction between stimulus and paradigm: for the MMF, the response was greater for vowel changes than for pitch changes, whereas for the mACC, the pattern was reversed. A similar interaction was observed for the signal-to-noise ratio, and single-trial analysis of individual participants' responses showed that the MMF to pitch changes was elicited less consistently than the other three responses. Results support the view that the ACC/mACC is a robust and efficient measure of simple auditory discrimination, particularly when researchers or clinicians are interested in the responses of individual listeners. However, the differential sensitivity of the two paradigms to the same acoustic changes indicates that the mACC and MMF are indices of different aspects of auditory processing and should therefore be seen as complementary rather than competing neurophysiological measures.

  8. Heart rate dynamics in patients with stable angina pectoris and utility of fractal and complexity measures

    Science.gov (United States)

    Makikallio, T. H.; Ristimae, T.; Airaksinen, K. E.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.

    1998-01-01

    Dynamic analysis techniques may uncover abnormalities in heart rate (HR) behavior that are not easily detectable with conventional statistical measures. However, the applicability of these new methods for detecting possible abnormalities in HR behavior in various cardiovascular disorders is not well established. Conventional measures of HR variability were compared with short-term (< 11 beats, alpha1) and long-term (> 11 beats, alpha2) fractal correlation properties and with approximate entropy of RR interval data in 38 patients with stable angina pectoris without previous myocardial infarction or cardiac medication at the time of the study and 38 age-matched healthy controls. The short- and long-term fractal scaling exponents (alpha1, alpha2) were significantly higher in the coronary patients than in the healthy controls (1.34 +/- 0.15 vs 1.11 +/- 0.12 [p < 0.001]). These results suggest that patients with stable angina pectoris have altered fractal properties and reduced complexity in their RR interval dynamics relative to age-matched healthy subjects. Dynamic analysis may complement traditional analyses in detecting altered HR behavior in patients with stable angina pectoris.
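
    The short- and long-term scaling exponents come from detrended fluctuation analysis evaluated over two scale ranges (up to 11 beats and above 11 beats). A compact generic DFA sketch on synthetic R-R data (not the study's implementation):

```python
import numpy as np

def dfa_alpha(x, scales):
    """Slope of log F(n) vs log n after per-window linear detrending."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for n in scales:
        segs = y[:(len(y) // n) * n].reshape(-1, n)
        t = np.arange(n)
        res = [s - np.polyval(np.polyfit(t, s, 1), t) for s in segs]
        F.append(np.sqrt(np.mean(np.square(res))))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rr = np.random.default_rng(0).normal(0.8, 0.05, 4000)   # synthetic R-R series (s)
print(dfa_alpha(rr, [4, 6, 8, 11]))   # alpha1, short-term (about 0.5 for noise)
print(dfa_alpha(rr, [16, 32, 64]))    # alpha2, long-term
```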

  9. Eddy-correlation measurements of benthic fluxes under complex flow conditions: Effects of coordinate transformations and averaging time scales

    DEFF Research Database (Denmark)

    Lorke, Andreas; McGinnis, Daniel F.; Maeck, Andreas

    2013-01-01

    ... hours of continuous eddy-correlation measurements of sediment oxygen fluxes in an impounded river, we demonstrate that rotation of measured current velocities into streamline coordinates can be a crucial and necessary step in data processing under complex flow conditions in non-flat environments. ... The results are placed in the context of the theoretical concepts underlying eddy-correlation measurements, and a set of recommendations for planning and analyses of flux measurements is derived.
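
    The rotation into streamline coordinates referred to above is typically the "double rotation": yaw the x-axis into the mean horizontal flow, then pitch it so the mean vertical velocity vanishes. A generic sketch of that standard method, not the authors' code:

```python
import numpy as np

def rotate_to_streamline(u, v, w):
    """Double rotation so that mean(v) = mean(w) = 0 in the new frame."""
    a = np.arctan2(np.mean(v), np.mean(u))          # yaw into the mean flow
    u1 = u * np.cos(a) + v * np.sin(a)
    v1 = -u * np.sin(a) + v * np.cos(a)
    b = np.arctan2(np.mean(w), np.mean(u1))         # pitch out the mean w
    u2 = u1 * np.cos(b) + w * np.sin(b)
    w2 = -u1 * np.sin(b) + w * np.cos(b)
    return u2, v1, w2

# The vertical eddy flux of a scalar c (e.g., O2) then follows as the covariance:
# flux = np.mean((w2 - w2.mean()) * (c - c.mean()))
```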

  10. Using Complexity Metrics With R-R Intervals and BPM Heart Rate Measures

    Directory of Open Access Journals (Sweden)

    Sebastian Wallot

    2013-08-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R intervals) and beats-per-minute (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics – fractal (DFA) and recurrence (RQA) analyses – reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: while R-R intervals are very susceptible to nonlinear analyses, the success of nonlinear methods for BPM data critically depends on their construction. Generally, ‘oversampled’ BPM time-series can be recommended as they retain most of the information about nonlinear aspects of heart beat dynamics.

  11. Using complexity metrics with R-R intervals and BPM heart rate measures.

    Science.gov (United States)

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics-fractal (DFA) and recurrence (RQA) analyses-reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: While R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics.

  12. Measuring Early Communication in Spanish Speaking Children: The Communication Complexity Scale in Peru

    Science.gov (United States)

    Atwood, Erin; Brady, Nancy C.; Esplund, Amy

    2015-01-01

    Background There is a great need in the United States to develop presymbolic evaluation tools that are widely available and accurate for individuals that come from a bilingual and/or multicultural setting. The Communication Complexity Scale (CCS) is a measure that evaluates expressive presymbolic communication including gestures, vocalizations and eye gaze. The effectiveness of this tool in a Spanish-speaking environment was studied to determine the applicability of the CCS with Spanish-speaking children. Methods & Procedures: In 2011–2012, researchers from the University of Kansas and Centro Ann Sullivan del Perú (CASP) investigated communication in a cohort of 71 young Spanish-speaking children with developmental disabilities and a documented history of self-injurious, stereotyped and aggressive behaviors. Communication was assessed first by parental report with translated versions of the Communication and Symbolic Behavior Scales (CSBS), a well-known assessment of early communication, and then eleven months later with the CCS. Hypothesis We hypothesized that the CCS and the CSBS measures would be significantly correlated in this population of Spanish-speaking children. Outcomes & Results The CSBS scores from time 1, with a mean participant age of 41 months, were determined to have a strong positive relationship to the CCS scores obtained at time 2, with a mean participant age of 52 months. Conclusions & Implications The CCS is strongly correlated to a widely accepted measure of early communication. These findings support the validity of the Spanish version of the CCS and demonstrate its usefulness for children from another culture and for children in a Spanish-speaking environment. PMID:26636094

  13. Measurements of key offensive odorants in a fishery industrial complex in Korea

    Science.gov (United States)

    Seo, Seong-Gyu; Ma, Zhong-Kun; Jeon, Jun-Min; Jung, Sang-Chul; Lee, Woo-Bum

    2011-06-01

    This study was carried out to measure the concentrations of offensive odorants with an emphasis on nitrogenous compounds [NC: ammonia (NH3) and trimethylamine (TMA)] and reduced sulfur compounds [RSC: hydrogen sulfide (H2S), methyl mercaptan (CH3SH), dimethyl sulfide (DMS), and dimethyl disulfide (DMDS)] from various sources in a fishery industrial complex in Yeosu, Korea. Samples were collected from a total of 18 sampling sites including the major fishery facilities (C-1 to C-5) and the border areas (O-1 to O-8) of this fishery industrial complex during spring, summer, and fall. The mean concentrations of odorants at the major fishery facilities were found in the order of NH3 (638 ppb), H2S (291 ppb), CH3SH (123 ppb), TMA (20.6 ppb), DMDS (7.71 ppb), and DMS (5.25 ppb). On the other hand, the mean concentrations of odorants at the border areas were NH3 (85.3 ppb), TMA (1.75 ppb), H2S (0.25 ppb), CH3SH (0.18 ppb), DMS (0.07 ppb), and DMDS (0.06 ppb). The mean concentrations of H2S, CH3SH and TMA in the major fishery facilities greatly exceeded the Odorant Emission Guideline (OEG) applied to an industrial area. The concentration gradient of RSC between the major fishery facilities and border areas was more prominent than that of NC. From the correlation analyses, the highest correlation coefficient of 0.976 (p = 3.99E-40, n = 60) was found between DMS and DMDS at the major fishery facilities, while NH3 had a strong correlation with the sum of odorant concentrations (SOC) at the border areas (r = 0.997, p = 4.83E-54, n = 48). The results of this study thus confirmed that CH3SH and TMA were the major odorants at the major fishery facilities and the border areas, respectively.

  14. Complex Correlation Measure: a novel descriptor for Poincaré plot

    Directory of Open Access Journals (Sweden)

    Gubbi Jayavardhana

    2009-08-01

    Background Poincaré plot is one of the important techniques used for visually representing heart rate variability. It is valuable due to its ability to display nonlinear aspects of the data sequence. However, the problem lies in capturing temporal information of the plot quantitatively. The standard descriptors used in quantifying the Poincaré plot (SD1, SD2) measure the gross variability of the time series data. Determination of advanced methods for capturing temporal properties poses a significant challenge. In this paper, we propose a novel descriptor "Complex Correlation Measure (CCM)" to quantify the temporal aspect of the Poincaré plot. In contrast to SD1 and SD2, the CCM incorporates point-to-point variation of the signal. Methods First, we have derived expressions for CCM. Then the sensitivity of descriptors has been shown by measuring all descriptors before and after surrogation of the signal. For each case study, lag-1 Poincaré plots were constructed for three groups of subjects (Arrhythmia, Congestive Heart Failure (CHF) and Normal Sinus Rhythm (NSR)), and the new measure CCM was computed along with SD1 and SD2. ANOVA was used to define the level of significance of mean and variance of SD1, SD2 and CCM for different groups of subjects. Results CCM is defined based on the autocorrelation at different lags of the time series, hence giving an in-depth measurement of the correlation structure of the Poincaré plot. A surrogate analysis was performed, and the sensitivity of the proposed descriptor was found to be higher as compared to the standard descriptors. Two case studies were conducted for recognizing arrhythmia and congestive heart failure (CHF) subjects from those with NSR, using the Physionet database, and demonstrated the usefulness of the proposed descriptors in biomedical applications. CCM was found to be a more significant (p = 6.28E-18) parameter than SD1 and SD2 in discriminating these groups.
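
    The following sketch computes SD1, SD2 and a CCM-style quantity for a lag-1 Poincaré plot. We assume the published triangle-area formulation (mean absolute area of the triangles spanned by successive plot points, normalised by the fitted ellipse area pi*SD1*SD2); the function name and array handling are ours, not the authors' code.

      import numpy as np

      def poincare_ccm(rr):
          """SD1, SD2 and a CCM-style measure from a lag-1 Poincare plot."""
          rr = np.asarray(rr, dtype=float)
          x, y = rr[:-1], rr[1:]                    # lag-1 Poincare points
          d = np.diff(rr)
          sd1 = np.sqrt(0.5 * np.var(d))
          sd2 = np.sqrt(2.0 * np.var(rr) - 0.5 * np.var(d))
          # absolute area of the triangle spanned by three successive plot points
          area = 0.5 * np.abs(x[:-2] * (y[1:-1] - y[2:])
                              + x[1:-1] * (y[2:] - y[:-2])
                              + x[2:] * (y[:-2] - y[1:-1]))
          ccm = np.mean(area) / (np.pi * sd1 * sd2)  # normalised by ellipse area
          return sd1, sd2, ccm

    Because the triangle areas vanish when successive points fall on a line, this quantity is sensitive to point-to-point ordering in a way the gross descriptors SD1 and SD2 are not.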

  15. Soil Temperature Variability in Complex Terrain measured using Fiber-Optic Distributed Temperature Sensing

    Science.gov (United States)

    Seyfried, M. S.; Link, T. E.

    2013-12-01

    Soil temperature (Ts) exerts critical environmental controls on hydrologic and biogeochemical processes. Rates of carbon cycling, mineral weathering, infiltration and snow melt are all influenced by Ts. Although broadly reflective of the climate, Ts is sensitive to local variations in cover (vegetative, litter, snow), topography (slope, aspect, position), and soil properties (texture, water content), resulting in a spatially and temporally complex distribution of Ts across the landscape. Understanding and quantifying the processes controlled by Ts requires an understanding of that distribution. Relatively few spatially distributed field Ts data exist, partly because traditional Ts data are point measurements. A relatively new technology, fiber-optic distributed temperature sensing (FO-DTS), has the potential to provide such data but has not been rigorously evaluated in the context of remote, long-term field research. We installed FO-DTS in a small experimental watershed within the Reynolds Creek Experimental Watershed (RCEW) in the Owyhee Mountains of SW Idaho. The watershed is characterized by complex terrain and a seasonal snow cover. Our objectives are to: (i) evaluate the applicability of fiber-optic DTS to remote field environments and (ii) describe the spatial and temporal variability of soil temperature in complex terrain influenced by a variable snow cover. We installed fiber-optic cable at a depth of 10 cm in contrasting snow accumulation and topographic environments and monitored temperature along 750 m with DTS. We found that the DTS can provide accurate Ts data (±0.4 °C) that resolves Ts changes of about 0.03 °C at a spatial scale of 1 m with occasional calibration under conditions with an ambient temperature range of 50 °C. We note that there are site-specific limitations related to cable installation and destruction by local fauna. The FO-DTS provides unique insight into the spatial and temporal variability of Ts in a landscape. We found strong seasonal

  16. Radiometric Measurements of the Thermal Conductivity of Complex Planetary-like Materials

    Science.gov (United States)

    Piqueux, S.; Christensen, P. R.

    2012-12-01

    Planetary surface temperatures and thermal inertias are controlled by the physical and compositional characteristics of the surface layer material, which result from current and past geological activity. For this reason, temperature measurements are often acquired because they provide fundamental constraints on geological history and habitability. Examples of regolith properties affecting surface temperatures and inertias are: grain sizes and mixture ratios, solid composition in the case of ices, presence of cement between grains, regolith porosity, grain roughness, material layering, etc. Other important factors include volatile phase changes and endogenic or exogenic heat sources (e.g., geothermal heat flow, impact-related heat, biological activity). In the case of Mars, the multitude of instruments observing the surface temperature at different spatial and temporal resolutions (IRTM, Thermoskan, TES, MiniTES, THEMIS, MCS, REMS, etc.) in conjunction with other instruments allows us to probe and characterize the thermal properties of the surface layer with unprecedented resolution. While the derivation of thermal inertia values from temperature measurements is routinely performed by well-established planetary regolith numerical models, constraining the physical properties of the surface layer from thermal inertia values requires the additional step of laboratory measurements. The density and specific heat are usually constant and sufficiently well known for common geological materials, but the bulk thermal conductivity is highly variable as a function of the physical characteristics of the regolith. Most laboratory designs do not allow an investigation of the thermal conductivity of complex regolith configurations similar to those observed on planetary surfaces (i.e. cemented material, large grains, layered material, and temperature effects) because the samples are too small and need to be soft to insert heating or measuring devices. For this

  17. Microbicidal effects of weakly acidified chlorous acid water against feline calicivirus and Clostridium difficile spores under protein-rich conditions.

    Science.gov (United States)

    Goda, Hisataka; Yamaoka, Hitoshi; Nakayama-Imaohji, Haruyuki; Kawata, Hiroyuki; Horiuchi, Isanori; Fujita, Yatsuka; Nagao, Tamiko; Tada, Ayano; Terada, Atsushi; Kuwahara, Tomomi

    2017-01-01

    Sanitation of environmental surfaces with chlorine-based disinfectants is a principal measure to control outbreaks of norovirus or Clostridium difficile. The microbicidal activity of chlorine-based disinfectants depends on the free available chlorine (FAC), but their oxidative potential is rapidly eliminated by organic matter. In this study, the microbicidal activities of weakly acidified chlorous acid water (WACAW) and sodium hypochlorite solution (NaClO) against feline calicivirus (FCV) and C. difficile spores were compared under protein-rich conditions. WACAW inactivated FCV and C. difficile spores better than NaClO under all experimental conditions used in this study. WACAW above 100 ppm FAC decreased FCV >4 log10 within 30 sec in the presence of 0.5% each of bovine serum albumin (BSA), polypeptone or meat extract. Even in the presence of 5% BSA, WACAW at 600 ppm FAC reduced FCV >4 log10 within 30 sec. Polypeptone inhibited the virucidal activity of WACAW against FCV more so than BSA or meat extract. WACAW at 200 ppm FAC decreased C. difficile spores >3 log10 within 1 min in the presence of 0.5% polypeptone. The microbicidal activity of NaClO was extensively diminished in the presence of organic matter. WACAW recovered its FAC to the initial level after partial neutralization by sodium thiosulfate, while no restoration of the FAC was observed for NaClO. These results indicate that WACAW is relatively stable under organic matter-rich conditions and therefore may be useful for treating environmental surfaces contaminated by human excretions.

  18. Binary, ternary and quaternary liquid-liquid equilibria in 1-butanol, oleic acid, water and n-heptane mixtures

    NARCIS (Netherlands)

    Winkelman, J. G. M.; Kraai, G. N.; Heeres, H. J.

    2009-01-01

    This work reports on liquid-liquid equilibria in the system 1-butanol, oleic acid, water and n-heptane used for biphasic, lipase catalysed esterifications. The literature was studied on the mutual solubility in binary systems of water and each of the organic components. Experimental results were obt

  19. Unexpected Advantages of Less Accurate Performance Measurements: How simple prescription data control the use of drugs in a complex setting

    NARCIS (Netherlands)

    A.A. de Bont (Antoinette); K.J. Grit (Kor)

    2010-01-01

    In this paper we argue that performance measurement can better be done by general, less accurate measurements than by complex, and possibly more accurate, ones. The conclusions of this study are drawn from a case study of the Dutch Foundation for effective use of medication. While most

  20. Tortuosity entropy: a measure of spatial complexity of behavioral changes in animal movement.

    Science.gov (United States)

    Liu, Xiaofeng; Xu, Ning; Jiang, Aimin

    2015-01-07

    The goal of animal movement analysis is to understand how organisms explore and exploit complex and varying environments. Animals usually exhibit varied and complicated movements, from apparently deterministic behaviours to highly random behaviours. Quantifying and analyzing movement trajectories has become a common way to assess movement efficiency and foraging strategies. Here we introduce tortuosity entropy (TorEn), a simple measure for quantifying behavioral change in animal movement data. In our approach, the differences between pairwise successive track points are transformed into symbolic sequences, then we map these symbols into a group of pattern vectors and calculate the information entropy of the pattern vectors. We test the algorithm on both simulated and real trajectories to show that it can accurately identify not only the mixed segments in simulated data, but also the different phases in real movement data. Tortuosity entropy can be easily applied to arbitrary real-world data, whether deterministic or stochastic, stationary or non-stationary. It could be a promising tool to reveal behavioral mechanisms in movement data.
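
    The pipeline described above (symbolise step-to-step changes, form short pattern vectors, take their information entropy) can be sketched as follows. The symbolisation by turning direction with a fixed angular threshold is our illustrative choice, not necessarily the authors' exact scheme.

      import numpy as np
      from collections import Counter

      def tortuosity_entropy(track, m=3):
          """Entropy of symbolised step-to-step changes along a 2-D track."""
          track = np.asarray(track, dtype=float)
          steps = np.diff(track, axis=0)                 # successive displacement vectors
          headings = np.arctan2(steps[:, 1], steps[:, 0])
          turns = np.diff(headings)
          turns = (turns + np.pi) % (2 * np.pi) - np.pi  # wrap angles to (-pi, pi]
          symbols = np.digitize(turns, [-0.1, 0.1])      # left / straight / right
          patterns = [tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1)]
          p = np.array(list(Counter(patterns).values()), dtype=float)
          p /= p.sum()
          return -np.sum(p * np.log(p)) / np.log(3 ** m)  # normalised to [0, 1]

    Near-ballistic (deterministic) segments concentrate the pattern distribution and push the value toward 0, while highly tortuous, random movement spreads it toward 1.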

  1. Impact of automation: Measurement of performance, workload and behaviour in a complex control environment.

    Science.gov (United States)

    Balfe, Nora; Sharples, Sarah; Wilson, John R

    2015-03-01

    This paper describes an experiment that was undertaken to compare three levels of automation in rail signalling: a high level in which an automated agent set routes for trains using timetable information, a medium level in which trains were routed along pre-defined paths, and a low level where the operator (signaller) was responsible for the movement of all trains. These levels are described in terms of a Rail Automation Model based on previous automation theory (Parasuraman et al., 2000). Performance, subjective workload, and signaller activity were measured for each level of automation running under both normal operating conditions and abnormal, or disrupted, conditions. The results indicate that perceived workload, during both normal and disrupted phases of the experiment, decreased as the level of automation increased, and performance was most consistent (i.e. showed the least variation between participants) with the highest level of automation. The results give a strong case in favour of automation, particularly in terms of demonstrating the potential for automation to reduce workload, but also suggest that much benefit can be achieved from a mid-level of automation, potentially at lower cost and complexity.

  2. The synergy factor: a statistic to measure interactions in complex diseases

    Directory of Open Access Journals (Sweden)

    Combarros Onofre

    2009-06-01

    Background One challenge in understanding complex diseases lies in revealing the interactions between susceptibility factors, such as genetic polymorphisms and environmental exposures. There is thus a need to examine such interactions explicitly. A corollary is the need for an accessible method of measuring both the size and the significance of interactions, which can be used by non-statisticians and with summarised, e.g. published, data. The lack of such a readily available method has contributed to confusion in the field. Findings The synergy factor (SF) allows assessment of binary interactions in case-control studies. In this paper we describe its properties and its novel characteristics, e.g. in calculating the power to detect a synergistic effect and in its application to meta-analyses. We illustrate these functions with real examples in Alzheimer's disease, e.g. a meta-analysis of the potential interaction between a BACE1 polymorphism and APOE4: SF = 2.5, 95% confidence interval: 1.5–4.2; p = 0.0001. Conclusion Synergy factors are easy to use and clear to interpret. Calculations may be performed through the Excel programmes provided within this article. Unlike logistic regression analysis, the method can be applied to datasets of any size, however small. It can be applied to primary or summarised data, e.g. published data. It can be used with any type of susceptibility factor, provided the data are dichotomised. Novel features include power estimation and meta-analysis.
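
    As a rough sketch of the synergy factor idea for readers who prefer code to spreadsheets: with cell counts for the four exposure combinations in cases and controls, SF is the odds ratio for joint exposure divided by the product of the single-exposure odds ratios, and the variance of ln SF is approximately the sum of reciprocal cell counts. The function below is our illustration under those assumptions, not the authors' Excel implementation.

      import math

      def synergy_factor(cases, controls):
          """SF with 95% CI and two-sided p for two binary factors.

          cases / controls: dicts keyed by (A, B) in {0,1} x {0,1} giving counts.
          """
          def odds_ratio(key):
              return (cases[key] / controls[key]) / (cases[(0, 0)] / controls[(0, 0)])

          sf = odds_ratio((1, 1)) / (odds_ratio((1, 0)) * odds_ratio((0, 1)))
          # each of the eight cells contributes 1/n to var(ln SF)
          se = math.sqrt(sum(1.0 / n for d in (cases, controls) for n in d.values()))
          ci = (sf * math.exp(-1.96 * se), sf * math.exp(1.96 * se))
          p = math.erfc(abs(math.log(sf)) / se / math.sqrt(2))   # two-sided
          return sf, ci, p

      # hypothetical counts, for illustration only
      print(synergy_factor({(0, 0): 100, (1, 0): 40, (0, 1): 30, (1, 1): 45},
                           {(0, 0): 120, (1, 0): 60, (0, 1): 45, (1, 1): 20}))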

  3. Inspection of Complex Internal Surface Shape with Fiber-optic Sensor II: for Specular Tilted Surface Measurement

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Complex surface shape measurement has been a focus topic in the CAD/CAM field. A popular method for measuring dimensional information is using a 3D coordinate measuring machine (CMM) with a touch trigger probe. The measurement setup with a CMM, however, is a time-consuming task, and the accuracy of the measurement deteriorates as the speed of measurement increases. Non-contact measurement is favored since high-speed measurement can be achieved and problems with vibration and friction can be eliminated. Although much research has been conducted in non-contact measurement using image capturing and processing schemes, accuracy is poor and measurement is limited. Some optical technologies developed provide good accuracy, but their dynamic range and versatility are very limited. A novel fiber-optic sensor for the inspection of complex internal contours is presented in this paper, which is able to measure a surface shape in a non-contact manner with high accuracy and high speed, and is compact and flexible enough to be incorporated into a CMM. Modulation functions for tilted surface shape measurement, based on the Gaussian distribution of the emitting beam from a single-mode fiber (SMF), were derived for specular reflection. The feasibility of the proposed measurement principle was verified by simulations.

  4. Quantifying the Consistency of Wearable Knee Acoustical Emission Measurements During Complex Motions.

    Science.gov (United States)

    Toreyin, Hakan; Jeong, Hyeon Ki; Hersek, Sinan; Teague, Caitlin N; Inan, Omer T

    2016-09-01

    Knee-joint sounds could potentially be used to noninvasively probe the physical and/or physiological changes in the knee associated with rehabilitation following acute injury. In this paper, a system and methods for investigating the consistency of knee-joint sounds during complex motions in silent and loud background settings are presented. The wearable hardware component of the system consists of a microelectromechanical systems microphone and inertial rate sensors interfaced with a field-programmable gate array-based real-time processor to capture knee-joint sound and angle information during three types of motion: flexion-extension (FE), sit-to-stand (SS), and walking (W) tasks. The data were post-processed to extract high-frequency and short-duration joint sounds (clicks) with particular waveform signatures. Such clicks were extracted in the presence of three different sources of interference: background, stepping, and rubbing noise. A histogram vector V_n was generated from the clicks in a motion-cycle n, where the bin range was 10°. The Euclidean distance between a vector and the arithmetic mean V_av of all vectors in a recording, normalized by V_av, is used as a consistency metric d_n. Measurements from eight healthy subjects performing FE, SS, and W show that the mean (of mean) consistency metric for all subjects during SS (μ[μ(d_n)] = 0.72 in silent, 0.85 in loud) is smaller compared with the FE (μ[μ(d_n)] = 1.02 in silent, 0.95 in loud) and W (μ[μ(d_n)] = 0.94 in silent, 0.97 in loud) exercises, thereby implying more consistent click-generation during SS compared with FE and W. Knee-joint sounds from one subject performing FE during five consecutive work-days (μ[μ(d_n)] = 0.72) and five different times of a day (μ[μ(d_n)] = 0.73) suggest high consistency of the clicks on different days and throughout a day. This work represents the first time, to the best of our knowledge, that joint sound consistency has been

  5. Airborne Measurements of Aerosol Emissions From the Alberta Oil Sands Complex

    Science.gov (United States)

    Howell, S. G.; Clarke, A. D.; McNaughton, C. S.; Freitag, S.

    2012-12-01

    The Alberta oil sands contain a vast reservoir of fossil hydrocarbons. The extremely viscous bitumen requires significant energy to extract and upgrade to make a fluid product suitable for pipelines and further refinement. The mining and upgrading process constitutes a large industrial complex in an otherwise sparsely populated area of Canada. During the ARCTAS project in June/July 2008, while studying forest fire plumes, the NASA DC-8 and P-3B flew through the plume a total of 5 times. Once was a coordinated visit by both aircraft; the other 3 were fortuitous passes downwind. One study has been published about gas emissions from the complex. Here we concentrate on aerosol emissions and aging. As previously reported, there appear to be at least 2 types of plumes produced. One is an industrial-type plume with vast numbers of ultrafine particles, SO2, sulfate, black carbon (BC), CO, and NO2. The other, probably from the mining, has more organic aerosol and BC together with dust-like aerosols at 3 μm and a 1 μm mode of unknown origin. The DC-8 crossed the plume about 10 km downwind of the industrial site, giving time for the boundary layer to mix and enabling a very crude flux calculation suggesting that sulfate and organic aerosols were each produced at about 500 g/s (estimated errors are a factor of 2, chiefly due to concerns about vertical mixing). Since this was a single flight during a project dedicated to other purposes, and operating conditions and weather may change fluxes considerably, this may not be a typical flux. As the plume progresses downwind, the ultrafine particles grow to sizes effective as cloud condensation nuclei (CCN), SO2 is converted to sulfate, and organic aerosol is produced. During fair weather in the summer, as was the case during these flights, cloud convection pumps aerosol above the mixed layer. While the aerosol plume is difficult to detect from space, NO2 is measured by the OMI instrument on the Aura satellite and the oil sands plume

  6. Evaluation of single and multiple Doppler lidar techniques to measure complex flow during the XPIA field campaign

    Energy Technology Data Exchange (ETDEWEB)

    Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann; Bonin, Timothy A.; Hardesty, R. Michael; Lundquist, Julie K.; Delgado, Ruben; Iungo, G. Valerio; Ashton, Ryan; Debnath, Mithu; Bianco, Laura; Wilczak, James M.; Oncley, Steven; Wolfe, Daniel

    2017-01-01

    Accurate three-dimensional information of wind flow fields can be an important tool in not only visualizing complex flow but also understanding the underlying physical processes and improving flow modeling. However, a thorough analysis of the measurement uncertainties is required to properly interpret results. The XPIA (eXperimental Planetary boundary layer Instrumentation Assessment) field campaign conducted at the Boulder Atmospheric Observatory (BAO) in Erie, CO, from 2 March to 31 May 2015 brought together a large suite of in situ and remote sensing measurement platforms to evaluate complex flow measurement strategies.

    In this paper, measurement uncertainties for different single and multi-Doppler strategies using simple scan geometries (conical, vertical plane and staring) are investigated. The tradeoffs (such as time–space resolution vs. spatial coverage) among the different measurement techniques are evaluated using co-located measurements made near the BAO tower. Sensitivity of the single-/multi-Doppler measurement uncertainties to averaging period are investigated using the sonic anemometers installed on the BAO tower as the standard reference. Finally, the radiometer measurements are used to partition the measurement periods as a function of atmospheric stability to determine their effect on measurement uncertainty.

    It was found that with an increase in spatial coverage and measurement complexity, the uncertainty in the wind measurement also increased. For multi-Doppler techniques, the increase in uncertainty for temporally uncoordinated measurements is possibly due to requiring additional assumptions of stationarity along with horizontal homogeneity and less representative line-of-sight velocity statistics. It was also found that wind speed measurement uncertainty was lower during stable conditions compared to unstable conditions.

  7. Evaluation of single and multiple Doppler lidar techniques to measure complex flow during the XPIA field campaign

    Science.gov (United States)

    Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann; Bonin, Timothy A.; Hardesty, R. Michael; Lundquist, Julie K.; Delgado, Ruben; Valerio Iungo, G.; Ashton, Ryan; Debnath, Mithu; Bianco, Laura; Wilczak, James M.; Oncley, Steven; Wolfe, Daniel

    2017-01-01

    Accurate three-dimensional information of wind flow fields can be an important tool in not only visualizing complex flow but also understanding the underlying physical processes and improving flow modeling. However, a thorough analysis of the measurement uncertainties is required to properly interpret results. The XPIA (eXperimental Planetary boundary layer Instrumentation Assessment) field campaign conducted at the Boulder Atmospheric Observatory (BAO) in Erie, CO, from 2 March to 31 May 2015 brought together a large suite of in situ and remote sensing measurement platforms to evaluate complex flow measurement strategies. In this paper, measurement uncertainties for different single and multi-Doppler strategies using simple scan geometries (conical, vertical plane and staring) are investigated. The tradeoffs (such as time-space resolution vs. spatial coverage) among the different measurement techniques are evaluated using co-located measurements made near the BAO tower. Sensitivity of the single-/multi-Doppler measurement uncertainties to averaging period are investigated using the sonic anemometers installed on the BAO tower as the standard reference. Finally, the radiometer measurements are used to partition the measurement periods as a function of atmospheric stability to determine their effect on measurement uncertainty. It was found that with an increase in spatial coverage and measurement complexity, the uncertainty in the wind measurement also increased. For multi-Doppler techniques, the increase in uncertainty for temporally uncoordinated measurements is possibly due to requiring additional assumptions of stationarity along with horizontal homogeneity and less representative line-of-sight velocity statistics. It was also found that wind speed measurement uncertainty was lower during stable conditions compared to unstable conditions.

  8. Implementing digital holograms to create and measure complex-plane optical fields

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2016-02-01

    The coherent superposition of a Gaussian beam with an optical vortex can be mathematically described to occupy the complex plane. The authors provide a simple analogy between the mathematics, in the form of the complex plane, and the visual...

  9. Predictive Potential of Heart Rate Complexity Measurement: An Indication for Laparotomy Following Solid Organ Injury

    Directory of Open Access Journals (Sweden)

    Foroutan

    2015-11-01

    Background Nonlinear analysis of heart rate variability (HRV) has been recently used as a predictor of prognosis in trauma patients. Objectives We applied nonlinear analysis of HRV in patients with blunt trauma and intraperitoneal bleeding to assess our ability to predict the outcome of conservative management. Patients and Methods An analysis of electrocardiography (ECG) from 120 patients with blunt trauma was conducted at the onset of admission to the emergency department. ECGs of 65 patients were excluded due to inadequate noise-free length. Of the remaining 55 patients, 47 survived (S group) and eight died in the hospital (Non-S group). Nineteen patients were found to have intra-abdominal bleeding, eight of whom ultimately underwent laparotomy to control bleeding (Op group) and 11 underwent successful non-operative management (Non-Op group). Demographic data including vital signs, Glasgow coma scale (GCS), arterial blood gas and injury severity scores (ISS) were recorded. Heart rate complexity (HRC) methods, including entropy, were used to analyze the ECG. Results There were no differences in age, gender, heart rate (HR) and blood pressure between the S and Non-S groups. However, approximate entropy, used as a method of HRC measurement, and GCS were significantly higher in the S group compared to the Non-S group. The base deficit and ISS were significantly higher in the Non-S group. Regarding age, sex, ISS, base deficit, vital signs and GCS, no difference was found between the Op and Non-Op groups. Approximate entropy was significantly lower in the Op group compared to the Non-Op group. Conclusions The loss of HRC at the onset of admission may predict mortality in patients with blunt trauma. Lower entropy, in recently admitted patients with intra-abdominal bleeding, may indicate laparotomy when the vital signs are stable.
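
    Approximate entropy, the HRC measure highlighted above, can be written compactly. The sketch below follows the standard Pincus definition (template length m, tolerance r as a fraction of the series standard deviation); the parameter defaults are our assumptions, not values from the study.

      import numpy as np

      def approximate_entropy(x, m=2, r_factor=0.2):
          """ApEn(m, r) with tolerance r = r_factor * SD(x)."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def phi(mm):
              n = len(x) - mm + 1
              emb = np.array([x[i:i + mm] for i in range(n)])
              # fraction of templates within Chebyshev distance r (self-matches included)
              counts = [np.mean(np.max(np.abs(emb - emb[i]), axis=1) <= r)
                        for i in range(n)]
              return np.mean(np.log(counts))

          return phi(m) - phi(m + 1)

    Lower values indicate more regular, predictable beat-to-beat dynamics, which is the direction of change the study associates with the need for laparotomy.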

  10. Predictive validity of a frailty measure (GFI) and a case complexity measure (IM-E-SA) on healthcare costs in an elderly population

    NARCIS (Netherlands)

    Peters, Lilian L.; Burgerhof, Johannes G. M.; Boter, Han; Wild, Beate; Buskens, Erik; Slaets, Joris P. J.

    2015-01-01

    Objectives: Measures of frailty (Groningen Frailty Indicator, GFI) and case complexity (INTERMED for the Elderly, IM-E-SA) may assist healthcare professionals to allocate healthcare resources. Both instruments have been evaluated with good psychometric properties. Limited evidence has been published

  11. An approach to measuring adolescents' perception of complexity for pictures of fruit and vegetable mixes

    DEFF Research Database (Denmark)

    Mielby, Line Holler; Bennedbæk-Jensen, Sidsel; Edelenbos, Merete

    2013-01-01

    This study develops an approach to measuring adolescents' perception of complexity of pictures of fruit and vegetable mixes. A sensory panel evaluated 10 descriptive attributes, including simplicity and complexity, for 24 pictures of fruit and vegetable mixes. The descriptive analysis found strong inverse correlation between complexity and simplicity. An adolescent consumer group (n = 242) and an adult consumer group (n = 86) subsequently rated the pictures on simplicity and attractiveness. Pearson's correlation coefficients revealed strong correlations between the sensory panel and both consumer groups' usage of simplicity. This suggests that simplicity can...

  12. [Improvement of a complex of sanitary and health-promoting measures in enterobiasis for children of pediatric institutions and schools].

    Science.gov (United States)

    Chernyshenko, A I; Pliushcheva, G L; Romanenko, N A; Rodilina, V D; Leksikova, L V

    2003-01-01

    An improved complex of sanitary and health-promoting measures was applied to children from 2 schools. Pinworm infestation among first- to eleventh-form pupils at a boarding school was 42.6%; applying the measures to all of the children simultaneously, with repetition every 6 months, freed all of them from pinworms. At school 8, the random (non-simultaneous) use of these measures reduced pinworm infestation to 4% of the children covered by the health-promoting measures.

  13. A novel approach for rapidly and cost-effectively assessing toxicity of toxic metals in acidic water using an acidophilic iron-oxidizing biosensor.

    Science.gov (United States)

    Yang, Shih-Hung; Cheng, Kuo-Chih; Liao, Vivian Hsiu-Chuan

    2017-11-01

    Contamination by heavy metals and metalloids is a serious environmental and health concern. Acidic wastewaters are often associated with toxic metals which may enter and spread into agricultural soils. Several biological assays have been developed to detect toxic metals; however, most of them can only detect toxic metals at a neutral pH, not in an acidic environment. In this study, an acidophilic iron-oxidizing bacterium (IOB), Strain Y10, was isolated, characterized, and used to detect toxic metal toxicity in acidic water at pH 2.5. The colorimetric acidophilic IOB biosensor was based on the inhibition of the iron-oxidizing ability of Strain Y10 by metal toxicity. Our results showed that Strain Y10 is an acidophilic iron-oxidizing bacterium. Thiobacillus caldus medium (TCM, pH 2.5) supplied with both S4O6(2-) and glucose was the optimum growth medium for Strain Y10, and the optimum temperature and pH for its growth were 45 °C and 2.5, respectively. Our study demonstrates that the color-based acidophilic IOB biosensor can be semi-quantitatively read by eye or quantitatively measured by spectrometer to detect toxicity from multiple toxic metals at pH 2.5 within 45 min. Monitoring toxic metals in acidic water is thus possible using the acidophilic IOB biosensor. Our study thereby provides a novel approach for rapid and cost-effective detection of toxic metals in acidic conditions that can otherwise compromise current methods of chemical analysis; it also allows for increased efficiency when screening large numbers of environmental samples.

  14. Development and evaluation of aperture-based complexity metrics using film and EPID measurements of static MLC openings

    Energy Technology Data Exchange (ETDEWEB)

    Götstedt, Julia [Department of Radiation Physics, University of Gothenburg, Göteborg 413 45 (Sweden); Karlsson Hauer, Anna; Bäck, Anna, E-mail: anna.back@vgregion.se [Department of Therapeutic Radiation Physics, Sahlgrenska University Hospital, Göteborg 413 45 (Sweden)

    2015-07-15

    Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) are analyzed in scatter plots and using Pearson’s r-values. Results: The complexity scores calculated by the edge

  15. An investigation of ozone and planetary boundary layer dynamics over the complex topography of Grenoble combining measurements and modeling

    OpenAIRE

    Couach, O.; Balin, I.; Jiménez, R.; Ristori, P. (CEILAP); Perego, S.; Kirchner, F.; Simeonov, V.; Calpini, B.; Bergh, H.

    2003-01-01

    This paper concerns an evaluation of ozone (O3) and planetary boundary layer (PBL) dynamics over the complex topography of the Grenoble region through a combination of measurements and mesoscale model (METPHOMOD) predictions for three days during July 1999. The measurements of O3 and PBL structure were obtained with a Differential Absorption Lidar (DIAL) system, situated 20 km south of Grenoble at Vif (310 m ASL). The combined lidar observations ...

  16. Measurement of the speed of sound by observation of the Mach cones in a complex plasma under microgravity conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zhukhovitskii, D. I., E-mail: dmr@ihed.ras.ru; Fortov, V. E.; Molotkov, V. I.; Lipaev, A. M.; Naumkin, V. N. [Joint Institute of High Temperatures, Russian Academy of Sciences, Izhorskaya 13, Bd. 2, 125412 Moscow (Russian Federation); Thomas, H. M. [Research Group Complex Plasma, DLR, Oberpfaffenhofen, 82234 Wessling (Germany); Ivlev, A. V.; Morfill, G. E. [Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, 85748 Garching (Germany); Schwabe, M. [Department of Chemical and Biomolecular Engineering, Graves Lab, D75 Tan Hall, University of California, Berkeley, CA 94720 (United States)

    2015-02-15

    We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of Mach cone visualization. The measurement results are incompatible with the theory of ion acoustic waves. The estimate for the pressure in a strongly coupled Coulomb system and a scaling law for the complex plasma make it possible to derive an estimate for the speed of sound, which is in reasonable agreement with the experiments in complex plasmas.
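
    The geometry behind this measurement is the standard Mach relation: a projectile moving supersonically at speed v opens a cone whose half-angle theta satisfies sin(theta) = c_s / v, so the speed of sound follows directly from the measured cone angle. A toy numerical illustration (the values are hypothetical, not from the experiment):

      import numpy as np

      # Mach relation: sin(theta) = c_s / v for a supersonic projectile
      v_projectile = 28.0               # mm/s, hypothetical projectile speed
      theta = np.radians(45.0)          # hypothetical measured cone half-angle
      c_s = v_projectile * np.sin(theta)
      print(f"inferred speed of sound: {c_s:.1f} mm/s")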

  17. Measurement of the speed of sound by observation of the Mach cones in a complex plasma under microgravity conditions

    CERN Document Server

    Zhukhovitskii, D I; Molotkov, V I; Lipaev, A M; Naumkin, V N; Thomas, H M; Ivlev, A V; Schwabe, M; Morfill, G E

    2014-01-01

    We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of the Mach cone visualization. The measurement results are fully incompatible with the theory of ion acoustic waves. We explore the analogy between a strongly coupled Coulomb system and a solid. A scaling law for the complex plasma makes it possible to derive a theoretical estimate for the speed of sound, which is in a reasonable agreement with the experiments in strongly coupled complex plasmas.

  18. Measuring and Perceiving Changes in Oral Complexity, Accuracy and Fluency: Examining Instructed Learners' Short-Term Gains

    Science.gov (United States)

    Tonkyn, Alan Paul

    2012-01-01

    This paper reports a case study of the nature and extent of progress in speaking skills made by a group of upper intermediate instructed learners, and also assessors' perceptions of that progress. Initial and final interview data were analysed using several measures of Grammatical and Lexical Complexity, Language Accuracy and Fluency. These…

  19. A Corpus-Based Evaluation of Syntactic Complexity Measures as Indices of College-Level ESL Writers' Language Development

    Science.gov (United States)

    Lu, Xiaofei

    2011-01-01

    This article reports results of a corpus-based evaluation of 14 syntactic complexity measures as objective indices of college-level English as a second language (ESL) writers' language development. I analyzed large-scale ESL writing data from the Written English Corpus of Chinese Learners (Wen, Wang, & Liang, 2005) using a computational system…

  20. Comment on 'Interpretation of the Lempel-Ziv Complexity Measure in the context of Biomedical Signal Analysis'

    CERN Document Server

    Balasubramanian, Karthi

    2013-01-01

    In this Communication, we express our reservations on some aspects of the interpretation of the Lempel-Ziv complexity measure (LZ) by Mateo et al. in "Interpretation of the Lempel-Ziv complexity measure in the context of biomedical signal analysis," IEEE Trans. Biomed. Eng., vol. 53, no. 11, pp. 2282-2288, Nov. 2006. In particular, we comment on the dependence of the LZ complexity measure on the number of harmonics, the frequency content and amplitude modulation. We disagree with the following statements made by Mateo et al.: 1. "LZ is not sensitive to the number of harmonics in periodic signals." 2. "LZ increases as the frequency of a sinusoid increases." 3. "Amplitude modulation of a signal does not result in an increase in LZ." We show the dependence of the LZ complexity measure on harmonics and amplitude modulation by using a modified version of the synthetic signal that has been used in the original paper. Also, the second statement is a generic statement which is not entirely true. This is true only in the low freque...
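
    For reference, the LZ complexity discussed here is the phrase count of the Lempel-Ziv (1976) parsing of a binarised signal. The sketch below is the widely used Kaspar-Schuster formulation, with median binarisation of an amplitude-modulated sinusoid as our illustrative choice of test signal.

      import numpy as np

      def lz76(s):
          """Number of phrases in the Lempel-Ziv (1976) parsing of string s."""
          i, k, l = 0, 1, 1
          c, k_max, n = 1, 1, len(s)
          while True:
              if s[i + k - 1] == s[l + k - 1]:
                  k += 1
                  if l + k > n:
                      c += 1
                      break
              else:
                  k_max = max(k, k_max)
                  i += 1
                  if i == l:            # no earlier substring reproduces the phrase
                      c += 1
                      l += k_max
                      if l + 1 > n:
                          break
                      i, k, k_max = 0, 1, 1
                  else:
                      k = 1
          return c

      # binarise an AM sinusoid about its median before parsing
      t = np.linspace(0, 1, 1000)
      x = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * t))
      s = ''.join('1' if v > np.median(x) else '0' for v in x)
      print(lz76(s))

    Comparing the count for the plain and the amplitude-modulated sinusoid is exactly the kind of check the Communication uses to probe the disputed statements.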

  1. A Comparative Study of the Variables Used to Measure Syntactic Complexity and Accuracy in Task-Based Research

    Science.gov (United States)

    Inoue, Chihiro

    2016-01-01

    The constructs of complexity, accuracy and fluency (CAF) have been used extensively to investigate learner performance on second language tasks. However, a serious concern is that the variables used to measure these constructs are sometimes used conventionally without any empirical justification. It is crucial for researchers to understand how…

  2. Finalizing a measurement framework for the burden of treatment in complex patients with chronic conditions

    Directory of Open Access Journals (Sweden)

    Eton DT

    2015-03-01

    % were coping with multiple chronic conditions. A preliminary conceptual framework using data from the first 32 interviews was evaluated and modified using narrative data from 18 additional interviews with a racially and socioeconomically diverse sample of patients. The final framework features three overarching themes with associated subthemes: 1) the work patients must do to care for their health (e.g., taking medications, keeping medical appointments, monitoring health); 2) the challenges/stressors that exacerbate perceived burden (e.g., financial, interpersonal, and provider obstacles); and 3) the impacts of burden (e.g., role limitations, mental exhaustion). All themes and subthemes were subsequently confirmed in focus groups. Conclusion: The final conceptual framework can be used as a foundation for building a patient self-report measure to systematically study treatment burden for research and analytical purposes, as well as to promote meaningful clinic-based dialogue between patients and providers about the challenges inherent in maintaining complex self-management of health. Keywords: treatment burden, conceptual framework, adherence, questionnaire, self-management, multi-morbidity

  3. Measuring crustal convergence using rock exhumation along the complex glaciated Chugach Mountains, southeast Alaska

    Science.gov (United States)

    Spotila, J. A.; Buscher, J.

    2002-12-01

    Rates of rock uplift often constrain magnitudes of convergent plate motion in collisional settings. In complex orogenic belts, however, these rates can be difficult to measure. In southeast Alaska, a rapidly-evolving mountain system is centered at a syntaxial bend in the Pacific-North American plate boundary. Rugged topography of the Chugach Mountains stretches for more than 500 km along the hanging wall of the Aleutian Trench, above a colliding microplate, and as coast ranges along the Queen Charlotte-Fairweather transform fault. At each segment of the plate boundary, crustal convergence within North America should vary according to the obliquity of plate motion and the degree of underthrusting. Geodetic and neotectonic studies of rapidly-eroding structures have yet to define rates of horizontal plate motion partitioning. Surface uplift studies, based on short-term geodesy or Holocene motion of coastal landforms, are also complicated by megathrust elastic strain accumulation cycles and the viscoelastic response to recent glacial ice thinning. It is thus important to measure exhumation and the erosional transfer of mass as a proxy for the degree of upper crustal convergence accommodated by rock uplift. We have attempted to determine the exhumation pattern where the highly-deformed oceanic and continental rocks of the Yakutat microplate collide with North America. Although the total shortening rate between this microplate and North America is of the order of ~3-5 cm/yr, an unconstrained magnitude of shortening is absorbed by imbricate thrust faults within it, the suture between it and North America, and the previously accreted terranes that form the edge of the continent. We have constrained rock cooling histories as a proxy for exhumation on samples along a dense grid that spans major structural elements, including the Pamplona and Chugach-St. Elias fault systems. Apatite and zircon radiogenic helium ages provide a range in temperature sensitivity that can be used

  4. Distribution Entropy (DistEn): A complexity measure to detect arrhythmia from short length RR interval time series.

    Science.gov (United States)

    Karmakar, Chandan; Udhayakumar, Radhagayathri K; Palaniswami, Marimuthu

    2015-01-01

    Heart rate complexity analysis is a powerful non-invasive means to diagnose several cardiac ailments. Non-linear tools of complexity measurement are indispensable in order to bring out the complete non-linear behavior of physiological signals. The most popularly used non-linear tools to measure signal complexity are entropy measures like approximate entropy (ApEn) and sample entropy (SampEn). However, these methods become unreliable and inaccurate at times, in particular for short-length data. Recently, a novel method of complexity measurement called distribution entropy (DistEn) was introduced, which showed reliable performance in capturing the complexity of both short-term synthetic and short-term physiologic data. This study aims to i) examine the competence of DistEn in discriminating Arrhythmia from Normal Sinus Rhythm (NSR) subjects, using RR interval time series data; ii) explore the level of consistency of DistEn with data length N; and iii) compare the performance of DistEn with ApEn and SampEn. Sixty-six RR interval time series belonging to two groups of cardiac conditions, namely 'Arrhythmia' and 'NSR', have been used for the analysis. The data length N was varied from 50 to 1000 beats with embedding dimension m = 2 for all entropy measurements. Maximum ROC areas obtained using ApEn, SampEn and DistEn were 0.83, 0.86 and 0.94 for data lengths of 1000, 1000 and 500 beats, respectively. The results show that DistEn exhibits a consistently high performance as a classification feature in comparison with ApEn and SampEn. Therefore, DistEn shows promising behavior as a biomarker for detecting Arrhythmia from short-length RR interval data.
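
    DistEn replaces the tolerance threshold of ApEn/SampEn with the full distribution of inter-vector distances, which is what makes it stable on short series. A minimal sketch following the published definition (embed, take pairwise Chebyshev distances, histogram, normalised Shannon entropy); the bin count is our assumption.

      import numpy as np

      def distribution_entropy(x, m=2, bins=512):
          """DistEn: normalised entropy of the pairwise-distance distribution."""
          x = np.asarray(x, dtype=float)
          n = len(x) - m + 1
          emb = np.array([x[i:i + m] for i in range(n)])
          # all pairwise Chebyshev distances between embedded vectors
          d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
          d = d[np.triu_indices(n, k=1)]
          p, _ = np.histogram(d, bins=bins)
          p = p[p > 0] / p.sum()
          return -np.sum(p * np.log2(p)) / np.log2(bins)

    Because every pair of vectors contributes, even a 50-beat record yields on the order of a thousand distances, which is why no threshold parameter r is needed.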

  5. Structural measurements and cell line studies of the copper-PEG-Rifampicin complex against Mycobacterium tuberculosis.

    Science.gov (United States)

    Manning, Thomas; Mikula, Rachel; Wylie, Greg; Phillips, Dennis; Jarvis, Jackie; Zhang, Fengli

    2015-02-01

    The bacterium responsible for tuberculosis is increasing its resistance to antibiotics, resulting in new multidrug-resistant Mycobacterium tuberculosis (MDR-TB) and extensively drug-resistant tuberculosis (XDR-TB). In this study, several analytical techniques including NMR, FT-ICR, MALDI-MS, LC-MS and UV/Vis are used to study the copper-Rifampicin-polyethylene glycol (PEG-3350) complex. The copper(II) cation is a carrier for the antibiotic Rifampicin as well as for nutrients for the bacterium. The NIH-NIAID cell line containing several TB strains (including antibiotic-resistant strains) is tested against seven copper-PEG-RIF complex variations.

  6. Application of the modified Wheeler cap method for radiation efficiency measurement of balanced electrically small antennas in complex environment

    DEFF Research Database (Denmark)

    Zhang, Jiaying; Pivnenko, Sergey; Breinbjerg, Olav

    2010-01-01

    In this paper, application of a modified Wheeler cap method for the radiation efficiency measurement of balanced electrically small antennas is presented. It is shown that the limitations on the cavity dimension can be overcome and thus measurement in a large cavity is possible. The cavity loss...... is investigated, and a modified radiation efficiency formula that includes the cavity loss is introduced. Moreover, a modification of the technique is proposed that involves the antenna working complex environment inside the Wheeler Cap and thus makes possible measurement of an antenna close to a hand or head...

  7. Synthesis and characterization of norfloxacin-transition metal complexes (group 11, IB): Spectroscopic, thermal, kinetic measurements and biological activity

    Science.gov (United States)

    Refat, Moamen S.

    2007-12-01

    The investigation of the new structures of Ag(I), Cu(II) and Au(III) complexes, [Ag2(Nor)2](NO3)2, [Cu(Nor)2(H2O)2]SO4·5H2O and [Au(Nor)2(H2O)2]Cl3 (where Nor = norfloxacin), was done during the reaction of silver(I), copper(II) and gold(III) ions with the norfloxacin drug ligand. Elemental analysis of CHN, infrared, electronic, 1H NMR and mass spectra, as well as thermogravimetric analysis (TG and DTG) and conductivity measurements, have been used to characterize the isolated complexes. The powder XRD studies confirm the amorphous nature of the complexes. The norfloxacin ligand is coordinated to Ag(I) and Au(III) ions as a neutral monodentate chelate through the N atom of the piperidyl ring, but the copper(II) complex is coordinated through the carbonyl oxygen atom (quinolone group) and the oxygen atom of the carboxylic group. Norfloxacin and its metal complexes have been biologically tested, with the norfloxacin complexes showing moderate activity against gram-positive and gram-negative bacteria as well as against fungi.

  8. Pseudo-stokes vector from complex signal representation of a speckle pattern and its applications to micro-displacement measurement

    DEFF Research Database (Denmark)

    Wang, W.; Ishijima, R.; Matsuda, A.

    2010-01-01

    As an improvement of the intensity correlation used widely in conventional electronic speckle photography, we propose a new technique for displacement measurement based on correlating Stokes-like parameters derivatives for transformed speckle patterns. The method is based on a Riesz transform...... of the intensity speckle pattern, which converts the original real-valued signal into a complex signal. In closest analogy to the polarisation of a vector wave, the Stokes-like vector constructed from the spatial derivative of the generated complex signal has been applied for correlation. Experimental results...... are presented that demonstrate the validity and advantage of the proposed pseudo-Stokes vector correlation technique over conventional intensity correlation technique....

  9. Matrix measure method for global exponential stability of complex-valued recurrent neural networks with time-varying delays.

    Science.gov (United States)

    Gong, Weiqiang; Liang, Jinling; Cao, Jinde

    2015-10-01

    In this paper, based on the matrix measure method and the Halanay inequality, the global exponential stability problem is investigated for complex-valued recurrent neural networks with time-varying delays. Without constructing any Lyapunov functions, several sufficient criteria are obtained to ascertain the global exponential stability of the addressed complex-valued neural networks under different activation functions. Here, the activation functions are no longer assumed to be differentiable, which is always demanded in related references. In addition, the obtained results are easy to verify and implement in practice. Finally, two examples are given to illustrate the effectiveness of the obtained results.
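
    For readers unfamiliar with the central tool: the matrix measure (logarithmic norm) induced by the 2-norm of a complex matrix A is the largest eigenvalue of its Hermitian part, and a negative value certifies exponential contraction of the linear dynamics. A minimal sketch (the example matrix is hypothetical, not one of the paper's examples):

      import numpy as np

      def matrix_measure_2(A):
          """Logarithmic norm for the 2-norm: largest eigenvalue of (A + A^H)/2."""
          return np.max(np.linalg.eigvalsh((A + A.conj().T) / 2))

      # If mu_2(A) < 0, ||x(t)|| decays exponentially for x' = A x; this is the
      # kind of contraction argument matrix-measure stability criteria rest on.
      A = np.array([[-2 + 1j, 0.5], [0.3, -1.5 - 0.5j]])
      print(matrix_measure_2(A))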

  10. Integrating Sound Scattering Measurements in the Design of Complex Architectural Surfaces

    DEFF Research Database (Denmark)

    Peters, Brady

    2010-01-01

    Sound scattering is recognized to be one of the most important factors in the computational prediction of acoustic performance. This paper proposes a workflow for the design of complex architectural surfaces and the prediction of their sound scattering properties. This workflow includes the development...

  11. Update of a footprint-based approach for the characterisation of complex measurement sites

    DEFF Research Database (Denmark)

    Goeckede, M.; Markkanen, T.; Hasager, C.B.

    2006-01-01

    Horizontal heterogeneity can significantly affect the flux data quality at monitoring sites in complex terrain. In heterogeneous conditions, the adoption of the eddy-covariance technique is contraindicated by the lack of horizontal homogeneity and presence of advective conditions. In addition, un...

  12. An Inter-Comparison Study of Multi- and DBS Lidar Measurements in Complex Terrain

    DEFF Research Database (Denmark)

    Pauscher, Lukas; Vasiljevic, Nikola; Callies, Doron

    2016-01-01

    WindScanners were focused on one point next to a reference mast in complex terrain. This multi-lidar (ML) technique is also compared to a profiling lidar using the Doppler beam swinging (DBS) method. First- and second-order statistics of the radial wind velocities from the individual instruments...

  13. Inferring a Drive-Response Network from Time Series of Topological Measures in Complex Networks with Transfer Entropy

    Directory of Open Access Journals (Sweden)

    Xinbo Ai

    2014-11-01

    Topological measures are crucial to describe, classify and understand complex networks. Lots of measures are proposed to characterize specific features of specific networks, but the relationships among these measures remain unclear. Taking into account that pulling networks from different domains together for statistical analysis might provide incorrect conclusions, we conduct our investigation with data observed from the same network in the form of simultaneously measured time series. We synthesize a transfer entropy-based framework to quantify the relationships among topological measures, and then to provide a holistic scenario of these measures by inferring a drive-response network. Techniques from Symbolic Transfer Entropy, Effective Transfer Entropy, and Partial Transfer Entropy are synthesized to deal with challenges such as time series being non-stationary, finite sample effects and indirect effects. We resort to kernel density estimation to assess significance of the results based on surrogate data. The framework is applied to study 20 measures across 2779 records in the Technology Exchange Network, and the results are consistent with some existing knowledge. With the drive-response network, we evaluate the influence of each measure by calculating its strength, and cluster them into three classes, i.e., driving measures, responding measures and standalone measures, according to the network communities.
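
    As a minimal illustration of the core quantity in this framework: transfer entropy from X to Y measures how much the history of X reduces uncertainty about the next value of Y beyond Y's own history. The binned estimator below (history length 1, quantile bins) is our sketch; the paper itself combines symbolic, effective and partial variants to handle non-stationarity, finite samples and indirect effects.

      import numpy as np
      from collections import Counter

      def transfer_entropy(x, y, bins=3):
          """Binned transfer entropy TE(X -> Y), history length 1, in bits."""
          def discretise(v):
              edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
              return np.digitize(v, edges)

          xs, ys = discretise(np.asarray(x)), discretise(np.asarray(y))
          n = len(ys) - 1
          triples = Counter(zip(ys[1:], ys[:-1], xs[:-1]))
          pairs_yx = Counter(zip(ys[:-1], xs[:-1]))
          pairs_yy = Counter(zip(ys[1:], ys[:-1]))
          singles = Counter(ys[:-1])
          te = 0.0
          for (y1, y0, x0), c in triples.items():
              p_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
              p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
              te += (c / n) * np.log2(p_full / p_self)
          return te

    An asymmetry TE(X -> Y) > TE(Y -> X) is what lets the framework orient an edge from a "driving" measure toward a "responding" one.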

  14. Change ΔS of the entropy in natural time under time reversal: Complexity measures upon change of scale

    Science.gov (United States)

    Sarlis, N. V.; Christopoulos, S.-R. G.; Bemplidaki, M. M.

    2015-01-01

    The entropy S in natural time as well as the entropy in natural time under time reversal S- have already found useful applications in the physics of complex systems, e.g., in the analysis of electrocardiograms (ECGs). Here, we focus on the complexity measures Λl which result upon considering how the statistics of the time series ΔS (≡ S - S-) changes upon varying the scale l. These scale-specific measures are ratios of the standard deviations σ(ΔSl) and hence independent of the mean value and the standard deviation of the data. They focus on the different dynamics that appear on different scales. For this reason, they can be considered complementary to other standard measures of heart rate variability in ECG, like SDNN, as well as other complexity measures already defined in natural time. An application to the analysis of ECG (when solely using NN intervals) is presented: We show how Λl can be used to separate ECG of healthy individuals from those suffering from congestive heart failure and sudden cardiac death.
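
    To unpack the notation: for a window of N events with normalised weights p_k and natural time chi_k = k/N, the entropy in natural time is S = sum(p_k chi_k ln chi_k) - (sum(p_k chi_k)) ln(sum(p_k chi_k)), and S- is the same quantity computed on the time-reversed window. The sketch below computes ΔS over sliding windows of scale l; the normalisation of Λl to a reference scale l0 is our assumption, as the paper defines the exact ratio.

      import numpy as np

      def natural_time_entropy(q):
          """Entropy in natural time for a window of event magnitudes q_k."""
          q = np.asarray(q, dtype=float)
          p = q / q.sum()
          chi = np.arange(1, len(q) + 1) / len(q)        # natural time chi_k = k/N
          avg = np.sum(p * chi)
          return np.sum(p * chi * np.log(chi)) - avg * np.log(avg)

      def delta_s(q, l):
          """Delta S_l = S - S(time-reversed) over sliding windows of length l."""
          return np.array([natural_time_entropy(q[i:i + l])
                           - natural_time_entropy(q[i:i + l][::-1])
                           for i in range(len(q) - l + 1)])

      def lambda_l(q, l, l0=3):
          # ratio of standard deviations across scales; reference scale l0 assumed
          return np.std(delta_s(q, l)) / np.std(delta_s(q, l0))

    Being a ratio of standard deviations of the same derived series, the measure is unaffected by rescaling the NN intervals, which is the independence property the abstract emphasises.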

  15. Application of the modified Wheeler cap method for radiation efficiency measurement of balanced electrically small antennas in complex environment

    DEFF Research Database (Denmark)

    Zhang, Jiaying; Pivnenko, Sergey; Breinbjerg, Olav

    2010-01-01

    In this paper, application of a modified Wheeler cap method for the radiation efficiency measurement of balanced electrically small antennas is presented. It is shown that the limitations on the cavity dimension can be overcome and thus measurement in a large cavity is possible. The cavity loss is investigated, and a modified radiation efficiency formula that includes the cavity loss is introduced. Moreover, a modification of the technique is proposed that involves the antenna working in a complex environment inside the Wheeler cap, and thus makes possible measurement of an antenna close to a hand or head phantom. The measurement procedures are described and the key features of the technique are discussed. The results of simulations and measurements by the proposed method are presented and compared.

  16. Instrumentation Suite for Acoustic Propagation Measurements in Complex Shallow Water Environments

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Obtain at-sea measurements to test theoretical and modeling predictions of acoustic propagation in dynamic, inhomogeneous, and nonisotropic shallow water...

  17. Measuring social complexity and the emergence of cooperation from entropic principles

    CERN Document Server

    López-Corona, O; Huerta, A; Mustri-Trejo, D; Perez, K; Ruiz, A; Valdés, O; Zamudio, F

    2015-01-01

    Assessing quantitatively the state and dynamics of a social system is a very difficult problem. It is of great importance for both practical and theoretical reasons, such as establishing the efficiency of social action programs, detecting possible community needs or allocating resources. In this paper we propose a new general theoretical framework for the study of social complexity, based on the relation between complexity and entropy in combination with evolutionary dynamics to assess the dynamics of the system. Imposing the second law of thermodynamics, we study the conditions under which cooperation emerges and demonstrate that it depends on the relative importance of local and global fitness. As cooperation is a central concept in sustainability, this thermodynamic-informational approach allows new insights and means to assess it using the concept of Helmholtz free energy. Finally we introduce a new set of equations that consider the more general case where the social system changes both in time and space, and relate ...

  18. Force and complexity of tongue task training influences behavioral measures of motor learning

    DEFF Research Database (Denmark)

    Kothari, Mohit; Svensson, Peter; Huo, Xueliang;

    2012-01-01

    Relearning of motor skills is important in neurorehabilitation. We investigated the improvement of training success during simple tongue protrusion (two force levels) and a more complex tongue-training paradigm using the Tongue Drive System (TDS). We also compared subject-based reports of fun, pain, fatigue, and motivation between paradigms. Three randomized sessions and one control experiment were performed. Sixteen healthy subjects completed two different 1-h sessions of simple tongue training with 1 N and 3 N, respectively, and one TDS session. After 1 wk, six out of 16 subjects participated … the experienced group performed equal to the last 5 min of their first TDS session and neither group improved during rest. Training with the TDS was rated as more fun, less painful, less fatiguing, and more motivating compared with simple tongue training. In conclusion, force level and complexity of tongue...

  19. Entropy measures, entropy estimators, and their performance in quantifying complex dynamics: Effects of artifacts, nonstationarity, and long-range correlations

    Science.gov (United States)

    Xiong, Wanting; Faes, Luca; Ivanov, Plamen Ch.

    2017-06-01

    Entropy measures are widely applied to quantify the complexity of dynamical systems in diverse fields. However, the practical application of entropy methods is challenging, due to the variety of entropy measures and estimators and the complexity of real-world time series, including nonstationarities and long-range correlations (LRC). We conduct a systematic study on the performance, bias, and limitations of three basic measures (entropy, conditional entropy, information storage) and three traditionally used estimators (linear, kernel, nearest neighbor). We investigate the dependence of entropy measures on estimator- and process-specific parameters, and we show the effects of three types of nonstationarities due to artifacts (trends, spikes, local variance change) in simulations of stochastic autoregressive processes. We also analyze the impact of LRC on the theoretical and estimated values of entropy measures. Finally, we apply entropy methods on heart rate variability data from subjects in different physiological states and clinical conditions. We find that entropy measures can only differentiate changes of specific types in cardiac dynamics and that appropriate preprocessing is vital for correct estimation and interpretation. Demonstrating the limitations of entropy methods and shedding light on how to mitigate bias and provide correct interpretations of results, this work can serve as a comprehensive reference for the application of entropy methods and the evaluation of existing studies.
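
    Of the estimators compared, the linear one is the most compact: for a Gaussian process, entropy, conditional entropy and information storage follow from the process variance and the residual variance of a linear autoregression. A sketch under that assumption (ours, not the paper's code):

```python
import numpy as np

def linear_entropy_measures(x, order=2):
    """Gaussian ('linear') estimators, in nats: entropy H from the process
    variance, conditional entropy Hc from the residual variance of a linear
    autoregression on `order` past values, information storage S = H - Hc."""
    x = np.asarray(x, float)
    n = len(x)
    h = 0.5 * np.log(2 * np.pi * np.e * np.var(x))
    past = np.column_stack([x[order - 1 - k:n - 1 - k] for k in range(order)])
    A = np.column_stack([np.ones(n - order), past])
    y = x[order:]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    hc = 0.5 * np.log(2 * np.pi * np.e * np.var(y - A @ beta))
    return h, hc, h - hc

# sanity check on an AR(1) process: the information storage should approach
# -0.5 * ln(1 - 0.8**2) ~ 0.51 nats
rng = np.random.default_rng(0)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
print(linear_entropy_measures(x))
```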

  20. Aspects of quality control of wind profiler measurements in complex topography

    OpenAIRE

    Maruri, M.; J. A. Romo; L. Gomez

    2014-01-01

    It is well known in the scientific community that some remote sensing instruments assume that sample volumes present homogeneous conditions within a defined meteorological profile. At complex topographic sites and under extreme meteorological conditions, this assumption may be fallible depending on the site, and it is more likely to fail in the lower layers of the atmosphere. This piece of work tests the homogeneity of the wind field over a boundary layer wind profiler radar...

  1. Quality aspects of the measurements of a wind profiler in a complex topography

    OpenAIRE

    Maruri, M.; J. A. Romo; L. Gomez

    2013-01-01

    It is well known amongst the scientific community that some remote sensing instruments have assumed that sample volumes present homogeneous conditions within a defined meteorological profile. At complex topographic sites and under extreme meteorological conditions, this assumption may be fallible depending on the site, and it is more likely to fail in the lower layers of the atmosphere. This piece of work tests the homogeneity of the wind field over a boundary layer wind profiler radar locate...

  2. On complex Langevin dynamics and zeroes of the measure I: Formal proof and simple models

    CERN Document Server

    Aarts, Gert; Sexty, Denes; Stamatescu, Ion-Olimpiu

    2016-01-01

    In the complex Langevin approach to lattice simulations at nonzero density, zeroes of the fermion determinant lead to a meromorphic drift and hence a need to revisit the theoretical derivation. We discuss how poles in the drift affect the formal justification of the approach and then explore the various potential issues in simple models, in a manner that is applicable to heavy dense and full QCD.

  3. High-precision optical measuring instruments and their application as part of mobile diagnostic complexes

    Directory of Open Access Journals (Sweden)

    Igor Miroshnichenko

    2014-04-01

    Full Text Available The article presents results of applying laser technologies and optical interferometry methods to record information in the quality control and diagnostics of construction materials and of power elements subject to acoustic non-destructive testing. It also describes new technical solutions that allow the results obtained to be applied to practical diagnostic problems for products in operation, as part of mobile diagnostic complexes.

  4. MEASURING ACCURACY AND COMPLEXITY OF AN L2 LEARNER’S ORAL PRODUCTION

    Directory of Open Access Journals (Sweden)

    Teguh Khaerudin

    2015-03-01

    Full Text Available This paper aims at examining the influence of different tasks on the degree of task performance in a second language learner’s oral production. The underlying assumption is that among the three aspects of language performance in L2, i.e. fluency, accuracy, and complexity, learners may prioritize only one of them (Ellis & Barkhuizen, 2005, p. 150) and that their decision to prioritize one particular area of language performance may be determined by the characteristics of the task given to the learners (Skehan & Foster, 1997). Having a written record of an oral production, the writer focuses this study on determining the degree of complexity and accuracy, and on analyzing whether the different tasks change the level of the learner’s oral performance. The results show that the learner’s accuracy on both tasks remains at the same level. However, both task conditions, which do not allow speech planning, result in no improvement in the accuracy level and a minor improvement in the complexity level.

  5. Efficacy of dynamic traffic management measures: the influence of complexity and situational awareness

    NARCIS (Netherlands)

    Hoogendoorn, R.; Vreeswijk, J.D.; Hoogendoorn, S.P.; Brookhuis, K.A.; Arem, van B.; Berkum, van E.C.

    2012-01-01

    Behavior of road users (e.g. route choice, driving behavior) is a critical factor in the efficacy of measures applied in the context of Dynamic Traffic Management (DTM). In order for drivers to make well-informed decisions, it is required that information provided by DTM measures is perceived. In th

  6. Efficacy of dynamic traffic management measures: the influence of complexity and situational awareness

    NARCIS (Netherlands)

    Hoogendoorn, R.; Vreeswijk, Jacob Dirk; Hoogendoorn, S.P.; Brookhuis, K.A.; van Arem, Bart; van Berkum, Eric C.

    2012-01-01

    Behavior of road users (e.g. route choice, driving behavior) is a critical factor in the efficacy of measures applied in the context of Dynamic Traffic Management (DTM). In order for drivers to make well-informed decisions, it is required that information provided by DTM measures is perceived. In

  7. Potential for measurement of the distribution of DNA folds in complex environments using Correlated X-ray Scattering

    Science.gov (United States)

    Schenk, Gundolf; Krajina, Brad; Spakowitz, Andrew; Doniach, Sebastian

    2016-12-01

    In vivo chromosomal behavior is dictated by the organization of genomic DNA at length scales ranging from nanometers to microns. At these disparate scales, the DNA conformation is influenced by a range of proteins that package, twist and disentangle the DNA double helix, leading to a complex hierarchical structure that remains undetermined. Thus, there is a critical need for methods of structural characterization of DNA that can accommodate complex environmental conditions over biologically relevant length scales. Based on multiscale molecular simulations, we report on the possibility of measuring supercoiling in complex environments using angular correlations of scattered X-rays resulting from X-ray free electron laser (xFEL) experiments. We recently demonstrated the observation of structural detail for solutions of randomly oriented metallic nanoparticles [D. Mendez et al., Philos. Trans. R. Soc. B 360 (2014) 20130315]. Here, we argue, based on simulations, that correlated X-ray scattering (CXS) has the potential for measuring the distribution of DNA folds in complex environments, on the scale of a few persistence lengths.

  8. Tracking of Maneuvering Complex Extended Object with Coupled Motion Kinematics and Extension Dynamics Using Range Extent Measurements.

    Science.gov (United States)

    Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin

    2017-09-22

    The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects' extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches.
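
    The Minkowski sum that underpins the extension model combines sub-object shapes into one composite extent. A brute-force sketch of that operation for convex polygons (our illustration of the geometric primitive, not the paper's tracking implementation):

```python
import numpy as np

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return np.array(pts)
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, pts[::-1])):
        for p in seq:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return np.array(lower[:-1] + upper[:-1])

def minkowski_sum(p, q):
    """Minkowski sum of two convex polygons ((n,2) vertex arrays): the
    convex hull of all pairwise vertex sums (O(n*m), fine for small shapes)."""
    sums = (np.asarray(p)[:, None, :] + np.asarray(q)[None, :, :]).reshape(-1, 2)
    return convex_hull(sums)

# e.g. a unit-square sub-object combined with a segment-shaped sub-object
print(minkowski_sum([(0, 0), (1, 0), (1, 1), (0, 1)], [(0, 0), (2, 1)]))
```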

  9. A convenient method for complex permittivity measurement of thin materials at microwave frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B K [Faculty of Engineering, Multimedia University, 63100 Cyberjaya (Malaysia)

    2006-05-07

    A practical problem in the reflection method for measuring the permittivity of thin materials is the difficulty of ensuring that the sample is placed exactly at the waveguide flange. A small position offset of the dielectric slab will give rise to significant errors in calculating the permittivity. To circumvent this problem, a measurement method using a waveguide partially filled with a thin material slab has been developed. The material sample can be easily prepared and inserted into the guide through a longitudinal slot on the broad wall of the waveguide. Multiple material slabs can be measured rapidly because one does not have to disconnect the waveguide system for sample placement. The method is verified with measurements of Teflon, glass and FR4 fibreglass. The measured permittivities show good agreement with published data. Subsequently, the permittivity of a vegetation leaf was measured. The method presented in this paper is particularly useful for measuring the permittivity of a thin and narrow slab of natural material such as a paddy leaf.

  10. Measurement of the infrared complex Faraday angle in semiconductors and insulators

    CERN Document Server

    Kim, M -H; Acbas, G; Ellis, C T; Cerne, J

    2009-01-01

    We measure the infrared (wavelength 11 - 0.8 microns; energy E = 0.1 - 1.5 eV) Faraday rotation and ellipticity in GaAs, BaF2, LaSrGaO4, LaSrAlO4, and ZnSe. Since these materials are commonly used as substrates and windows in infrared magneto-optical measurements, it is important to measure their Faraday signals for background subtraction. These measurements also provide a rigorous test of the accuracy and sensitivity of our unique magneto-polarimetry system. The light sources used in these measurements consist of gas and semiconductor lasers, which cover 0.1 - 1.3 eV, as well as a custom-modified prism monochromator with a Xe lamp, which allows continuous broadband measurements in the 0.28 - 1.5 eV energy range. The sensitivity of this broad-band system is approximately 10 micro-rad. Our measurements reveal that the Verdet coefficients of these materials are proportional to $1/\lambda^2$, which is expected when probing with radiation energies below the band gap. Reproducible ellipticity signals are also seen,...
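
    The reported 1/λ² behaviour of the Verdet coefficient amounts to a one-parameter fit; a sketch with made-up illustrative numbers (not the paper's data):

```python
import numpy as np

# Fit V(lambda) = a / lambda^2 by least squares through the origin in
# x = 1/lambda^2; the slope is a = sum(x*V) / sum(x^2).
lam = np.array([1.1, 2.0, 5.0, 10.0])        # wavelength, microns (illustrative)
verdet = np.array([82.0, 25.0, 4.1, 1.0])    # Verdet coefficient (illustrative)
x = 1.0 / lam**2
a = np.sum(x * verdet) / np.sum(x * x)
print(a, a * x)  # fitted coefficient and the modelled Verdet values
```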

  11. Developing palaeolimnological records of organic content (DOC and POC) using the UK Acid Water Monitoring Network sites

    Science.gov (United States)

    Russell, Fiona; Chiverrell, Richard; Boyle, John

    2016-04-01

    Monitoring programmes have shown increases in concentrations of dissolved organic matter (DOM) in the surface waters of northern and central Europe (Monteith et al. 2007), and negative impacts of the browning of river waters have been reported for fish populations (Jonsson et al. 2012; Ranaker et al. 2012) and for ecosystem services such as water treatment (Tuvendal and Elmqvist 2011). Still, the exact causes of the recent browning remain uncertain, the main contenders being climate change (Evans et al. 2005) and reduced ionic strength in surface water resulting from declines in anthropogenic sulphur and sea salt deposition (Monteith et al. 2007). There is a need to better understand the pattern, drivers and trajectory of these increases in DOC and POC in both recent and longer-term (Holocene) contexts to improve the understanding of carbon cycling within lakes and their catchments. In Britain there are some ideal sites for testing whether these trends are preserved and for developing methods for reconstructing organic fluxes from lake sedimentary archives. There is a suite of lakes distributed across the country, the UK Acid Waters Monitoring Network (UKAWMN) sites, which have been monitored monthly for dissolved organic carbon and other aqueous species since 1988. These 12 lakes have well-studied recent and in some cases whole-Holocene sediment records. Here four of those lakes (Grannoch, Chon, Scoat Tarn and Cwm Mynach) are revisited, with sampling focused on the sediment-water interface and very recent sediments (approx. 150 years). At Scoat Tarn (approx. 1000 years) and Llyn Mynach (11.5 kyr) longer records have been obtained to assess equivalent patterns through the Holocene. Analyses of the gravity cores have focused on measuring and characterising the organic content for comparison with recorded surface water DOC measurements (UKAWMN). Data from pyrolysis measurements (TGA/DSC) in an N2 atmosphere show that the mass loss between 330 and 415 °C correlates well with

  12. Measurements in Vacuum of the Effect of Ilmenite on the Complex Dielectric Permittivity of Planetary Regolith Analog Materials

    Science.gov (United States)

    Boivin, A.; Hickson, D.; Cunje, A.; Tsai, C. A.; Ghent, R. R.; Daly, M. G.

    2016-12-01

    When considering radar observations of airless bodies containing regolith, the radar backscattering coefficient is dependent on both the complex permittivity and the thickness of the regolith. The complex permittivity is typically normalized by the permittivity of free space (ɛ0) and reported as the relative permittivity (ɛr = ɛr' + iɛr'', where ɛr' is the dielectric constant and ɛr'' is the loss factor). Given the backscattering coefficient and the dielectric properties of the regolith, it should be possible to determine regolith thickness. This problem has long been considered for the Moon and many measurements of either real or complex permittivity have been made on both Apollo samples and regolith analogues. Measurements thus far have either only been done at a lower frequency range or … Samples are baked at 250 °C for 48 h and are then placed in a vacuum chamber. Measurements are then made using a sweep of frequencies from 300 kHz to 8.5 GHz. Preliminary results show that ilmenite significantly influences signal attenuation, especially at high concentrations.

  13. Assessment of the microclimatic and human comfort conditions in a complex urban environment: Modelling and measurements

    Energy Technology Data Exchange (ETDEWEB)

    Gulyas, Agnes; Unger, Janos [University of Szeged, Szeged (Hungary). Department of Climatology and Landscape Ecology; Matzarakis, Andreas [Meteorological Institute, University of Freiburg, Freiburg im Breisgau (Germany)

    2006-12-15

    Several complex thermal indices (e.g. Predicted Mean Vote and Physiological Equivalent Temperature) were developed in the last decades to describe and quantify the thermal environment of humans and the energy fluxes between body and environment. Compared to open spaces/landscapes, the complex surface structure of urban areas creates an environment with special microclimatic characteristics, which have a dominant effect on the energy balance of the human body. In this study, outdoor thermal comfort conditions are examined through two field surveys in Szeged, a South-Hungarian city (population 160,000). The intensity of radiation fluxes is dependent on several factors, such as surface structure and housing density. Since our sample area is located in a heavily built-up city centre, radiation fluxes are mainly influenced by narrow streets and several 20-30-year-old (20-30 m tall) trees. Special emphasis is given to the human-biometeorological assessment of the microclimate of complex urban environments through the application of the thermal index PET. The analysis is carried out by the utilization of the RayMan model. Firstly, bioclimatic conditions of sites located close to each other but shaded differently by buildings and plants are compared. The results show that differences in the PET index amongst these places can be as high as 15-20 °C due to the different irradiation. Secondly, the investigation of different modelled environments by RayMan (only buildings, buildings+trees and only trees) shows significant alterations in the human comfort sensation between the situations. (author)

  14. Measuring orthographic transparency and morphological-syllabic complexity in alphabetic orthographies: a narrative review

    NARCIS (Netherlands)

    Borleffs, Elisabeth; Maassen, Bernardus; Lyytinen, Heikki; Zwarts, Frans

    2017-01-01

    This narrative review discusses quantitative indices measuring differences between alphabetic languages that are related to the process of word recognition. The specific orthography that a child is acquiring has been identified as a central element influencing reading acquisition and dyslexia.

  15. Amines are likely to enhance neutral and ion-induced sulfuric acid-water nucleation in the atmosphere more effectively than ammonia

    Directory of Open Access Journals (Sweden)

    T. Kurtén

    2008-07-01

    Full Text Available We have studied the structure and formation thermodynamics of dimer clusters containing H2SO4 or HSO4− together with ammonia and seven different amines possibly present in the atmosphere, using the high-level ab initio methods RI-MP2 and RI-CC2. As expected from e.g. proton affinity data, the binding of all studied amine-H2SO4 complexes is significantly stronger than that of NH3•H2SO4, while most amine-HSO4− complexes are only somewhat more strongly bound than NH3•HSO4−. Further calculations on larger cluster structures containing dimethylamine or ammonia together with two H2SO4 molecules or one H2SO4 molecule and one HSO4− ion demonstrate that amines, unlike ammonia, significantly assist the growth of not only neutral but also ionic clusters along the H2SO4 co-ordinate. A sensitivity analysis indicates that the difference in complexation free energies for amine- and ammonia-containing clusters is large enough to overcome the mass-balance effect caused by the fact that the concentration of amines in the atmosphere is probably 2 or 3 orders of magnitude lower than that of ammonia. This implies that amines might be more important than ammonia in enhancing neutral and especially ion-induced sulfuric acid-water nucleation in the atmosphere.

  16. Amines are likely to enhance neutral and ion-induced sulfuric acid-water nucleation in the atmosphere more effectively than ammonia

    Directory of Open Access Journals (Sweden)

    T. Kurtén

    2008-04-01

    Full Text Available We have studied the structure and formation thermodynamics of dimer clusters containing H2SO4 or HSO4− together with ammonia and seven different amines possibly present in the atmosphere, using the high-level ab initio methods RI-MP2 and RI-CC2. As expected from e.g. proton affinity data, the binding of all studied amine – H2SO4 complexes is significantly stronger than that of NH3•H2SO4, while most amine – HSO4− complexes are only somewhat more strongly bound than NH3•HSO4−. Further calculations on larger cluster structures containing dimethylamine or ammonia together with two H2SO4 molecules or one H2SO4 molecule and one HSO4− ion demonstrate that amines, unlike ammonia, significantly assist the growth of not only neutral but also ionic clusters along the H2SO4 co-ordinate. A sensitivity analysis indicates that the difference in complexation free energies for amine- and ammonia-containing clusters is large enough to overcome the mass-balance effect caused by the fact that the concentration of amines in the atmosphere is probably 2 or 3 orders of magnitude lower than that of ammonia. This implies that amines might be more important than ammonia in enhancing neutral and especially ion-induced sulfuric acid-water nucleation in the atmosphere.

  17. A new strip line broad-band measurement evaluation for determining the complex permeability of thin ferromagnetic films

    Energy Technology Data Exchange (ETDEWEB)

    Bekker, V.; Seemann, K. E-mail: klaus.seemann@imf.fzk.de; Leiste, H

    2004-04-01

    In the present paper, a new method for determining the frequency-dependent complex permeability of thin magnetic films, designed for measurements up to 5 GHz, is presented. The measurement technique described here was carried out by a one-port permeameter, which is based on a short-circuited strip line. The complex permeability was deduced by a new analytical approach from the measured reflection coefficient of a strip line (S11) with and without a ferromagnetic film material inside. An adaptive error correction was applied in the measurement procedure. The spectral permeability of thin FeCoAlN films with an in-plane uniaxial anisotropy of μ0Ha = 3.2 mT induced by annealing at CMOS temperatures in a static magnetic field was investigated. The measurements were compared with a theoretical model taking the Landau-Lifshitz and eddy current theories into account. A resonant frequency of about 1.6 GHz was observed.

  18. Measurement of reflected second harmonics and nonlinearity parameter using a transducer with complex structure

    Institute of Scientific and Technical Information of China (English)

    MA Qingyu; LU Rongrong; ZHANG Dong; GONG Xiufen; LIU Xiaozhou

    2003-01-01

    Measurement of the nonlinearity parameter using the second-harmonic reflective model is studied. A new kind of compound transducer is designed and fabricated for this purpose. With this transducer and the finite amplitude insert-substitution method, an experimental system to measure the nonlinearity parameter using the reflective model is developed. B/A values of some liquids and biological tissues are obtained, and the results coincide well with those presented in the literature.

  19. Using complexity metrics with R-R intervals and BPM heart rate measures

    OpenAIRE

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are ...

  20. Development of a Symmetric Ring Junction as a Four-Port Reflectometer for Complex Reflection Coefficient Measurements

    Directory of Open Access Journals (Sweden)

    K.Y. Lee

    2015-12-01

    Full Text Available The six-port reflectometer is well known for its ability to measure the magnitude and phase shift of a microwave signal using four power detectors that perform magnitude-only measurements. This paper presents the development of an innovative symmetric ring junction as a four-port reflectometer for complex reflection coefficient measurements. It reduces the number of required detectors to two. Design optimization, new calibration modeling and the algorithm are discussed in detail for this four-port reflectometer. The developed four-port reflectometer is compared to a five-port reflectometer and a vector network analyzer. It is found that the measured magnitude and phase shift show good performance in comparison with the commercial vector network analyzer and the five-port reflectometer.

  1. Application of Image Measurement and Continuum Mechanics to the Direct Measurement of Two-Dimensional Finite Strain in a Complex Fibro-Porous Material

    Science.gov (United States)

    Britton, Paul; Loughran, Jeff

    This paper outlines a computational procedure that has been implemented for the direct measurement of finite material strains from digital images taken of a material surface during plane-strain process experiments. The selection of both hardware and software components of the image processing system is presented, and the numerical procedures developed for measuring the 2D material deformations are described. The algorithms are presented with respect to two-roll milling of sugar cane bagasse, a complex fibro-porous material that undergoes large strains during processing to extract the sucrose-rich liquid. Elaborations are made in regard to numerical developments for other forms of experimentation, algorithm calibrations and measurement improvements. Finite 2D strain results are shown for both confined uniaxial compression and two-roll milling experiments.

  2. Combining classification with fMRI-derived complex network measures for potential neurodiagnostics.

    Directory of Open Access Journals (Sweden)

    Tomer Fekete

    Full Text Available Complex network analysis (CNA), a subset of graph theory, is an emerging approach to the analysis of functional connectivity in the brain, allowing quantitative assessment of network properties such as functional segregation, integration, resilience, and centrality. Here, we show how a classification framework complements complex network analysis by providing an efficient and objective means of selecting the best network model characterizing given functional connectivity data. We describe a novel kernel-sum learning approach, block diagonal optimization (BDopt), which can be applied to CNA features to single out graph-theoretic characteristics and/or anatomical regions of interest underlying discrimination, while mitigating problems of multiple comparisons. As a proof of concept for the method's applicability to future neurodiagnostics, we apply BDopt classification to two resting state fMRI data sets: a trait (between-subjects) classification of patients with schizophrenia vs. controls, and a state (within-subjects) classification of wake vs. sleep, demonstrating powerful discriminant accuracy for the proposed framework.

  3. Comparison of Measured and Numerically Simulated Turbulence Statistics in a Convective Boundary Layer Over Complex Terrain

    Science.gov (United States)

    Rai, Raj K.; Berg, Larry K.; Kosović, Branko; Mirocha, Jeffrey D.; Pekour, Mikhail S.; Shaw, William J.

    2016-11-01

    The Weather Research and Forecasting (WRF) model can be used to simulate atmospheric processes ranging from quasi-global to tens of m in scale. Here we employ large-eddy simulation (LES) using the WRF model, with the LES-domain nested within a mesoscale WRF model domain with grid spacing decreasing from 12.15 km (mesoscale) to 0.03 km (LES). We simulate real-world conditions in the convective planetary boundary layer over an area of complex terrain. The WRF-LES model results are evaluated against observations collected during the US Department of Energy-supported Columbia Basin Wind Energy Study. Comparison of the first- and second-order moments, turbulence spectrum, and probability density function of wind speed shows good agreement between the simulations and observations. One key result is to demonstrate that a systematic methodology needs to be applied to select the grid spacing and refinement ratio used between domains, to avoid having a grid resolution that falls in the grey zone and to minimize artefacts in the WRF-LES model solutions. Furthermore, the WRF-LES model variables show large variability in space and time caused by the complex topography in the LES domain. Analyses of WRF-LES model results show that the flow structures, such as roll vortices and convective cells, vary depending on both the location and time of day as well as the distance from the inflow boundaries.

  4. Comparison of Measured and Numerically Simulated Turbulence Statistics in a Convective Boundary Layer Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Raj K.; Berg, Larry K.; Kosović, Branko; Mirocha, Jeffrey D.; Pekour, Mikhail S.; Shaw, William J.

    2016-11-25

    High-resolution numerical simulation can provide insight into important physical processes that occur within the planetary boundary layer (PBL). The present work employs large eddy simulation (LES) using the Weather Research and Forecasting (WRF) model, with the LES domain nested within a mesoscale simulation, to simulate real conditions in the convective PBL over an area of complex terrain. A multiple-nesting approach has been used to downsize the grid spacing from 12.15 km (mesoscale) to 0.03 km (LES). A careful selection of grid spacing in the WRF mesoscale domain has been conducted to minimize artifacts in the WRF-LES solutions. The WRF-LES results have been evaluated with in situ and remote sensing observations collected during the US Department of Energy-supported Columbia Basin Wind Energy Study (CBWES). Comparison of the first- and second-order moments, turbulence spectrum, and probability density function (PDF) of wind speed shows good agreement between the simulations and data. Furthermore, the WRF-LES variables show a great deal of variability in space and time caused by the complex topography in the LES domain. The WRF-LES results show that the flow structures, such as roll vortices and convective cells, vary depending on both the location and time of day. In addition to basic studies related to boundary-layer meteorology, results from these simulations can be used in other applications, such as studying wind energy resources, atmospheric dispersion, fire weather, etc.

  5. Comparison of Measured and Numerically Simulated Turbulence Statistics in a Convective Boundary Layer Over Complex Terrain

    Science.gov (United States)

    Rai, Raj K.; Berg, Larry K.; Kosović, Branko; Mirocha, Jeffrey D.; Pekour, Mikhail S.; Shaw, William J.

    2017-04-01

    The Weather Research and Forecasting (WRF) model can be used to simulate atmospheric processes ranging from quasi-global to tens of m in scale. Here we employ large-eddy simulation (LES) using the WRF model, with the LES-domain nested within a mesoscale WRF model domain with grid spacing decreasing from 12.15 km (mesoscale) to 0.03 km (LES). We simulate real-world conditions in the convective planetary boundary layer over an area of complex terrain. The WRF-LES model results are evaluated against observations collected during the US Department of Energy-supported Columbia Basin Wind Energy Study. Comparison of the first- and second-order moments, turbulence spectrum, and probability density function of wind speed shows good agreement between the simulations and observations. One key result is to demonstrate that a systematic methodology needs to be applied to select the grid spacing and refinement ratio used between domains, to avoid having a grid resolution that falls in the grey zone and to minimize artefacts in the WRF-LES model solutions. Furthermore, the WRF-LES model variables show large variability in space and time caused by the complex topography in the LES domain. Analyses of WRF-LES model results show that the flow structures, such as roll vortices and convective cells, vary depending on both the location and time of day as well as the distance from the inflow boundaries.

  6. Towards a methodology for validation of centrality measures in complex networks.

    Directory of Open Access Journals (Sweden)

    Komal Batool

    Full Text Available BACKGROUND: Living systems are associated with Social networks - networks made up of nodes, some of which may be more important in various aspects as compared to others. While different quantitative measures labeled as "centralities" have previously been used in the network analysis community to find out influential nodes in a network, it is debatable how valid the centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? So, as an example, if a centrality of a particular node identifies it to be important, is the node actually important? PURPOSE: The goal of this paper is not just to perform a traditional social network analysis but rather to evaluate different centrality measures by conducting an empirical study analyzing exactly how do network centralities correlate with data from published multidisciplinary network data sets. METHOD: We take standard published network data sets while using a random network to establish a baseline. These data sets included the Zachary's Karate Club network, dolphin social network and a neural network of nematode Caenorhabditis elegans. Each of the data sets was analyzed in terms of different centrality measures and compared with existing knowledge from associated published articles to review the role of each centrality measure in the determination of influential nodes. RESULTS: Our empirical analysis demonstrates that in the chosen network data sets, nodes which had a high Closeness Centrality also had a high Eccentricity Centrality. Likewise high Degree Centrality also correlated closely with a high Eigenvector Centrality. Whereas Betweenness Centrality varied according to network topology and did not demonstrate any noticeable pattern. In terms of identification of key nodes, we discovered that as compared with other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify

  7. Towards a methodology for validation of centrality measures in complex networks.

    Science.gov (United States)

    Batool, Komal; Niazi, Muaz A

    2014-01-01

    Living systems are associated with Social networks - networks made up of nodes, some of which may be more important in various aspects as compared to others. While different quantitative measures labeled as "centralities" have previously been used in the network analysis community to find out influential nodes in a network, it is debatable how valid the centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? So, as an example, if a centrality of a particular node identifies it to be important, is the node actually important? The goal of this paper is not just to perform a traditional social network analysis but rather to evaluate different centrality measures by conducting an empirical study analyzing exactly how do network centralities correlate with data from published multidisciplinary network data sets. We take standard published network data sets while using a random network to establish a baseline. These data sets included the Zachary's Karate Club network, dolphin social network and a neural network of nematode Caenorhabditis elegans. Each of the data sets was analyzed in terms of different centrality measures and compared with existing knowledge from associated published articles to review the role of each centrality measure in the determination of influential nodes. Our empirical analysis demonstrates that in the chosen network data sets, nodes which had a high Closeness Centrality also had a high Eccentricity Centrality. Likewise high Degree Centrality also correlated closely with a high Eigenvector Centrality. Whereas Betweenness Centrality varied according to network topology and did not demonstrate any noticeable pattern. In terms of identification of key nodes, we discovered that as compared with other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify important nodes.
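
    The reported correlations are easy to probe on the same public networks; a sketch using networkx and Spearman rank correlation (eccentricity is inverted so that larger values mean more central, which is our convention, not necessarily the paper's):

```python
import networkx as nx
from scipy.stats import spearmanr

G = nx.karate_club_graph()  # Zachary's Karate Club, one of the data sets used

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    # invert eccentricity so that larger values mean more central
    "eccentricity": {v: 1.0 / e for v, e in nx.eccentricity(G).items()},
}

nodes = list(G.nodes)
names = list(centralities)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        rho, _ = spearmanr([centralities[a][v] for v in nodes],
                           [centralities[b][v] for v in nodes])
        print(f"{a:>12} vs {b:<12} Spearman rho = {rho:5.2f}")
```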

  8. Noise exposure assessment with task-based measurement in complex noise environment

    Institute of Scientific and Technical Information of China (English)

    LI Nan; YANG Qiu-ling; ZENG Lin; ZHU Liang-liang; TAO Li-yuan; ZHANG Hua; ZHAO Yi-ming

    2011-01-01

    Background Task-based measurement (TBM) is a method, besides dosimetry, to assess the eight-hour A-weighted equivalent noise exposure level (LAeq,8h). TBM can be more readily used in factories by non-professional workers and staff. However, it is still not clear whether TBM is equivalent or similar to dosimetry for LAeq,8h measurement in general. This study treated the dosimeter measurement as the real personal noise exposure level (PNEL) and assessed the accuracy of TBM by comparing the consistency of TBM and dosimetry in LAeq,8h measurement. Methods The study was conducted in one automobile firm among 387 workers who are exposed to unstable noise. Dosimeters and TBM were used to compare the two strategies and assess the degree of agreement and causes of disagreement. Each worker's PNEL was estimated via TBM for noise; the real PNEL was also recorded. The TBM for noise was computed from task/position noise levels measured with a sound level meter and workers' exposure information collected via working diary forms (WDF) filled in by the participants themselves. Full-shift noise exposure measurements via personal noise dosimeters were taken as the real PNEL. A general linear model (GLM) was built to analyze the accuracy of TBM for noise and the source of the difference between TBM for noise and the real PNEL. Results The LAeq,8h values from TBM were slightly higher than the real PNELs, except for the electricians. Differences between the two values were statistically significant for stamping workers (P<0.001), assembly workers (P=0.015) and welding workers (P=0.001). The correlation coefficient of LAeq,8h from TBM with the real PNELs was 0.841. Differences between the two results were mainly affected by the real PNEL (F=11.27, P=0.001); work groups (F=3.11, P<0.001) divided by jobs and workshops were also independent factors. The PNEL of workers with a fixed task/position ((86.53±8.82) dB(A)) was higher than that of those without ((75.76±9.92) dB(A)) (t=8.84, P<0.01). Whether workers had a fixed task/position was another factor on the
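
    The task-based LAeq,8h is an energy average of task noise levels weighted by exposure times; a minimal sketch of that computation (illustrative task data, not the study's):

```python
import math

def laeq_8h(tasks):
    """Task-based LAeq,8h in dB(A): `tasks` is a list of
    (duration_minutes, level_dBA) pairs; levels are energy-averaged
    over the 480-minute reference shift."""
    energy = sum(minutes * 10 ** (level / 10.0) for minutes, level in tasks)
    return 10.0 * math.log10(energy / 480.0)

# e.g. 3 h stamping at 92 dB(A), 4 h assembly at 85 dB(A), 1 h office at 65 dB(A)
print(round(laeq_8h([(180, 92), (240, 85), (60, 65)]), 1))  # ~88.8 dB(A)
```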

  9. Comparisons of Snowfall Measurements in Complex Terrain Made During the 2010 Winter Olympics in Vancouver

    Science.gov (United States)

    Boudala, Faisal S.; Isaac, George A.; Rasmussen, Roy; Cober, Stewart G.; Scott, Bill

    2014-01-01

    Solid precipitation (SP) intensity (Rs) using four automatic gauges, Pluvio, PARSIVEL (PArticle SIze and VELocity), FD12P and POSS, and the radar reflectivity factor using the POSS and PARSIVEL were measured at a naturally sheltered station (VOA) located at high level (1,640 m) on Whistler Mountain in British Columbia, Canada. Rs and other standard meteorological parameters were collected in March 2009 and from November 2009 to February 2010. The wind speed (ws) measured during this period ranged from 0 to 4.5 m s-1, with a mean value of 0.5 m s-1. The temperature varied from 4 to -17 °C. The SP amount reported by the PARSIVEL was higher than that reported by the Pluvio by more than a factor of 2, while the FD12P and POSS measured relatively smaller amounts, much closer to those reported by the Pluvio and manual measurements. The dependence of Rs from the PARSIVEL on wind speed was examined, but no significant dependence was found. The PARSIVEL's precipitation retrieval algorithm was modified and tested using three different snow density-size relationships (ρs-D) reported in the literature. It was found that after modification of the algorithm, the Rs amounts derived from the raw data agreed reasonably well with the Pluvio. Statistical analysis shows that more than 95% of the data measured by POSS correlate well with the reflectivity factors determined using the three ρs-D relationships. The automated Pluvio accumulation and the manually determined daily SP amount (SPm) measured during five winter months were compared. The mean ratio (MR), the mean difference (MD), and the correlation coefficient (r) calculated from the data collected using the two methods were found to be 0.96, 0.4 and 0.6 respectively, indicating respectable agreement between the two methods, with the Pluvio underestimating the amount by about 4%.
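
    A retrieval modification of the kind described integrates the measured size distribution against an assumed snow density-size law; a sketch under assumed units, using the Brandes et al. (2007) relationship as one example ρs-D law (not necessarily one of the three used in the paper):

```python
import numpy as np

def snowfall_rate(d_mm, n_d, dd_mm, v_ms, a=0.178, b=-0.922):
    """Liquid-equivalent solid precipitation rate (mm/h) from a disdrometer
    size distribution. d_mm: bin centre diameters (mm); n_d: number
    concentration per bin (m^-3 mm^-1); dd_mm: bin widths (mm); v_ms: fall
    speeds (m/s). Snow density follows rho_s = a * D^b in g/cm^3 with D in mm
    (Brandes et al. 2007 coefficients by default)."""
    rho = a * np.power(d_mm, b)                                   # g/cm^3
    mass = rho * (np.pi / 6.0) * (np.asarray(d_mm) / 10.0) ** 3   # g per particle
    flux = np.sum(np.asarray(n_d) * dd_mm * v_ms * mass)          # g m^-2 s^-1
    return 3.6 * flux   # 1 g m^-2 s^-1 of melt water = 3.6 mm/h
```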

  10. The challenging measurement of protein in complex biomass-derived samples

    DEFF Research Database (Denmark)

    Haven, M.O.; Jørgensen, H.

    2014-01-01

    Measurement of the protein content in samples from production of lignocellulosic bioethanol is an important tool when studying the adsorption of cellulases. Several methods have been used for this, and after reviewing the literature, we concluded that one of the most promising assays for simple and fast protein measurement on this type of samples was the ninhydrin assay. This method has also been used widely for this purpose, but with two different methods for protein hydrolysis prior to the assay - alkaline or acidic hydrolysis. In samples containing glucose or ethanol, there was significant...

  11. Measurement of net electric charge and dipole moment of dust aggregates in a complex plasma

    CERN Document Server

    Yousefi, Razieh; Carmona-Reyes, Jorge; Matthews, Lorin S; Hyde, Truell W

    2014-01-01

    Understanding the agglomeration of dust particles in complex plasmas requires a knowledge of the basic properties such as the net electrostatic charge and dipole moment of the dust. In this study, dust aggregates are formed from gold coated mono-disperse spherical melamine-formaldehyde monomers in a radio-frequency (rf) argon discharge plasma. The behavior of observed dust aggregates is analyzed both by studying the particle trajectories and by employing computer models examining 3D structures of aggregates and their interactions and rotations as induced by torques arising from their dipole moments. These allow the basic characteristics of the dust aggregates, such as the electrostatic charge and dipole moment, to be determined. It is shown that the experimental results support the predicted values from computer models for aggregates in these environments.

  12. Measurement of net electric charge and dipole moment of dust aggregates in a complex plasma.

    Science.gov (United States)

    Yousefi, Razieh; Davis, Allen B; Carmona-Reyes, Jorge; Matthews, Lorin S; Hyde, Truell W

    2014-09-01

    Understanding the agglomeration of dust particles in complex plasmas requires knowledge of basic properties such as the net electrostatic charge and dipole moment of the dust. In this study, dust aggregates are formed from gold-coated mono-disperse spherical melamine-formaldehyde monomers in a radiofrequency (rf) argon discharge plasma. The behavior of observed dust aggregates is analyzed both by studying the particle trajectories and by employing computer models examining three-dimensional structures of aggregates and their interactions and rotations as induced by torques arising from their dipole moments. These allow the basic characteristics of the dust aggregates, such as the electrostatic charge and dipole moment, as well as the external electric field, to be determined. It is shown that the experimental results support the predicted values from computer models for aggregates in these environments.

  13. Simultaneous measurements of photocurrents and H2O2 evolution from solvent exposed photosystem 2 complexes.

    Science.gov (United States)

    Vöpel, Tobias; Ning Saw, En; Hartmann, Volker; Williams, Rhodri; Müller, Frank; Schuhmann, Wolfgang; Plumeré, Nicolas; Nowaczyk, Marc; Ebbinghaus, Simon; Rögner, Matthias

    2015-03-22

    In plants, algae, and cyanobacteria, photosystem 2 (PS2) catalyzes the light driven oxidation of water. The main products of this reaction are protons and molecular oxygen. In vitro, however, it was demonstrated that reactive oxygen species like hydrogen peroxide are obtained as partially reduced side products. The transition from oxygen to hydrogen peroxide evolution might be induced by light triggered degradation of PS2's active center. Herein, the authors propose an analytical approach to investigate light induced bioelectrocatalytic processes such as PS2 catalyzed water splitting. By combining chronoamperometry and fluorescence microscopy, the authors can simultaneously monitor the photocurrent and the hydrogen peroxide evolution of light activated, solvent exposed PS2 complexes, which have been immobilized on a functionalized gold electrode. The authors show that under limited electron mediation PS2 displays a lower photostability that correlates with an enhanced H2O2 generation as a side product of the light induced water oxidation.

  14. Molybdate transport in a chemically complex aquifer: Field measurements compared with solute-transport model predictions

    Science.gov (United States)

    Stollenwerk, K.G.

    1998-01-01

    A natural-gradient tracer test was conducted in an unconfined sand and gravel aquifer on Cape Cod, Massachusetts. Molybdate was included in the injectate to study the effects of variable groundwater chemistry on its aqueous distribution and to evaluate the reliability of laboratory experiments for identifying and quantifying reactions that control the transport of reactive solutes in groundwater. Transport of molybdate in this aquifer was controlled by adsorption. The amount adsorbed varied with aqueous chemistry that changed with depth as freshwater recharge mixed with a plume of sewage-contaminated groundwater. Molybdate adsorption was strongest near the water table where pH (5.7) and the concentration of the competing solutes phosphate (2.3 micromolar) and sulfate (86 micromolar) were low. Adsorption of molybdate decreased with depth as pH increased to 6.5, phosphate increased to 40 micromolar, and sulfate increased to 340 micromolar. A one-site diffuse-layer surface-complexation model and a two-site diffuse-layer surface-complexation model were used to simulate adsorption. Reactions and equilibrium constants for both models were determined in laboratory experiments and used in the reactive-transport model PHAST to simulate the two-dimensional transport of molybdate during the tracer test. No geochemical parameters were adjusted in the simulation to improve the fit between model and field data. Both models simulated the travel distance of the molybdate cloud to within 10% during the 2-year tracer test; however, the two-site diffuse-layer model more accurately simulated the molybdate concentration distribution within the cloud.

  15. Overview of Approaches to Incorporate Dynamics into the Measurement of Complex Phenomena with the Use of Composite Indices

    Directory of Open Access Journals (Sweden)

    Anna Łatuszyńska

    2012-06-01

    Full Text Available Composite indices have substantially gained in popularity in recent years. Despite their alleged disadvantages, they appear to be very useful in measuring the level of certain phenomena that are too complex to express with a single indicator. Most rankings based on composite indicators are created at regular intervals, such as every month, every quarter or every year. A common approach is to base rankings solely on the most current values of single indicators, making no reference to previous results. The absence of dynamics from such measurements deprives studies of information on change in these phenomena and may limit the stability of classifications. This article presents the possibility of creating reliable, dynamic rankings of measured items and measuring the complex phenomena with the use of composite indices. Potential solutions are presented on the basis of a review of the international literature. Some advantages and disadvantages of the presented solutions are described and an example of a new approach is shown.
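
    As a concrete baseline for the approaches reviewed, a composite index is a weighted aggregation of normalised indicators, and one simple way to incorporate dynamics is to carry part of the previous period's score forward; a sketch of that idea (our simplification, not a method endorsed by the article):

```python
import numpy as np

def composite_index(X, weights):
    """Min-max normalise each indicator (columns of X, one row per measured
    item) and aggregate with a weighted arithmetic mean."""
    X = np.asarray(X, float)
    norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    w = np.asarray(weights, float)
    return norm @ (w / w.sum())

def dynamic_scores(history, alpha=0.7):
    """One way to add dynamics: exponentially smooth the composite score
    across periods before ranking, so earlier results still count."""
    score = None
    for X, w in history:  # one (indicator matrix, weights) pair per period
        current = composite_index(X, w)
        score = current if score is None else alpha * current + (1 - alpha) * score
    return score  # rank items by this blended score
```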

  16. Comparisons between field- and LiDAR-based measures of stand structural complexity

    Science.gov (United States)

    Van R. Kane; Robert J. McGaughey; Jonathan D. Bakker; Rolf F. Gersonde; James A. Lutz; Jerry F. Franklin

    2010-01-01

    Forest structure, as measured by the physical arrangement of trees and their crowns, is a fundamental attribute of forest ecosystems that changes as forests progress through successional stages. We examined whether LiDAR data could be used to directly assess the successional stage of forests by determining the degree to which the LiDAR data would show the same relative...

  17. High resolution pollutant measurements in complex urban environments using mobile monitoring

    Science.gov (United States)

    Measuring air pollution in real-time using an instrumented vehicle platform has been an emerging strategy to resolve air pollution trends at a very fine spatial scale (10s of meters). Achieving second-by-second data representative of urban air quality trends requires advanced in...

  18. Measuring orthographic transparency and morphological-syllabic complexity in alphabetic orthographies: a narrative review

    NARCIS (Netherlands)

    Borleffs, Elisabeth; Maassen, Bernardus; Lyytinen, Heikki; Zwarts, Frans

    2017-01-01

    This narrative review discusses quantitative indices measuring differences between alphabetic languages that are related to the process of word recognition. The specific orthography that a child is acquiring has been identified as a central element influencing reading acquisition and dyslexia. Howev

  19. Measuring the complex permittivity tensor of uniaxial biological materials with coplanar waveguide transmission line

    Science.gov (United States)

    A simple and accurate technique is described for measuring the uniaxial permittivity tensor of biological materials with a coplanar waveguide transmission-line configuration. Permittivity tensor results are presented for several chicken and beef fresh meat samples at 2.45 GHz....

  20. New Measures of Heart-Rate Complexity: Effect of Chest Trauma and Hemorrhage

    Science.gov (United States)

    2010-05-01

  1. Microwave generation and complex microwave responsivity measurements on small Dayem bridges

    DEFF Research Database (Denmark)

    Pedersen, Niels Falsig; Sørensen, O; Mygind, Jesper;

    1977-01-01

    or second harmonic response was measured. On the basis of analogue computer simulations an equivalent circuit was obtained describing the bridge coupled to the cavity. The large self inductance of the background film adjacent to the bridge was found to play a major role in explaining our results....

  2. Rotational dynamics of magnetic silica spheres studied by measuring the complex magnetic susceptibility

    NARCIS (Netherlands)

    Claesson, E.M.; Erne, B.H.; Philipse, A.P.

    2007-01-01

    The weak permanent magnetic dipole moment of cobalt ferrite-doped colloidal silica spheres was increased by exposure to a saturating magnetic field. The resulting change of the rotational dynamics of the magnetic microspheres in a weak alternating field was measured from low to high volume fraction

  3. Coaxial Sensors For Broad-Band Complex Permittivity Measurements of Petroleum Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Folgeroe, K.

    1996-12-31

    This doctoral thesis verifies that dielectric spectroscopy and microwave permittivity measurements can be used to characterize petroleum liquids. It concentrates on developing sensors for three potential industrial applications: quality characterization of crude oil and petroleum fractions, monitoring of gas-hydrate formation in water-in-oil emulsions, and determination of water-content in thin liquid layers. The development of a permittivity measurement system for crude oil and petroleum fractions is described. As black oils have low dielectric constant and loss, the system must be very sensitive in order to measure the dielectric spectra and to distinguish oils of different permittivity. Such a system was achieved by combining impedance and scattering parameter measurements with appropriate permittivity calculation methods. The frequency range from 10 kHz to 6 GHz was found convenient for observing the main dispersion of the oils. All the oils had dielectric constants between 2.1 and 2.9 and dielectric loss below 0.01. The oils studied were samples of the feedstock for the cracker and coke processes at a petroleum refinery. This verifies that dielectric spectroscopy is a potential technique for on-line quality monitoring of the feedstock at petroleum refineries. Gas hydrates may cause major problems like clogging of pipelines. Dielectric spectroscopy is proposed as a means of monitoring the formation of gas hydrates in emulsions. It is found that open-ended coaxial probes fulfill the sensitivity requirements for such sensors. 312 refs., 87 figs., 20 tabs.
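
    Dielectric spectra of this kind are commonly summarised by fitting a relaxation model; a minimal sketch fitting a single Debye term to measured complex permittivity (our illustration, not the thesis' calculation procedure):

```python
import numpy as np
from scipy.optimize import least_squares

def debye(omega, eps_inf, d_eps, tau):
    """Single Debye relaxation: eps(w) = eps_inf + d_eps / (1 + j*w*tau)."""
    return eps_inf + d_eps / (1.0 + 1j * omega * tau)

def fit_debye(freq_hz, eps_measured):
    """Least-squares fit of a measured complex permittivity spectrum to a
    single Debye term; returns (eps_inf, delta_eps, tau_seconds)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, float)
    eps = np.asarray(eps_measured, complex)
    def residual(p):
        diff = debye(w, *p) - eps
        # stack real and imaginary parts so both are fitted jointly
        return np.concatenate([diff.real, diff.imag])
    fit = least_squares(residual, x0=[2.0, 0.5, 1e-9],
                        bounds=([1.0, 0.0, 1e-12], [10.0, 10.0, 1e-6]))
    return fit.x
```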

  4. Measuring Orthographic Transparency and Morphological-Syllabic Complexity in Alphabetic Orthographies: A Narrative Review

    Science.gov (United States)

    Borleffs, Elisabeth; Maassen, Ben A. M.; Lyytinen, Heikki; Zwarts, Frans

    2017-01-01

    This narrative review discusses quantitative indices measuring differences between alphabetic languages that are related to the process of word recognition. The specific orthography that a child is acquiring has been identified as a central element influencing reading acquisition and dyslexia. However, the development of reliable metrics to…

  5. Human Caring in the Social Work Context: Continued Development and Validation of a Complex Measure

    Science.gov (United States)

    Ellis, Jacquelyn I.; Ellett, Alberta J.; DeWeaver, Kevin

    2007-01-01

Objectives: (a) to continue the development of a measure of human caring in the context of social work practice and (b) to expand a line of inquiry exploring the relationship between human caring characteristics and the retention of public child welfare workers. Methodology: Surveys were received from a sample (n = 786) of child welfare workers in…

  6. The Texas Projection Measure: Ignoring Complex Ecologies in a Changing World

    Science.gov (United States)

    Roane, Warren

    2010-01-01

    The Texas Projection Measure (TPM) has grown out of the state's need to meet the requirements of No Child Left Behind (NCLB). An examination of the state's method of predicting 8th grade mathematics scores reveals that several factors have been ignored in the process of developing the model, including assumptions in its underlying statistical…

  7. Measuring marine iron(III) complexes by CLE-AdSV

    NARCIS (Netherlands)

    Town, R.M.; Leeuwen, van H.P.

    2005-01-01

Iron(iii) speciation data, as determined by competitive ligand exchange-adsorptive stripping voltammetry (CLE-AdSV), is reconsidered in the light of the kinetic features of the measurement. The very large stability constants reported for iron(iii) in marine ecosystems are shown to be possibly due to

  8. Reply to Comments on Measuring marine iron(III) complexes by CLE-AdSV

    NARCIS (Netherlands)

    Town, R.M.; Leeuwen, van H.P.

    2005-01-01

The interpretation of CLE-AdSV based iron(iii) speciation data for marine waters has been called into question in light of the kinetic features of the measurement. This reassessment may have consequences for understanding iron biogeochemistry and its impact on ecosystem functioning.

  9. Measures of Causality in Complex Datasets with Application to Financial Data

    Directory of Open Access Journals (Sweden)

    Anna Zaremba

    2014-04-01

This article investigates the causality structure of financial time series. We concentrate on three main approaches to measuring causality: linear Granger causality, kernel generalisations of Granger causality (based on ridge regression and the Hilbert–Schmidt norm of the cross-covariance operator) and transfer entropy, examining each method and comparing their theoretical properties, with special attention given to the ability to capture nonlinear causality. We also present the theoretical benefits of applying non-symmetrical measures rather than symmetrical measures of dependence. We apply the measures to a range of simulated and real data. The simulated data sets were generated with linear and several types of nonlinear dependence, using bivariate, as well as multivariate settings. An application to real-world financial data highlights the practical difficulties, as well as the potential of the methods. We use two real data sets: (1) U.S. inflation and one-month Libor; (2) S&P data and exchange rates for the following currencies: AUDJPY, CADJPY, NZDJPY, AUDCHF, CADCHF, NZDCHF. Overall, we reach the conclusion that no single method can be recognised as the best in all circumstances, and each of the methods has its domain of best applicability. We also highlight areas for improvement and future research.
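
To make the first of these measures concrete, here is a minimal sketch of a pairwise linear Granger-causality test on simulated data, assuming statsmodels is available; the kernel and transfer-entropy variants discussed in the article are not shown, and the series, coefficients and lags below are illustrative only.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                  # y depends linearly on lagged x
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.normal()

# H0: the second column (x) does NOT Granger-cause the first column (y)
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
f_stat, p_value = res[1][0]["ssr_ftest"][:2]          # F-test at lag 1
print(f"lag-1 F = {f_stat:.1f}, p = {p_value:.2g}")   # small p -> x causes y
```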

  10. Parameter optimization of measuring and control elements in the monitoring systems of complex technical objects

    Science.gov (United States)

    Nekrylov, Ivan; Korotaev, Valery; Blokhina, Anastasia; Kleshchenok, Maksim

    2017-06-01

Worldwide there is widespread adoption of a new generation of measuring equipment, characterized by small size, a high level of automation, multi-channel operation, digital filtering, satellite synchronization, wireless communication, digital recording in large-capacity long-term memory, long-lived power sources, etc. However, the equipment base of Russian institutions and the level of development of technical facilities and measuring technologies lag far behind developed countries. For this reason, the vacated niches are actively occupied by foreign companies. For example, more than 70% of instrumentation work performed on the territory of Russia is carried out with imported equipment (products of Swedish and German companies); the amount of work performed with German equipment is more than 70% of the total volume of such work; and more than 80% of industrial measurements are performed using HEXAGON equipment (Sweden). These trends show that the Russian measuring-technology sector is gradually becoming import-dependent, which poses a threat to the economic security of the country and is inconsistent with national priorities. The results of this research will allow the development of a theory for designing displacement control systems with high accuracy and with ergonomic and weight characteristics unattainable by existing analogues, combined with comparable or lower cost. These advantages will enable successful competition with, and eventual replacement of, existing systems, which have seen no fundamental changes in the last 20 years and therefore retain all their drawbacks: large size and weight, high power consumption, and dependence on magnetic fields

  11. Multi-sensor data fusion for measurement of complex freeform surfaces

    Science.gov (United States)

    Ren, M. J.; Liu, M. Y.; Cheung, C. F.; Yin, Y. H.

    2016-01-01

    Along with the rapid development of the science and technology in fields such as space optics, multi-scale enriched freeform surfaces are widely used to enhance the performance of the optical systems in both functionality and size reduction. Multi-sensor technology is considered as one of the promising methods to measure and characterize these surfaces at multiple scales. This paper presents a multi-sensor data fusion based measurement method to purposely extract the geometric information of the components with different scales which is used to establish a holistic geometry of the surface via data fusion. To address the key problems of multi-sensor data fusion, an intrinsic feature pattern based surface registration method is developed to transform the measured datasets to a common coordinate frame. Gaussian zero-order regression filter is then used to separate each measured data in different scales, and the datasets are fused based on an edge intensity data fusion algorithm within the same wavelength. The fused data at different scales is then merged to form a new surface with holistic multiscale information. Experimental study is presented to verify the effectiveness of the proposed method.
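
As an illustration of the scale-separation step described above, the following sketch uses a plain Gaussian low-pass filter as a stand-in for the zero-order Gaussian regression filter named in the abstract (the two behave similarly away from edges), splitting two hypothetical registered datasets into form and texture before fusing them; all array names and the cutoff are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_scales(height_map, sigma):
    """Split a height map into form (low-pass) and texture (residual)."""
    form = gaussian_filter(height_map, sigma=sigma)
    return form, height_map - form

rng = np.random.default_rng(0)
coarse = rng.normal(size=(256, 256))                 # stand-in for a wide-range sensor
fine = coarse + 0.01 * rng.normal(size=(256, 256))   # stand-in for a fine sensor

form, _ = split_scales(coarse, sigma=20)    # keep the large-scale form
_, texture = split_scales(fine, sigma=20)   # keep the small-scale texture
fused = form + texture                      # holistic multi-scale surface
```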

  12. Soil temperature variability in complex terrain measured using fiber-optic distributed temperature sensing

    Science.gov (United States)

    Soil temperature (Ts) exerts critical controls on hydrologic and biogeochemical processes but magnitude and nature of Ts variability in a landscape setting are rarely documented. Fiber optic distributed temperature sensing systems (FO-DTS) potentially measure Ts at high density over a large extent. ...

  13. Definition of (so MIScalled) ''Complexity'' as UTTER-SIMPLICITY!!! Versus Deviations From it as Complicatedness-Measure

    Science.gov (United States)

    Young, F.; Siegel, Edward Carl-Ludwig

    2011-03-01

    (so MIScalled) "complexity" with INHERENT BOTH SCALE-Invariance Symmetry-RESTORING, AND 1 / w (1.000..) "pink" Zipf-law Archimedes-HYPERBOLICITY INEVITABILITY power-spectrum power-law decay algebraicity. Their CONNECTION is via simple-calculus SCALE-Invariance Symmetry-RESTORING logarithm-function derivative: (d/ d ω) ln(ω) = 1 / ω , i.e. (d/ d ω) [SCALE-Invariance Symmetry-RESTORING](ω) = 1/ ω . Via Noether-theorem continuous-symmetries relation to conservation-laws: (d/ d ω) [inter-scale 4-current 4-div-ergence} = 0](ω) = 1 / ω . Hence (so MIScalled) "complexity" is information inter-scale conservation, in agreement with Anderson-Mandell [Fractals of Brain/Mind, G. Stamov ed.(1994)] experimental-psychology!!!], i.e. (so MIScalled) "complexity" is UTTER-SIMPLICITY!!! Versus COMPLICATEDNESS either PLUS (Additive) VS. TIMES (Multiplicative) COMPLICATIONS of various system-specifics. COMPLICATEDNESS-MEASURE DEVIATIONS FROM complexity's UTTER-SIMPLICITY!!!: EITHER [SCALE-Invariance Symmetry-BREAKING] MINUS [SCALE-Invariance Symmetry-RESTORING] via power-spectrum power-law algebraicity decays DIFFERENCES: ["red"-Pareto] MINUS ["pink"-Zipf Archimedes-HYPERBOLICITY INEVITABILITY]!!!

  14. A method for obtaining distributed surface flux measurements in complex terrain

    Science.gov (United States)

    Daniels, M. H.; Pardyjak, E.; Nadeau, D. F.; Barrenetxea, G.; Brutsaert, W. H.; Parlange, M. B.

    2011-12-01

    Sonic anemometers and gas analyzers can be used to measure fluxes of momentum, heat, and moisture over flat terrain, and with the proper corrections, over sloping terrain as well. While this method of obtaining fluxes is currently the most accurate available, the instruments themselves are costly, making installation of many stations impossible for most campaign budgets. Small, commercial automatic weather stations (Sensorscope) are available at a fraction of the cost of sonic anemometers or gas analyzers. Sensorscope stations use slow-response instruments to measure standard meteorological variables, including wind speed and direction, air temperature, humidity, surface skin temperature, and incoming solar radiation. The method presented here makes use of one sonic anemometer and one gas analyzer along with a dozen Sensorscope stations installed throughout the Val Ferret catchment in southern Switzerland in the summers of 2009, 2010 and 2011. Daytime fluxes are calculated using Monin-Obukhov similarity theory in conjunction with the surface energy balance at each Sensorscope station as well as at the location of the sonic anemometer and gas analyzer, where a suite of additional slow-response instruments were co-located. Corrections related to slope angle were made for wind speeds and incoming shortwave radiation measured by the horizontally-mounted cup anemometers and incoming solar radiation sensors respectively. A temperature correction was also applied to account for daytime heating inside the radiation shield on the slow-response temperature/humidity sensors. With these corrections, we find a correlation coefficient of 0.77 between u* derived using Monin-Obukhov similarity theory and that of the sonic anemometer. Calculated versus measured heat fluxes also compare well and local patterns of latent heat flux and measured surface soil moisture are correlated.
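
For intuition on the similarity-theory step, here is a minimal sketch of a friction-velocity estimate from a single cup-anemometer wind speed under the simplifying assumption of neutral stratification; the study itself applies full Monin-Obukhov similarity with stability and slope corrections, and the measurement height and roughness length below are invented.

```python
import math

KAPPA = 0.4   # von Karman constant

def friction_velocity_neutral(wind_speed, z_meas, z0):
    """u* = k * U / ln(z / z0), the neutral-stability log-law estimate."""
    return KAPPA * wind_speed / math.log(z_meas / z0)

# Hypothetical 2-m wind speed over grass-like roughness (z0 = 0.01 m)
u_star = friction_velocity_neutral(wind_speed=3.2, z_meas=2.0, z0=0.01)
print(f"u* ~ {u_star:.2f} m/s")
```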

  15. Directed weighted network structure analysis of complex impedance measurements for characterizing oil-in-water bubbly flow

    Science.gov (United States)

    Gao, Zhong-Ke; Dang, Wei-Dong; Xue, Le; Zhang, Shan-Shan

    2017-03-01

Characterizing the flow structure underlying the evolution of oil-in-water bubbly flow remains a contemporary challenge of great interest and complexity. In particular, the oil droplets dispersing in a water continuum with diverse sizes make the study of oil-in-water bubbly flow genuinely difficult. To study this issue, we first design a novel complex impedance sensor and systematically conduct vertical oil-water flow experiments. Based on the multivariate complex impedance measurements, we define modalities associated with the spatial transient flow structures and construct a modality transition-based network for each flow condition to study the evolution of flow structures. In order to reveal the unique flow structures underlying the oil-in-water bubbly flow, we filter the inferred modality transition-based network by removing the edges with small weight and the resulting isolated nodes. Then, the weighted clustering coefficient entropy and weighted average path length are employed for quantitatively assessing the original network and the filtered network. The differences in network measures enable efficient characterization of the evolution of the oil-in-water bubbly flow structures.
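
The following sketch illustrates the filtering and characterization steps on a toy modality-transition network, assuming networkx; the edge weights, the threshold, and the exact definition of the clustering-coefficient entropy are illustrative guesses rather than the authors' specification.

```python
import math
import networkx as nx

# Toy modality-transition network (edge weights = transition frequencies)
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 0.50), ("B", "C", 0.30), ("C", "A", 0.20),
    ("B", "A", 0.15), ("A", "C", 0.05),
])

# Filter: remove low-weight edges, then any isolated nodes that result
H = nx.DiGraph()
H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if d["weight"] >= 0.1)
H.remove_nodes_from(list(nx.isolates(H)))

# Weighted clustering-coefficient entropy (one plausible reading of the measure)
cc = nx.clustering(H.to_undirected(), weight="weight")
vals = [c for c in cc.values() if c > 0]
total = sum(vals)
entropy = -sum((c / total) * math.log(c / total) for c in vals) if vals else 0.0

# Weighted average path length (needs strong connectivity for a digraph)
apl = (nx.average_shortest_path_length(H, weight="weight")
       if nx.is_strongly_connected(H) else float("nan"))
print(f"entropy = {entropy:.3f}, average path length = {apl:.3f}")
```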

  16. Evaluation of Grounding Impedance of a Complex Lightning Protective System Using Earth Ground Clamp Measurements and ATP Modeling

    Science.gov (United States)

    Mata, Carlos T.; Rakov, V. A.; Mata, Angel G.

    2010-01-01

A new Lightning Protection System (LPS) was designed and built at Launch Complex 39B (LC39B), at the Kennedy Space Center (KSC), Florida, which consists of a catenary wire system (at a height of about 181 meters above ground level) supported by three insulators installed atop three towers in a triangular configuration. A total of nine downconductors (each about 250 meters long, on average) are connected to the catenary wire system. Each of the nine downconductors is connected to a 7.62-meter radius circular counterpoise conductor with six equally spaced 6-meter long vertical grounding rods. Grounding requirements at LC39B call for all underground and above-ground metallic piping, enclosures, raceways, and cable trays within 7.62 meters of the counterpoise to be bonded to the counterpoise, which results in a complex interconnected grounding system, given the many metallic pipes, raceways, and cable trays that run in multiple directions around LC39B. The complexity of this grounding system makes the fall-of-potential method, which uses multiple metallic rods or stakes, unsuitable for measuring the grounding impedances of the downconductors. To calculate the downconductors' grounding impedance, an Earth Ground Clamp (a stakeless grounding resistance measuring device) and an LPS Alternative Transient Program (ATP) model are used. The Earth Ground Clamp is used to measure the loop impedance plus the grounding impedance of each downconductor, and the ATP model is used to calculate the loop impedance of each downconductor circuit. The grounding impedance of each downconductor is then calculated by subtracting the ATP-calculated loop impedance from the Earth Ground Clamp measurement.
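
The subtraction at the heart of this method is simple complex arithmetic; a minimal sketch with invented impedance values:

```python
# Hypothetical readings for one downconductor, in ohms
clamp_reading = complex(12.4, 3.1)   # Earth Ground Clamp: loop + ground
atp_loop = complex(2.9, 2.4)         # ATP model: loop impedance only

z_ground = clamp_reading - atp_loop
print(f"grounding impedance ~ {z_ground.real:.1f} + j{z_ground.imag:.1f} ohm")
```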

  17. Measurements of Complex Permeability and Permittivity of Ferrites for the LHC Injection Kicker

    CERN Document Server

    Caspers, Friedhelm; González, C; Dyachkov, M

    1999-01-01

    The LHC injection kicker is made by a lumped element delay line using capacitors and single turn inductors. For these inductors different types of ferrites (Philips 8C11 and 4A4) are considered. At the time when this report was written only 4A4 ferrite was available for a prototype kicker construction, as well as for impedance measurements by the wire method. The 4A4 ferrite comes in standard blocks (42 x 54 x 74 mm) which are quite expensive, so there were virtually no spare blocks available which could be machined for use in the standard coaxial technique. Thus we have developed a strip-line test jig which permits testing material parameters on existing ferrite blocks without additional (destructive) machining. Special aspects, advantages and difficulties of this method are discussed. The bench measurements and also theoretical and numerical estimates of the beam coupling impedance of the kickers are under way.

  18. Complexities and subtleties in the measurement and reporting of breastfeeding practices

    OpenAIRE

    Debra Hector J

    2011-01-01

Background: Monitoring of breastfeeding is vital. However, infant feeding practices are difficult to assess at the population level. Although significant efforts have been made towards the consistent measurement and reporting of breastfeeding, few countries have successfully implemented a system to do so. Many inaccuracies, inconsistencies and issues remain. This paper highlights the main issues relating to the methods and indicators used to monitor breastfeeding, particularly exclusi...

  19. Early Seizure Detection Using Neuronal Potential Similarity: A Generalized Low-Complexity and Robust Measure.

    Science.gov (United States)

    Bandarabadi, Mojtaba; Rasekhi, Jalil; Teixeira, Cesar A; Netoff, Theoden I; Parhi, Keshab K; Dourado, Antonio

    2015-08-01

    A novel approach using neuronal potential similarity (NPS) of two intracranial electroencephalogram (iEEG) electrodes placed over the foci is proposed for automated early seizure detection in patients with refractory partial epilepsy. The NPS measure is obtained from the spectral analysis of space-differential iEEG signals. Ratio between the NPS values obtained from two specific frequency bands is then investigated as a robust generalized measure, and reveals invaluable information about seizure initiation trends. A threshold-based classifier is subsequently applied on the proposed measure to generate alarms. The performance of the method was evaluated using cross-validation on a large clinical dataset, involving 183 seizure onsets in 1785 h of long-term continuous iEEG recordings of 11 patients. On average, the results show a high sensitivity of 86.9% (159 out of 183), a very low false detection rate of 1.4 per day, and a mean detection latency of 13.1 s from electrographic seizure onsets, while in average preceding clinical onsets by 6.3 s. These high performance results, specifically the short detection latency, coupled with the very low computational cost of the proposed method make it adequate for using in implantable closed-loop seizure suppression systems.
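
As a rough illustration of a band-power-ratio detector in the spirit of the NPS measure (the authors' exact bands, spectral estimator and threshold are not reproduced here, and the signals below are simulated stand-ins), consider:

```python
import numpy as np
from scipy.signal import welch

def band_power_ratio(diff_signal, fs, band_num=(30.0, 60.0), band_den=(4.0, 12.0)):
    """Ratio of spectral power in two bands of a space-differential signal."""
    f, pxx = welch(diff_signal, fs=fs, nperseg=int(2 * fs))
    num = pxx[(f >= band_num[0]) & (f < band_num[1])].sum()
    den = pxx[(f >= band_den[0]) & (f < band_den[1])].sum()
    return num / den

fs = 256.0
rng = np.random.default_rng(1)
ieeg_a = rng.normal(size=int(10 * fs))   # stand-ins for two focal iEEG channels
ieeg_b = rng.normal(size=int(10 * fs))

ratio = band_power_ratio(ieeg_a - ieeg_b, fs)   # space-differential signal
THRESHOLD = 2.0                                 # would be tuned per patient
if ratio > THRESHOLD:
    print("alarm: possible electrographic seizure onset")
```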

  20. Determining Wind Turbine Gearbox Model Complexity Using Measurement Validation and Cost Comparison: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    LaCava, W.; Xing, Y.; Guo, Y.; Moan, T.

    2012-04-01

The Gearbox Reliability Collaborative (GRC) has conducted extensive field and dynamometer test campaigns on two heavily instrumented wind turbine gearboxes. In this paper, data from the planetary stage is used to evaluate the accuracy and computation time of numerical models of the gearbox. First, planet-bearing load and motion data is analyzed to characterize planetary stage behavior in different environments and to derive requirements for gearbox models and life calculations. Second, a set of models is constructed that represents different levels of fidelity. Simulations of the test conditions are compared to the test data, and the computational costs of the models are compared. The test data suggest that the planet-bearing life calculations should be made separately for each bearing on a row due to unequal load distribution. They also show that tilting of the gear axes is related to planet load share. The modeling study concluded that fully flexible models were needed to predict planet-bearing loading in some cases, although less complex models were able to achieve good correlation in the field-loading case. Significant differences in planet load share were found in simulation and were dependent on the scope of the model and the bearing stiffness model used.

  1. Cleaner production and methodological proposal of eco-efficiency measurement in a Mexican petrochemical complex.

    Science.gov (United States)

    Morales, M A; Herrero, V M; Martínez, S A; Rodríguez, M G; Valdivieso, E; Garcia, G; de los Angeles Elías, Maria

    2006-01-01

In the frame of the Petróleos Mexicanos Institutional Program for Sustainable Development, processes were evaluated in the manufacturing operations of the petrochemical industry, with the purpose of reducing their ecological footprint. Thirteen cleaner production opportunities were registered in six process plants: ethylene oxide and glycols, acetaldehyde, ethylene, high-density polyethylene, polypropylene switch and acrylonitrile, and 45 recommendations in the wastewater treatment plant. Morelos is the second most important petrochemical complex in the Mexican and Latin American petrochemical industry. A tool was developed to obtain eco-efficiency indicators in operation processes, and as a result, potential savings were identified based on best performance, as well as the integrated distribution of Sankey diagrams. Likewise, a calculation mechanism is proposed to obtain economic savings based on the reduction of residues throughout the productive process. These improvement opportunities and recommendations will result in economic and environmental benefits, minimising water use, promoting efficient use of energy and raw materials, and reducing residues at source, thereby generating fewer environmental impacts during the process.

  2. Measurement of the formation of complexes in tyrosine kinase-mediated signal transduction

    Energy Technology Data Exchange (ETDEWEB)

    Ladbury, John E., E-mail: j.ladbury@biochem.ucl.ac.uk [Department of Biochemistry and Molecular Biology, University College London, Gower Street, London WC1E 6BT (United Kingdom)

    2007-01-01

The use of isothermal titration calorimetry (ITC) provides a full thermodynamic characterization of an interaction in one experiment. The determination of the affinity is important; however, the additional layer of information provided by the changes in enthalpy and entropy can help in understanding the biology. This is demonstrated with respect to tyrosine kinase-mediated signal transduction. ITC provides data that are highly complementary to high-resolution structural detail. An overview of the methodology of the technique is provided. Ultimately, the correlation of the thermodynamic parameters determined by ITC with the structural perturbation observed on going from the free to the bound state should be possible at an atomic level. Currently, thermodynamic data provide some insight into potential changes occurring on complex formation. Here, this is demonstrated in the context of in vitro quantification of intracellular tyrosine kinase-mediated signal transduction and the issue of specificity of the important interactions. The apparent lack of specificity in the interactions of domains of proteins involved in early signalling from membrane-bound receptors is demonstrated using data from ITC.

  3. Entropy and complexity measures for EEG signal classification of schizophrenic and control participants.

    Science.gov (United States)

    Sabeti, Malihe; Katebi, Serajeddin; Boostani, Reza

    2009-11-01

In this paper, electroencephalogram (EEG) signals of 20 schizophrenic patients and 20 age-matched control participants are analyzed with the objective of classifying the two groups. For each case, 20 channels of EEG are recorded. Several features including Shannon entropy, spectral entropy, approximate entropy, Lempel-Ziv complexity and Higuchi fractal dimension are extracted from EEG signals. Leave-one (participant)-out cross-validation is used for a reliable estimate of the separability of the two groups. The training set is used for training the two classifiers, namely, linear discriminant analysis (LDA) and adaptive boosting (Adaboost). Each classifier is assessed using the test dataset. A classification accuracy of 86% and 90% is obtained by LDA and Adaboost respectively. For further improvement, genetic programming is employed to select the best features and remove the redundant ones. Applying the two classifiers to the reduced feature set, a classification accuracy of 89% and 91% is obtained by LDA and Adaboost respectively. The proposed technique is compared and contrasted with a recently reported method and it is demonstrated that a considerably enhanced performance is achieved. This study shows that EEG signals can be a useful tool for discrimination of the schizophrenic and control participants. It is suggested that this analysis can be a complementary tool to help psychiatrists diagnosing schizophrenic patients.
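
A minimal sketch of this feature-extraction-plus-LDA pipeline, assuming scipy and scikit-learn, with simulated signals standing in for EEG epochs and only two of the listed features implemented:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def shannon_entropy(x, bins=32):
    """Entropy of the amplitude histogram, in bits."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def spectral_entropy(x, fs):
    """Entropy of the normalized Welch power spectrum, in bits."""
    _, pxx = welch(x, fs=fs)
    p = pxx / pxx.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

fs, n_epochs = 256, 40
rng = np.random.default_rng(0)
epochs = rng.normal(size=(n_epochs, 4 * fs))       # stand-ins for EEG epochs
X = np.array([[shannon_entropy(e), spectral_entropy(e, fs)] for e in epochs])
y = np.r_[np.zeros(n_epochs // 2), np.ones(n_epochs // 2)]  # toy group labels
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```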

  4. A chemical test of critical point isomorphism: reactive dissolution of ionic solids in isobutyric acid + water near the consolute point.

    Science.gov (United States)

    Baird, James K; Baker, Jonathan D; Hu, Baichuan; Lang, Joshua R; Joyce, Karen E; Sides, Alison K; Richey, Randi D

    2015-03-12

    Binary liquid mixtures having a consolute point can be used as solvents for chemical reactions. When excess cerium(IV) oxide is brought into equilibrium with a mixture of isobutyric acid + water, and the concentration of cerium in the liquid phase is plotted in van't Hoff form, a straight line results for temperatures sufficiently in excess of the critical solution temperature. Within 1 K of the critical temperature, however, the concentration becomes substantially suppressed, and the van't Hoff slope diverges toward negative infinity. According to the phase rule, one mole fraction can be fixed. Given this restriction, the temperature behavior of the data is in exact agreement with the predictions of both the principle of critical point isomorphism and the Gibbs-Helmholtz equation. In addition, we have determined the concentration of lead in the liquid phase when crystalline lead(II) sulfate reacts with potassium iodide in isobutyric acid + water. When plotted in van't Hoff form, the data lie on a straight line for all temperatures including the critical region. The phase rule indicates that two mole fractions can be fixed. With this restriction, the data are in exact agreement with the principle of critical point isomorphism.
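
The van't Hoff analysis described above amounts to fitting ln(concentration) against 1/T in the straight-line region away from the critical temperature; a minimal sketch with fabricated placeholder data, not the paper's measurements:

```python
import numpy as np

T = np.array([300.0, 305.0, 310.0, 315.0])        # K, away from T_c
c = np.array([1.2e-4, 1.6e-4, 2.1e-4, 2.7e-4])    # mol/L cerium in solution

slope, intercept = np.polyfit(1.0 / T, np.log(c), 1)
# van't Hoff: d ln K / d(1/T) = -dH / R, so the slope gives the enthalpy
R = 8.314  # J/(mol K)
delta_H = -slope * R
print(f"apparent enthalpy of solution ~ {delta_H / 1000:.1f} kJ/mol")
```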

  5. The Eye-Key Span as a Measure for Translation Complexity

    DEFF Research Database (Denmark)

    Carl, Michael; Schaeffer, Moritz

    Dragsted (Dragsted & Hansen, 2008; Dragsted, 2010) developed the eye-key span (EKS) in reference to the ear-voice span which is used to describe the distance between input and output during simultaneous interpreting, typically measured in words or seconds (e.g. Defrancq, 2015). The EKS during...... from Dragsted (Dragsted & Hansen, 2008; Dragsted, 2010) based on a corpous of 12,474 ST words, 3,242 unique ST items, 108 participants and 12 different texts. We use R and the lme4 (Bates, Maechler, Bolker, & Walker, 2014) and languageR (Baayen, 2013) packages to perform (general) linear mixed...

  6. Turbulence measurements in a complex flowfield using a crossed hot-wire

    Science.gov (United States)

    Lilley, D. G.; Mckillop, B. E.

    1984-01-01

    An X-wire probe was used to measure the time-mean and fluctuating velocities and shear stress in nonswirling nonreacting confined jet flows. Data were taken from an axisymmetric confined jet with an expansion ratio of 2 and an expansion angle of 90 deg, and from the same segment with a contraction nozzle. Velocity profiles developed faster in the confined jet than in the free jet, with the former experiencing higher turbulence levels and larger time-mean velocities. The X-wire is concluded to furnish more accurate results for the turbulent shear stress than a multioriented single-wire technique.

  7. Complexities and subtleties in the measurement and reporting of breastfeeding practices

    Directory of Open Access Journals (Sweden)

    Debra Hector J

    2011-05-01

Background: Monitoring of breastfeeding is vital. However, infant feeding practices are difficult to assess at the population level. Although significant efforts have been made towards the consistent measurement and reporting of breastfeeding, few countries have successfully implemented a system to do so. Many inaccuracies, inconsistencies and issues remain. This paper highlights the main issues relating to the methods and indicators used to monitor breastfeeding, particularly exclusive breastfeeding, at the population level. In doing so, it aims to support progress in this area. Discussion: Indicators are used primarily for comparative purposes and should be broadly consistent with recommended practice; regarding exclusive breastfeeding this is 'to six months'. There are limitations to both main methods used to measure and report on breastfeeding: current status (often 24-hour recall) and longer-term recall. Issues relate to how age is considered within the analysis and interpretation of data, including boundary points or cut-offs, as well as how breastfeeding practices are reported against different ages, especially regarding whether to use the preposition 'to' or 'at'. Other issues include the conversion from weeks to months, as well as the 'regular' versus 'first' introduction of something other than breast milk, to signify the deviation from exclusive breastfeeding. Differences in how data are collected, and uncertainties around how data are interpreted, have led to the mixed and often inaccurate reporting of breastfeeding practices, particularly exclusive breastfeeding. Assuming a particular definition of exclusive breastfeeding, such as that of the World Health Organization, the period over which exclusive breastfeeding is measured and how it is determined in the survey are important in relation to indicator phrasing. Often compromises are made in data collected to report against exclusive breastfeeding, despite subsequent

  8. The Experimental Measurement of Aerodynamic Heating About Complex Shapes at Supersonic Mach Numbers

    Science.gov (United States)

    Neumann, Richard D.; Freeman, Delma C.

    2011-01-01

In 2008 a wind tunnel test program was implemented to update the experimental data available for predicting protuberance heating at supersonic Mach numbers. For this test the Langley Unitary Wind Tunnel was also used. The significant differences for this current test were the advances in the state of the art in model design, fabrication techniques, instrumentation and data acquisition capabilities. This current paper provides a focused discussion of the results of an in-depth analysis of unique measurements of recovery temperature obtained during the test.

  9. System Architecture for measuring and monitoring Beam Losses in the Injector Complex at CERN

    CERN Document Server

    Zamantzas, C; Dehning, B; Jackson, S; Kwiatkowski, M; Vigano, W

    2012-01-01

    The strategy for beam setup and machine protection of the accelerators at the European Organisation for Nuclear Research (CERN) is mainly based on its Beam Loss Monitoring (BLM) systems. For their upgrade to higher beam energies and intensities, a new BLM system is under development with the aim of providing faster measurement updates with higher dynamic range and the ability to accept more types of detectors as input compared to its predecessors. In this paper, the architecture of the complete system is explored giving an insight to the design choices made to provide a highly reconfigurable system that is able to fulfil the different requirements of each accelerator using reprogrammable devices.

  10. Measuring and Calculative Complex for Registration of Quasi-Static and Dynamic Processes of Electromagnetic Irradiation

    Directory of Open Access Journals (Sweden)

    V. I. Ovchinnikov

    2007-01-01

The paper is devoted to the development of a measuring device to register dynamic processes of electromagnetic irradiation during the treatment of materials with the energy of explosion. Standard units for registering the main parameters of the explosion do not allow prediction and control of the process. To overcome the disadvantages of former control units, a new one based on Hall sensors has been developed. The device allows effective registration of the inductive component of the electromagnetic irradiation over a wide temperature range for many short-time processes.

  11. Recommendations for a first Core Outcome Measurement set for complex regional PAin syndrome Clinical sTudies (COMPACT).

    Science.gov (United States)

    Grieve, Sharon; Perez, Roberto Sgm; Birklein, Frank; Brunner, Florian; Bruehl, Stephen; Harden, R Norman; Packham, Tara; Gobeil, Francois; Haigh, Richard; Holly, Janet; Terkelsen, Astrid; Davies, Lindsay; Lewis, Jennifer; Thomassen, Ilona; Connett, Robyn; Worth, Tina; Vatine, Jean-Jacques; McCabe, Candida S

    2017-02-04

Complex Regional Pain Syndrome (CRPS) is a persistent pain condition that remains incompletely understood and challenging to treat. Historically, a wide range of different outcome measures have been used to capture the multidimensional nature of CRPS. This has been a significant limiting factor in the advancement of our understanding of the mechanisms and management of CRPS. In 2013, an international consortium of patients, clinicians, researchers and industry representatives was established to develop and agree on a minimum core set of standardised outcome measures for use in future CRPS clinical research, including but not limited to clinical trials within adult populations. The development of a core measurement set was informed through workshops and supplementary work, using an iterative consensus process. 'What is the clinical presentation and course of CRPS, and what factors influence it?' was agreed as the most pertinent research question that our standardised set of patient-reported outcome measures should be selected to answer. The domains encompassing the key concepts necessary to answer the research question were agreed as: pain, disease severity, participation and physical function, emotional and psychological function, self-efficacy, catastrophizing and the patient's global impression of change. The final core measurement set included the optimum generic or condition-specific patient-reported questionnaire outcome measures, which captured the essence of each domain, and one clinician-reported outcome measure to capture the degree of severity of CRPS. The next step is to test the feasibility and acceptability of collecting outcome measure data using the core measurement set in the CRPS population internationally.

  12. In vivo measurements of the triceps surae complex architecture in man: implications for muscle function.

    Science.gov (United States)

    Maganaris, C N; Baltzopoulos, V; Sargeant, A J

    1998-10-15

1. The objectives of this study were to (1) quantify experimentally in vivo changes in pennation angle, fibre length and muscle thickness in the triceps surae complex in man in response to changes in ankle position and isometric plantarflexion moment and (2) compare changes in the above muscle architectural characteristics occurring in the transition from rest to a given isometric plantarflexion intensity with the estimations of a planimetric muscle model assuming constant thickness and straight muscle fibres. 2. The gastrocnemius medialis (GM), gastrocnemius lateralis (GL) and soleus (SOL) muscles of six males were scanned with ultrasonography at different sites along and across the muscle belly at rest and during maximum voluntary contraction (MVC) trials at ankle angles of -15 deg (dorsiflexed direction), 0 deg (neutral position), +15 deg (plantarflexed direction) and +30 deg. Additional images were taken at 80, 60, 40 and 20% of MVC at an ankle angle of 0 deg. 3. In all three muscles and all scanned sites, as ankle angle increased from -15 to +30 deg, pennation increased (by 6-12 deg, 39-67%, P … MVC) and fibre length decreased (by 15-28 mm, 32-34%, P … MVC). Thickness in GL and SOL increased during MVC compared with rest (by 5-7 mm, 36-47%, P … 0.05) between rest and MVC. 4. At any given ankle angle the model underestimated changes in GL and SOL occurring in the transition from rest to MVC in pennation angle (by 9-12 deg, 24-38%, P …) … architecture during contraction compared with rest.

  13. Interpreting Ground Temperature Measurements for Thermophysical Properties on Complex Surfaces of the Moon and Mars

    Science.gov (United States)

    Vasavada, A. R.; Hamilton, V. E.; Team, M.

    2013-12-01

With the successful deployments of the Diviner radiometer on the Lunar Reconnaissance Orbiter and the REMS ground temperature sensor on the Curiosity Mars rover, records of ground temperature with high accuracy and finely sampled diurnal and seasonal cycles have become available. The detailed shapes of these temperature profiles allow inferences beyond just bulk thermophysical properties. Subtle (or sometimes significant) effects of surface roughness, slope, and lateral and vertical heterogeneity may be identified in the surface brightness temperature data. For example, changes in thermal or physical properties with depth in the shallow subsurface affect the conduction and storage of thermal energy. These affect the surface energy balance and therefore surface temperatures, especially the rate of cooling at night. Making unique determinations of subsurface soil properties requires minimizing the uncertainties introduced by other effects. On Mars, atmospheric aerosol opacity and wind-driven sensible heat fluxes also affect the diurnal and annual temperature profiles. On both bodies, variations in thermal inertia, slopes, roughness, albedo, and emissivity within the radiometer footprint will cause the composite brightness temperature to differ from a kinetic temperature. Nevertheless, we have detected potential effects of complex surfaces in the temperature data from both Diviner and Curiosity. On the Moon, the results reveal a nearly ubiquitous surface structure, created mechanically by impact gardening, that controls the thermal response of the surface. On Mars, the thermal response is controlled primarily by grain size, cementation, lithification, and composition. However, the secondary effects of near-surface layering aid in the interpretation of stratigraphy and in the identification of geologic processes that have altered the surface.

  14. Designing diagnostics in complexity: Measuring technical and contextual aspects in monitoring and evaluation systems

    Directory of Open Access Journals (Sweden)

    Caitlin Blaser Mapitsa

    2017-04-01

Background: This article emphasizes the importance of reflecting on the methods employed when designing diagnostic tools for monitoring and evaluation (M&E) systems. It sheds light on a broader debate about how we understand and assess M&E systems within their political and organisational contexts. Objectives: The article looks at what divergent purposes of M&E mean for how M&E systems are assessed, and how context-appropriate diagnostic studies can be designed. Method: The article draws on two different approaches: a survey that looks at the technical components of an M&E system and a complexity framework that analyses the way a system functions in a broader political and organisational context. The foundation is provided by survey and interview data from over 70 officials from across the City of Johannesburg's administration. Results: The study revealed great diversity as to respondents' understanding of what M&E structures and processes should do and achieve within the city, ranging from a management function closely linked to auditing and oversight responsibilities to a governance role that is more linked to learning and planning. Limitations in M&E capacity and/or performance were linked to contested political and bureaucratic structures. Conclusion: The mixed method approach to diagnostics proposed in this article contributes to the call in the 'Made in Africa' debate for more contextualised methods and tools around the practice and the assessment of M&E. The article proposes the development of a synthetic tool that covers both M&E technical components and capacity on one hand, and an analysis of how these are embedded in a political and organisational context on the other.

  15. Circular dichroism measured on single chlorosomal light-harvesting complexes of green photosynthetic bacteria

    KAUST Repository

    Furumaki, Shu

    2012-12-06

We report results on circular dichroism (CD) measured on single immobilized chlorosomes of a triple mutant of the green sulfur bacterium Chlorobaculum tepidum. The CD signal is measured by monitoring chlorosomal bacteriochlorophyll c fluorescence excited by alternate left and right circularly polarized laser light with a fixed wavelength of 733 nm. The excitation wavelength is close to a maximum of the negative CD signal of a bulk solution of the same chlorosomes. The average CD dissymmetry parameter obtained from an ensemble of individual chlorosomes was gs = -0.025, with an intrinsic standard deviation (due to variations between individual chlorosomes) of 0.006. The dissymmetry value is about 2.5 times larger than that obtained at the same wavelength in the bulk solution. The difference can be satisfactorily explained by taking into account the orientation factor in the single-chlorosome experiments. The observed distribution of the dissymmetry parameter reflects the well-ordered nature of the mutant chlorosomes.
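
The dissymmetry parameter reported above can be computed from fluorescence counts under left- and right-circular excitation as g = 2(I_L - I_R)/(I_L + I_R); a minimal sketch with invented photon counts:

```python
def dissymmetry(i_left, i_right):
    """Fluorescence-detected CD dissymmetry: g = 2 (I_L - I_R) / (I_L + I_R)."""
    return 2.0 * (i_left - i_right) / (i_left + i_right)

# Invented photon counts per integration window at 733 nm excitation
g = dissymmetry(i_left=9875, i_right=10125)
print(f"g = {g:.3f}")   # -0.025, the sign and scale reported for chlorosomes
```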

  16. Quantification of Soil Pore Network Complexity with X-ray Computed Tomography and Gas Transport Measurements

    DEFF Research Database (Denmark)

    Katuwal, Sheela; Arthur, Emmanuel; Tuller, Markus

    2015-01-01

Flow and transport of gases through soils are largely controlled by pore structural attributes. The quantification of pore network characteristics is therefore essential for accurate prediction of air permeability and gas diffusivity. In this study, the pore network characteristics of seven different soils subjected to 22 mo of field regeneration were quantified with X-ray computed tomography (CT) and compared with functional pore characteristics estimated from measurements of air permeability and gas diffusivity. Furthermore, predictive models for air permeability and gas diffusivity were … equivalent pore diameter in predictive gas diffusivity and air permeability models significantly improved their performance. The obtained results suggest that the application of X-ray CT-derived pore-structural parameters has great potential for predicting gas diffusivity and air permeability.

  17. The Eye-Key Span as a Measure for Translation Complexity

    DEFF Research Database (Denmark)

    Carl, Michael; Schaeffer, Moritz

    Dragsted (Dragsted & Hansen, 2008; Dragsted, 2010) developed the eye-key span (EKS) in reference to the ear-voice span which is used to describe the distance between input and output during simultaneous interpreting, typically measured in words or seconds (e.g. Defrancq, 2015). The EKS during...... from Dragsted (Dragsted & Hansen, 2008; Dragsted, 2010) based on a corpous of 12,474 ST words, 3,242 unique ST items, 108 participants and 12 different texts. We use R and the lme4 (Bates, Maechler, Bolker, & Walker, 2014) and languageR (Baayen, 2013) packages to perform (general) linear mixed......-effects models ((G)LMEMs). Our findings support and extend those of Dragsted and underpin Schaeffer and Carl (2013), who argued that translation is best understood as both an early and a late effect, i.e., early, relatively automatic processes which are highly bilingual in nature and late processes which...

  18. The complex of measures on inclusion of small businesses in innovation clusters

    Directory of Open Access Journals (Sweden)

    A. V. Kupchinsky

    2016-01-01

Modern management practice and its reflection in scientific publications demonstrate that the development of the world economy proves, with all evidence, the major role and importance of the small business sector in the national economy. In the modern world the national economy has come to be determined in many respects by the balanced and sustainable development of small business structures, now recognized as conductors and creators of new discoveries and technologies and, moreover, as a strategic instrument of the structural transformation of a modern economic system, often directed at a qualitative increase in the efficiency of the reproduction process of a regional economy. At present the level of development of innovative entrepreneurship in Russia is very low, and one can state the lack of a properly formed institutional environment for the development of small entrepreneurship in the innovation sphere. Clusterisation is the process of consolidating a number of organizations from various industries in order to increase competitiveness, implement innovations, develop effectively and obtain other benefits. Following the separation of the economy into real and virtual parts, the possibility of creating both real and virtual clusters increases. The creation and development of regional clusters will help to generate the necessary level of activity of small business structures in innovation, which will favorably affect the competitiveness of both the regional and the national economy. A package of measures, including measures for the involvement of small business structures in clusters, is developed for promoting cluster initiatives and increasing the innovative development of the region. Application of this program will make it possible to achieve a synergy effect through a high degree of concentration and cooperation of small business structures and an increase in the effectiveness of their activities.

  19. An investigation of ozone and planetary boundary layer dynamics over the complex topography of Grenoble combining measurements and modeling

    Directory of Open Access Journals (Sweden)

    O. Couach

    2003-01-01

This paper concerns an evaluation of ozone (O3) and planetary boundary layer (PBL) dynamics over the complex topography of the Grenoble region through a combination of measurements and mesoscale model (METPHOMOD) predictions for three days during July 1999. The measurements of O3 and PBL structure were obtained with a Differential Absorption Lidar (DIAL) system, situated 20 km south of Grenoble at Vif (310 m ASL). The combined lidar observations and model calculations are in good agreement with atmospheric measurements obtained with an instrumented aircraft (METAIR). Ozone fluxes were calculated using lidar measurements of vertical ozone concentration profiles and the horizontal wind speeds measured with a Radar Doppler wind profiler (DEGREANE). The ozone flux patterns indicate that the diurnal cycle of ozone production is controlled by local thermal winds. The convective PBL maximum height was some 2700 m above the land surface, while the nighttime residual ozone layer was generally found between 1200 and 2200 m. Finally, we evaluate the magnitude of the ozone processes at different altitudes in order to estimate the photochemical ozone production due to the primary pollutant emissions of Grenoble city and the regional network of automobile traffic.

  20. Traceable measurement and imaging of the complex permittivity of a multiphase mineral specimen at micron scales using a microwave microscope.

    Science.gov (United States)

    Gregory, A P; Blackburn, J F; Hodgetts, T E; Clarke, R N; Lees, K; Plint, S; Dimitrakis, G A

    2017-01-01

This paper describes traceable measurements of the dielectric permittivity and loss tangent of a multiphase material (particulate rock set in epoxy) at micron scales using a resonant Near-Field Scanning Microwave Microscope (NSMM) at 1.2 GHz. Calibration and extraction of the permittivity and loss tangent are via an image charge analysis which has been modified by the use of the complex frequency to make it applicable to high-loss materials. The results presented are obtained using a spherical probe tip, 0.1 mm in diameter, and also a conical probe tip with a rounded end 0.01 mm in diameter, which allows imaging with higher resolution (≈10 µm). The microscope is calibrated using approach-curve data over a restricted range of gaps (typically between 1% and 10% of tip diameter) as this is found to give the best measurement accuracy. For both tips the uncertainty of scanned measurements of permittivity is estimated to be ±10% (at coverage factor k=2) for permittivity ≲ 10. Loss tangent can be resolved to approximately 0.001. Subject to this limit, the uncertainty of loss tangent measurements is estimated to be ±20% (at k=2). The reported measurements inform studies of how microwave energy interacts with multiphase materials containing microwave-absorbent phases.

  1. Direct measurement of the Mn(II) hydration state in metal complexes and metalloproteins through 17O NMR line widths.

    Science.gov (United States)

    Gale, Eric M; Zhu, Jiang; Caravan, Peter

    2013-12-11

    Here we describe a simple method to estimate the inner-sphere hydration state of the Mn(II) ion in coordination complexes and metalloproteins. The line width of bulk H2(17)O is measured in the presence and absence of Mn(II) as a function of temperature, and transverse (17)O relaxivities are calculated. It is demonstrated that the maximum (17)O relaxivity is directly proportional to the number of inner-sphere water ligands (q). Using a combination of literature data and experimental data for 12 Mn(II) complexes, we show that this method provides accurate estimates of q with an uncertainty of ±0.2 water molecules. The method can be implemented on commercial NMR spectrometers working at fields of 7 T and higher. The hydration number can be obtained for micromolar Mn(II) concentrations. We show that the technique can be extended to metalloproteins or complex:protein interactions. For example, Mn(II) binds to the multimetal binding site A on human serum albumin with two inner-sphere water ligands that undergo rapid exchange (1.06 × 10(8) s(-1) at 37 °C). The possibility of extending this technique to other metal ions such as Gd(III) is discussed.
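
A minimal sketch of the line-width arithmetic described above, assuming the standard relation R2 = pi x FWHM for a Lorentzian line; the concentrations, line widths, and the q = 1 reference relaxivity below are invented placeholders, not the paper's calibration:

```python
import math

def r2_relaxivity_17O(fwhm_sample_hz, fwhm_blank_hz, mn_mM):
    """Paramagnetic transverse 17O relaxivity in s^-1 mM^-1 (R2 = pi * FWHM)."""
    return math.pi * (fwhm_sample_hz - fwhm_blank_hz) / mn_mM

r2 = r2_relaxivity_17O(fwhm_sample_hz=85.0, fwhm_blank_hz=50.0, mn_mM=0.1)
# q is read off by comparing the maximum of r2(T) with a known q = 1 standard
R2_MAX_PER_Q = 1100.0   # hypothetical q = 1 reference value, s^-1 mM^-1
print(f"r2 = {r2:.0f} s^-1 mM^-1, q ~ {r2 / R2_MAX_PER_Q:.1f}")
```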

  2. The general theory of the Quasi-reproducible experiments: How to describe the measured data of complex systems?

    Science.gov (United States)

    Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei

    2017-01-01

In this paper, we suggest a general theory that makes it possible to describe experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we mean a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. Such a microscopic model is usually determined as "the best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of the segment of the Prony series. The segment of the Prony series, including the set of decomposition coefficients and the set of exponential functions (with k = 1,2,…,K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul 27 (2015), pp. 175-192] for reproducible experiments and includes the previous results as a partial case. In this paper, we consider a more complex case, when the available data form short samplings or exhibit some instability during the process of measurement. We give justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of a reduced set of fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors expressed in the form of the apparatus function is discussed. To illustrate how to apply the theory and take advantage of its
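
As a stand-in illustration of fitting a K-term Prony segment to uniformly sampled data (the paper's own algorithm for finding the exponential functions is different and is not reproduced here), the classic Prony method on noiseless synthetic data:

```python
import numpy as np

def prony(y, K, dt):
    """Classic Prony fit: y[t] ~ sum_k c_k * exp(lam_k * t) on a uniform grid."""
    n = len(y)
    # 1) linear prediction: y[m] = -(a_1 y[m-1] + ... + a_K y[m-K])
    A = np.column_stack([y[K - k - 1:n - k - 1] for k in range(K)])
    a = np.linalg.lstsq(A, -y[K:], rcond=None)[0]
    # 2) roots of z^K + a_1 z^(K-1) + ... + a_K give the exponents
    z = np.roots(np.concatenate(([1.0], a))).astype(complex)
    lam = np.log(z) / dt
    # 3) amplitudes from a Vandermonde least-squares problem
    V = np.power.outer(z, np.arange(n)).T
    c = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return c, lam

dt = 0.01
t = np.arange(200) * dt
y = 2.0 * np.exp(-1.5 * t) + 0.5 * np.exp(-8.0 * t)   # synthetic two-mode decay
c, lam = prony(y, K=2, dt=dt)
print(np.real(c), np.real(lam))   # ~[0.5, 2.0] and ~[-8.0, -1.5]; order may vary
```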

  3. Unstable work histories and fertility in France: An adaptation of sequence complexity measures to employment trajectories

    Directory of Open Access Journals (Sweden)

    Daniel Ciganda

    2015-04-01

    Full Text Available Background: The emergence of new evidence suggesting a sign shift in the long-standing negativecorrelation between prosperity and fertility levels has sparked a renewed interest in understanding the relationship between economic conditions and fertility decisions. In thiscontext, the notion of uncertainty has gained relevance in analyses of low fertility. So far, most studies have approached this notion using snapshot indicators such as type of contract or employment situation. However, these types of measures seem to be fallingshort in capturing what is intrinsically a dynamic process. Objective: Our first objective is to analyze to what extent employment trajectories have become lessstable over time, and the second, to determine whether or not employment instability has an impact on the timing and quantum of fertility in France.Additionally, we present a new indicator of employment instability that takes into account both the frequency and duration of unemployment, with the objective of comparing its performance against other, more commonly used indicators of economic uncertainty. Methods: Our study combines exploratory (Sequence Analysis with confirmatory (Event History, Logistic Regression methods to understand the relationship between early life-course uncertainty and the timing and intensity of fertility. We use employment histories from the three available waves of the Etude des relations familiales et intergenerationnelles (ERFI, a panel survey carried out by INED and INSEE which constitutes the base of the Generations and Gender Survey (GGS in France. Results: Although France is characterized by strong family policies and high and stable fertility levels, we find that employment instability not only has a strong and persistent negative effect on the final number of children for both men and women, but also contributes to fertility postponement in the case of men.Regarding the timing of the transition to motherhood, we show how

  4. Effectiveness of electrolyzed acidic water in killing Escherichia coli O157:H7, Salmonella enteritidis, and Listeria monocytogenes on the surfaces of tomatoes.

    Science.gov (United States)

    Bari, M L; Sabina, Y; Isobe, S; Uemura, T; Isshiki, K

    2003-04-01

    A study was conducted to evaluate the efficacy of electrolyzed acidic water, 200-ppm chlorine water, and sterile distilled water in killing Escherichia coli O157:H7, Salmonella, and Listeria monocytogenes on the surfaces of spot-inoculated tomatoes. Inoculated tomatoes were sprayed with electrolyzed acidic water, 200-ppm chlorine water, and sterile distilled water (control) and rubbed by hand for 40 s. Populations of E. coli O157:H7, Salmonella, and L. monocytogenes in the rinse water and in the peptone wash solution were determined. Treatment with 200-ppm chlorine water and electrolyzed acidic water resulted in 4.87- and 7.85-log10 reductions, respectively, in Escherichia coli O157:H7 counts and 4.69- and 7.46-log10 reductions, respectively, in Salmonella counts. Treatment with 200-ppm chlorine water and electrolyzed acidic water reduced the number of L. monocytogenes by 4.76 and 7.54 log10 CFU per tomato, respectively. This study's findings suggest that electrolyzed acidic water could be useful in controlling pathogenic microorganisms on fresh produce.
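
The reported log10 reductions follow the usual definition log10(N_before / N_after); a one-line sketch with invented counts:

```python
import math

def log10_reduction(cfu_before, cfu_after):
    """Reduction in log10 units: log10(N_before / N_after)."""
    return math.log10(cfu_before / cfu_after)

# Invented counts: a 7.85-log10 reduction leaves ~10^-7.85 of the population
print(log10_reduction(cfu_before=1.0e9, cfu_after=1.0e9 / 10 ** 7.85))  # 7.85
```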

  5. A complexity measure based method for studying the dependence of 222Rn concentration time series on indoor air temperature and humidity.

    Science.gov (United States)

    Mihailovic, D T; Udovičić, V; Krmar, M; Arsenić, I

    2014-02-01

We have suggested a complexity measure based method for studying the dependence of measured (222)Rn concentration time series on indoor air temperature and humidity. This method is based on the Kolmogorov complexity (KL). We have introduced (i) the sequence of the KL, (ii) the Kolmogorov complexity highest value in the sequence (KLM) and (iii) the KL of the product of time series. The observed loss of the KLM complexity of (222)Rn concentration time series can be attributed to the indoor air humidity that keeps the radon daughters in air.

  6. A complexity measure based method for studying the dependence of 222Rn concentration time series on indoor air temperature and humidity

    CERN Document Server

    Mihailovic, Dragutin T; Krmar, Miodrag; Arsenić, Ilija

    2013-01-01

    We have suggested a complexity measure based method for studying the dependence of measured 222Rn concentration time series on indoor air temperature and humidity. This method is based on the Kolmogorov complexity (KL). We have introduced (i) the sequence of the KL, (ii) the Kolmogorov complexity highest value in the sequence (KLM) and (iii) the KL of the product of time series. The observed loss of the KLM complexity of 222Rn concentration time series can be attributed to the indoor air humidity that keeps the radon daughters in the air.

  7. Modelling multi-protein complexes using PELDOR distance measurements for rigid body minimisation experiments using XPLOR-NIH

    DEFF Research Database (Denmark)

    Hammond, Colin M; Owen-Hughes, Tom; Norman, David G

    2014-01-01

    Crystallographic and NMR approaches have provided a wealth of structural information about protein domains. However, these domains are often found as components of larger multi-domain polypeptides or complexes. Orienting domains within such contexts can provide powerful new insight into their function. The combination of site-specific spin labelling and Pulsed Electron Double Resonance (PELDOR) provides a means of obtaining structural measurements that can be used to generate models describing how such domains are oriented. Here we describe a pipeline for modelling the attachment of thiol-reactive nitroxyl spin labels at engineered sites on the histone chaperone Vps75. We then use a combination of experimentally determined measurements and symmetry constraints to model the orientation in which homodimers of Vps75 associate to form homotetramers using the XPLOR-NIH platform. This provides...

  8. Unraveling the complexity of protein backbone dynamics with combined (13)C and (15)N solid-state NMR relaxation measurements.

    Science.gov (United States)

    Lamley, Jonathan M; Lougher, Matthew J; Sass, Hans Juergen; Rogowski, Marco; Grzesiek, Stephan; Lewandowski, Józef R

    2015-09-14

    Typically, protein dynamics involve a complex hierarchy of motions occurring on different time scales between conformations separated by a range of different energy barriers. NMR relaxation can in principle provide a site-specific picture of both the time scales and amplitudes of these motions, but independent relaxation rates sensitive to fluctuations in different time scale ranges are required to obtain a faithful representation of the underlying dynamic complexity. This is especially pertinent for relaxation measurements in the solid state, which report on dynamics in a window of time scales broader by more than 3 orders of magnitude compared to solution NMR relaxation. To aid in unraveling the intricacies of biomolecular dynamics we introduce (13)C spin-lattice relaxation in the rotating frame (R1ρ) as a probe of backbone nanosecond-microsecond motions in proteins in the solid state. We present measurements of (13)C' R1ρ rates in fully protonated crystalline protein GB1 at 600 and 850 MHz (1)H Larmor frequencies and compare them to (13)C' R1, (15)N R1 and R1ρ measured under the same conditions. The addition of carbon relaxation data to the model-free analysis of nitrogen relaxation data leads to greatly improved characterization of time scales of protein backbone motions, minimizing the occurrence of fitting artifacts that may be present when (15)N data are used alone. We also discuss how internal motions characterized by different time scales contribute to (15)N and (13)C relaxation rates in the solid state and solution state, leading to fundamental differences between them, as well as phenomena such as underestimation of picosecond-range motions in the solid state and nanosecond-range motions in solution.

  9. Robust estimation of fractal measures for characterizing the structural complexity of the human brain: optimization and reproducibility.

    Science.gov (United States)

    Goñi, Joaquín; Sporns, Olaf; Cheng, Hu; Aznárez-Sanado, Maite; Wang, Yang; Josa, Santiago; Arrondo, Gonzalo; Mathews, Vincent P; Hummer, Tom A; Kronenberger, William G; Avena-Koenigsberger, Andrea; Saykin, Andrew J; Pastor, María A

    2013-12-01

    High-resolution isotropic three-dimensional reconstructions of human brain gray and white matter structures can be characterized to quantify aspects of their shape, volume and topological complexity. In particular, methods based on fractal analysis have been applied in neuroimaging studies to quantify the structural complexity of the brain in both healthy and impaired conditions. The usefulness of such measures for characterizing individual differences in brain structure critically depends on their within-subject reproducibility in order to allow the robust detection of between-subject differences. This study analyzes key analytic parameters of three fractal-based methods that rely on the box-counting algorithm, with the aim of maximizing within-subject reproducibility of the fractal characterizations of different brain objects, including the pial surface, the cortical ribbon volume, the white matter volume and the gray matter/white matter boundary. Two separate datasets originating from different imaging centers were analyzed, comprising 50 subjects with three and 24 subjects with four successive scanning sessions per subject, respectively. The reproducibility of fractal measures was statistically assessed by computing their intra-class correlations. Results reveal differences between different fractal estimators and allow the identification of several parameters that are critical for high reproducibility. Highest reproducibility, with intra-class correlations in the range of 0.9-0.95, is achieved with the correlation dimension. Further analyses of the fractal dimensions of parcellated cortical and subcortical gray matter regions suggest robustly estimated and region-specific patterns of individual variability. These results are valuable for defining appropriate parameter configurations when studying changes in fractal descriptors of human brain structure, for instance in studies of neurological diseases that do not allow repeated measurements or for disease
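
    The box-counting algorithm on which these estimators rely is compact: count the occupied boxes N(s) at several box sizes s and take the slope of log N(s) against log(1/s). A minimal sketch on a binary volume (the correlation-dimension estimator that proved most reproducible in the study differs in detail):

        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            """Estimate the fractal dimension of a 3-D binary mask by box counting."""
            counts = []
            for s in sizes:
                dims = [(d // s) * s for d in mask.shape]  # trim so boxes tile evenly
                m = mask[:dims[0], :dims[1], :dims[2]]
                boxes = m.reshape(dims[0] // s, s, dims[1] // s, s, dims[2] // s, s)
                counts.append(boxes.any(axis=(1, 3, 5)).sum())
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        # Synthetic stand-in for a segmented brain structure.
        volume = np.random.default_rng(1).random((64, 64, 64)) > 0.7
        print(f"box-counting dimension: {box_counting_dimension(volume):.2f}")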

  10. Ensemble Classification of Alzheimer's Disease and Mild Cognitive Impairment Based on Complex Graph Measures from Diffusion Tensor Images

    Science.gov (United States)

    Ebadi, Ashkan; Dalboni da Rocha, Josué L.; Nagaraju, Dushyanth B.; Tovar-Moll, Fernanda; Bramati, Ivanei; Coutinho, Gabriel; Sitaram, Ranganatha; Rashidi, Parisa

    2017-01-01

    The human brain is a complex network of interacting regions. The gray matter regions of brain are interconnected by white matter tracts, together forming one integrative complex network. In this article, we report our investigation about the potential of applying brain connectivity patterns as an aid in diagnosing Alzheimer's disease and Mild Cognitive Impairment (MCI). We performed pattern analysis of graph theoretical measures derived from Diffusion Tensor Imaging (DTI) data representing structural brain networks of 45 subjects, consisting of 15 patients of Alzheimer's disease (AD), 15 patients of MCI, and 15 healthy subjects (CT). We considered pair-wise class combinations of subjects, defining three separate classification tasks, i.e., AD-CT, AD-MCI, and CT-MCI, and used an ensemble classification module to perform the classification tasks. Our ensemble framework with feature selection shows a promising performance with classification accuracy of 83.3% for AD vs. MCI, 80% for AD vs. CT, and 70% for MCI vs. CT. Moreover, our findings suggest that AD can be related to graph measures abnormalities at Brodmann areas in the sensorimotor cortex and piriform cortex. In this way, node redundancy coefficient and load centrality in the primary motor cortex were recognized as good indicators of AD in contrast to MCI. In general, load centrality, betweenness centrality, and closeness centrality were found to be the most relevant network measures, as they were the top identified features at different nodes. The present study can be regarded as a “proof of concept” about a procedure for the classification of MRI markers between AD dementia, MCI, and normal old individuals, due to the small and not well-defined groups of AD and MCI patients. Future studies with larger samples of subjects and more sophisticated patient exclusion criteria are necessary toward the development of a more precise technique for clinical diagnosis. PMID:28293162
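
    The feature-extraction step described here, node-wise graph measures feeding a classifier, can be sketched briefly. The centrality measures named in the abstract exist in networkx under these names; the connectomes, labels, and the random-forest stand-in for the paper's ensemble module are hypothetical placeholders:

        import networkx as nx
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def graph_features(adjacency):
            """Concatenate node-wise centralities from a structural connectivity matrix."""
            g = nx.from_numpy_array(adjacency)
            feats = []
            for measure in (nx.betweenness_centrality, nx.closeness_centrality, nx.load_centrality):
                values = measure(g)
                feats.extend(values[node] for node in sorted(values))
            return np.array(feats)

        rng = np.random.default_rng(2)
        def random_connectome(size=20):
            a = rng.random((size, size))
            return (a + a.T) / 2  # symmetric, as for undirected structural networks

        X = np.array([graph_features(random_connectome()) for _ in range(30)])
        y = np.array([0] * 15 + [1] * 15)  # hypothetical AD vs. MCI labels

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))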

  11. Aerosol Disinfection Capacity of Slightly Acidic Hypochlorous Acid Water Towards Newcastle Disease Virus in the Air: An In Vivo Experiment.

    Science.gov (United States)

    Hakim, Hakimullah; Thammakarn, Chanathip; Suguro, Atsushi; Ishida, Yuki; Nakajima, Katsuhiro; Kitazawa, Minori; Takehara, Kazuaki

    2015-12-01

    Existence of bioaerosol contaminants in farms and outbreaks of some infectious organisms with the ability of transmission by air increase the need for enhancement of biosecurity, especially for the application of aerosol disinfectants. Here we selected slightly acidic hypochlorous acid water (SAHW) as a candidate and evaluated its virucidal efficacy toward a virus in the air. Three-day-old conventional chicks were challenged with 25 doses of Newcastle disease live vaccine (B1 strain) by spray with nebulizer (particle size [...]). Newcastle disease virus (NDV) strain Sato, too, was immediately inactivated by SAHW containing 50 ppm chlorine in the aqueous phase. These data suggest that SAHW containing 100 ppm chlorine can be used for aerosol disinfection of NDV in farms.

  12. Precipitation of arsenic sulphide from acidic water in a fixed-film bioreactor.

    Science.gov (United States)

    Battaglia-Brunet, Fabienne; Crouzet, Catherine; Burnol, André; Coulon, Stéphanie; Morin, Dominique; Joulian, Catherine

    2012-08-01

    Arsenic (As) is a toxic element frequently present in acid mine waters and effluents. Precipitation of trivalent arsenic sulphide in sulphate-reducing conditions at low pH has been studied with the aim of removing this hazardous element in a waste product with high As content. To achieve this, a 400 mL fixed-film column bioreactor was fed continuously with a synthetic solution containing 100 mg L(-1) As(V), glycerol and/or hydrogen, at pH values between 2.7 and 5. The highest global As removal rate obtained during these experiments was close to 2.5 mg L(-1) h(-1). A switch from glycerol to hydrogen when the biofilm was mature induced an abrupt increase in the sulphate-reducing activity, resulting in a dramatic mobilisation of arsenic due to the formation of soluble thioarsenic complexes. A new analytical method, based on ionic chromatography, was used to evaluate the proportion of As present as thioarsenic complexes in the bioreactor. Profiles of pH, total As and sulphate concentrations suggest that As removal efficiency was linked to solubility of orpiment (As(2)S(3)) depending on pH conditions. Molecular fingerprints revealed fairly homogeneous bacterial colonisation throughout the reactor. The bacterial community was diverse and included fermenting bacteria and Desulfosporosinus-like sulphate-reducing bacteria. arrA genes, involved in dissimilatory reduction of As(V), were found and the retrieved sequences suggested that As(V) was reduced by a Desulfosporosinus-like organism. This study was the first to show that As can be removed by bioprecipitation of orpiment from acidic solution containing up to 100 mg L(-1) As(V) in a bioreactor.

  13. Reaction kinetics and critical phenomena: iodination of acetone in isobutyric acid + water near the consolute point.

    Science.gov (United States)

    Hu, Baichuan; Baird, James K

    2010-01-14

    The rate of iodination of acetone has been measured as a function of temperature in the binary solvent isobutyric acid (IBA) + water near the upper consolute point. The reaction mixture was prepared by the addition of acetone, iodine, and potassium iodide to IBA + water at its critical composition of 38.8 mass % IBA. The value of the critical temperature determined immediately after mixing was 25.43 degrees C. Aliquots were extracted from the mixture at regular intervals in order to follow the time course of the reaction. After dilution of the aliquot with water to quench the reaction, the concentration of triiodide ion was determined by the measurement of the optical density at a wavelength of 565 nm. These measurements showed that the kinetics were zeroth order. When at the end of 24 h the reaction had come to equilibrium, the critical temperature was determined again and found to be 24.83 degrees C. An Arrhenius plot of the temperature dependence of the observed rate constant, k(obs), was linear over the temperature range 27.00-38.00 degrees C, but between 25.43 and 27.00 degrees C, the values of k(obs) fell below the extrapolation of the Arrhenius line. This behavior is evidence in support of critical slowing down. Our experimental method and results are significant in three ways: (1) In contrast to in situ measurements of optical density, the determination of the optical density of diluted aliquots avoided any interference from critical opalescence. (2) The measured reaction rate exhibited critical slowing down. (3) The rate law was pseudo zeroth order both inside and outside the critical region, indicating that the reaction mechanism was unaffected by the presence of the critical point.
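
    The Arrhenius analysis used here is a straight-line fit of ln k_obs against 1/T, whose slope gives -Ea/R; points falling below the line near the critical temperature then expose critical slowing down. A worked sketch with hypothetical rate constants (the paper's measured values are not reproduced here):

        import numpy as np

        # Hypothetical (T, k_obs) pairs in the linear Arrhenius region.
        T = np.array([300.15, 303.15, 306.15, 309.15, 311.15])      # K
        k_obs = np.array([1.2e-6, 1.7e-6, 2.4e-6, 3.3e-6, 4.1e-6])  # observed rate constants

        # ln k = ln A - Ea/(R T): slope of ln k versus 1/T is -Ea/R.
        R = 8.314  # J mol^-1 K^-1
        slope, intercept = np.polyfit(1.0 / T, np.log(k_obs), 1)
        print(f"activation energy: {-slope * R / 1000:.0f} kJ/mol")  # ~87 for these inputs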

  14. Increasing the sensitivity of NMR diffusion measurements by paramagnetic longitudinal relaxation enhancement, with application to ribosome–nascent chain complexes

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Sammy H. S.; Waudby, Christopher A.; Cassaignau, Anaïs M. E.; Cabrita, Lisa D.; Christodoulou, John, E-mail: j.christodoulou@ucl.ac.uk [University College London and Birkbeck College, Institute of Structural and Molecular Biology (United Kingdom)

    2015-10-15

    The translational diffusion of macromolecules can be examined non-invasively by stimulated echo (STE) NMR experiments to accurately determine their molecular sizes. These measurements can be important probes of intermolecular interactions and protein folding and unfolding, and are crucial in monitoring the integrity of large macromolecular assemblies such as ribosome–nascent chain complexes (RNCs). However, NMR studies of these complexes can be severely constrained by their slow tumbling, low solubility (with maximum concentrations of up to 10 μM), and short lifetimes resulting in weak signal, and therefore continuing improvements in experimental sensitivity are essential. Here we explore the use of the paramagnetic longitudinal relaxation enhancement (PLRE) agent NiDO2A on the sensitivity of {sup 15}N XSTE and SORDID heteronuclear STE experiments, which can be used to monitor the integrity of these unstable complexes. We exploit the dependence of the PLRE effect on the gyromagnetic ratio and electronic relaxation time to accelerate recovery of {sup 1}H magnetization without adversely affecting storage on N{sub z} during diffusion delays or introducing significant transverse relaxation line broadening. By applying the longitudinal relaxation-optimized SORDID pulse sequence together with NiDO2A to 70S Escherichia coli ribosomes and RNCs, NMR diffusion sensitivity enhancements of up to 4.5-fold relative to XSTE are achieved, alongside ∼1.9-fold improvements in two-dimensional NMR sensitivity, without compromising the sample integrity.

  15. Rotational study of the CH{sub 4}–CO complex: Millimeter-wave measurements and ab initio calculations

    Energy Technology Data Exchange (ETDEWEB)

    Surin, L. A., E-mail: surin@ph1.uni-koeln.de [I. Physikalisches Institut, University of Cologne, Zülpicher St. 77, 50937 Cologne (Germany); Institute of Spectroscopy, Russian Academy of Sciences, Fizicheskaya St. 5, 142190 Troitsk, Moscow (Russian Federation); Tarabukin, I. V.; Panfilov, V. A. [Institute of Spectroscopy, Russian Academy of Sciences, Fizicheskaya St. 5, 142190 Troitsk, Moscow (Russian Federation); Schlemmer, S. [I. Physikalisches Institut, University of Cologne, Zülpicher St. 77, 50937 Cologne (Germany); Kalugina, Y. N. [Department of Optics and Spectroscopy, Tomsk State University, 36 Lenin Ave., 634050 Tomsk (Russian Federation); Faure, A.; Rist, C. [University Grenoble Alpes, IPAG, F-38000 Grenoble (France); CNRS, IPAG, F-38000 Grenoble (France); Avoird, A. van der, E-mail: A.vanderAvoird@theochem.ru.nl [Theoretical Chemistry, Institute for Molecules and Materials, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2015-10-21

    The rotational spectrum of the van der Waals complex CH{sub 4}–CO has been measured with the intracavity OROTRON jet spectrometer in the frequency range of 110–145 GHz. Newly observed and assigned transitions belong to the K = 2–1 subband correlating with the rotationless j{sub CH4} = 0 ground state and the K = 2–1 and K = 0–1 subbands correlating with the j{sub CH4} = 2 excited state of free methane. The (approximate) quantum number K is the projection of the total angular momentum J on the intermolecular axis. The new data were analyzed together with the known millimeter-wave and microwave transitions in order to determine the molecular parameters of the CH{sub 4}–CO complex. Accompanying ab initio calculations of the intermolecular potential energy surface (PES) of CH{sub 4}–CO have been carried out at the explicitly correlated coupled cluster level of theory with single, double, and perturbative triple excitations [CCSD(T)-F12a] and an augmented correlation-consistent triple zeta (aVTZ) basis set. The global minimum of the five-dimensional PES corresponds to an approximately T-shaped structure with the CH{sub 4} face closest to the CO subunit and binding energy D{sub e} = 177.82 cm{sup −1}. The bound rovibrational levels of the CH{sub 4}–CO complex were calculated for total angular momentum J = 0–6 on this intermolecular potential surface and compared with the experimental results. The calculated dissociation energies D{sub 0} are 91.32, 94.46, and 104.21 cm{sup −1} for A (j{sub CH4} = 0), F (j{sub CH4} = 1), and E (j{sub CH4} = 2) nuclear spin modifications of CH{sub 4}–CO, respectively.

  16. The improvement of the energy resolution in epi-thermal neutron region of Bonner sphere using boric acid water solution moderator.

    Science.gov (United States)

    Ueda, H; Tanaka, H; Sakurai, Y

    2015-10-01

    The Bonner sphere is useful for evaluating the neutron spectrum in detail. We are improving the energy resolution in the epi-thermal neutron region of the Bonner sphere by using boric acid water solution as a moderator. Its response-function peak is narrower than that for a polyethylene moderator, so an improvement in resolution is expected. The resolutions for the polyethylene moderator and the boric acid water solution moderator were compared by simulation calculation. The influence of uncertainty in the Bonner sphere configuration on spectrum estimation was also simulated.

  17. Reprint of The improvement of the energy resolution in epi-thermal neutron region of Bonner sphere using boric acid water solution moderator.

    Science.gov (United States)

    Ueda, H; Tanaka, H; Sakurai, Y

    2015-12-01

    The Bonner sphere is useful for evaluating the neutron spectrum in detail. We are improving the energy resolution in the epi-thermal neutron region of the Bonner sphere by using boric acid water solution as a moderator. Its response-function peak is narrower than that for a polyethylene moderator, so an improvement in resolution is expected. The resolutions for the polyethylene moderator and the boric acid water solution moderator were compared by simulation calculation. The influence of uncertainty in the Bonner sphere configuration on spectrum estimation was also simulated.

  18. Implementation of a complex of measures to fulfill the planetary protection requirements of the ExoMars-2016 mission

    Science.gov (United States)

    Khamidullina, Natalia; Novikova, Nataliya; Deshevaya, Elena; Orlov, Oleg; Guridov, Alexander; Zakharenko, Dmitry; Zaytseva, Olga

    2016-07-01

    The major purpose of the planetary protection program in the ExoMars-2016 mission is to forestall Mars contamination by terrestrial microorganisms. Since the Martian descent module is not intended for biological experiments, the ExoMars-2016 mission falls under COSPAR category IVa. Within the joint project co-sponsored by ESA and Roscosmos, the European side holds full responsibility for ensuring the prescribed level of SC microbiological purity, while the Russian side is charged with ensuring that the launch services provided at the Baikonur technical complex comply with the planetary protection requirements, specifically the prevention of SC recontamination. To this end, a complex of measures was executed to keep microbial contamination of the cosmodrome facilities at the prescribed level, which included: - regular decontamination of clean rooms using an effective disinfectant and impulse ultraviolet radiation, creating favorable conditions for reliable functioning of the ESA clean tent; - replacement of airline filters in the Thermal Conditioning Unit (TCU) air duct for SC conditioning with pure air. The results of microbiological tests performed in the period 2015-2016 lead to the conclusion that the Baikonur clean rooms (ISO class 8), TCU air ducts and Air Thermal Control System (ATCS) at the launch site are ready for the launch campaign and that the Russian side fulfilled the planetary protection requirements of the ExoMars-2016 mission.

  19. Comparison of 85Kr measurements with the ADMS model (Atmospheric Dispersion Modelling System) on a complex coastal site

    Science.gov (United States)

    Leroy, C.; Maro, D.; Connan, O.; Hebert, D.; Rozet, M.

    2009-04-01

    Modelling the atmospheric dispersion of radioactive plumes is a major issue for nuclear safety institutes in predicting and estimating the radiological consequences to the population. The French Institute for Radiological Protection and Nuclear Safety (IRSN) uses Gaussian plume models, which are particularly well adapted to accidental situations because of their short computation times. Due to the lack of experimental data, the reliability of these models is poorly documented and not well understood for elevated sources in the near field and, more particularly, in complex areas (topography, change of roughness). In order to improve the knowledge of dispersion mechanisms in such conditions, the IRSN ran a series of experimental campaigns between 1999 and 2002 in the vicinity of the La Hague nuclear reprocessing plant (AREVA NC - France). The La Hague peninsula is very narrow and the plant is located 2 km from the coastline, at 150 m above sea level. During the experiments, krypton-85 (85Kr), a radionuclide, was used as a non-reactive tracer of the plumes released by the 100 m high stack. In this work, the Atmospheric Transfer Coefficients (ATC) obtained from 85Kr measurements at La Hague are compared with the computations of the "next generation" Gaussian model ADMS (Atmospheric Dispersion Modelling System) performed with its "complex and coastal effects" modules.

  20. Family impact of assistive technology scale: development of a measurement scale for parents of children with complex communication needs.

    Science.gov (United States)

    Delarosa, Elizabeth; Horner, Stephanie; Eisenberg, Casey; Ball, Laura; Renzoni, Anne Marie; Ryan, Stephen E

    2012-09-01

    Young people use augmentative and alternative communication (AAC) systems to meet their everyday communication needs. However, the successful integration of an AAC system into a child's life requires strong commitment and continuous support from parents and other family members. This article describes the development and evaluation of the Family Impact of Assistive Technology Scale for AAC Systems - a parent-report questionnaire intended to detect the impact of AAC systems on the lives of children with complex communication needs and their families. The study involved 179 parents and clinical experts to test the content and face validities of the questionnaire, demonstrate its internal reliability and stability over time, and estimate its convergent construct validity when compared to a standardized measure of family impact.

  1. An Image Pattern Tracking Algorithm for Time-resolved Measurement of Mini- and Micro-scale Motion of Complex Object

    Directory of Open Access Journals (Sweden)

    John M. Seiner

    2009-03-01

    Full Text Available An image pattern tracking algorithm is described in this paper for time-resolved measurements of mini- and micro-scale movements of complex objects. This algorithm works with a high-speed digital imaging system, which records thousands of successive image frames in a short time period. The image pattern of the observed object is tracked among successively recorded image frames with a correlation-based algorithm, so that the time histories of the position and displacement of the investigated object in the camera focus plane are determined with high accuracy. The speed, acceleration and harmonic content of the investigated motion are obtained by post processing the position and displacement time histories. The described image pattern tracking algorithm is tested with synthetic image patterns and verified with tests on live insects.
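
    The core of such a tracker is a cross-correlation of a template against each recorded frame, with the correlation peak giving the pattern position per frame. A minimal sketch using mean-subtracted correlation (the authors' exact normalization is not specified here):

        import numpy as np
        from scipy.signal import correlate2d

        def track_pattern(frame, template):
            """Return the (row, col) position of a template located by its correlation peak."""
            corr = correlate2d(frame - frame.mean(), template - template.mean(), mode="valid")
            return np.unravel_index(np.argmax(corr), corr.shape)

        # Synthetic test: embed the pattern in a frame and recover its position.
        rng = np.random.default_rng(3)
        template = rng.random((16, 16))
        frame = rng.random((128, 128))
        frame[40:56, 70:86] = template  # pattern placed at (40, 70)
        print(track_pattern(frame, template))  # -> (40, 70)

    Repeating this over successive frames yields the position time history from which the speed, acceleration, and harmonic content described above are derived.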

  2. Cutaneous noradrenaline measured by microdialysis in complex regional pain syndrome during whole-body cooling and heating

    DEFF Research Database (Denmark)

    Terkelsen, Astrid Juhl; Gierthmühlen, Janne; Petersen, Lars J.

    2013-01-01

    Complex regional pain syndrome (CRPS) is characterised by autonomic, sensory, and motor disturbances. The underlying mechanisms of the autonomic changes in CRPS are unknown. However, it has been postulated that sympathetic inhibition in the acute phase with locally reduced levels of noradrenaline [...] and in healthy volunteers. Seven patients and nine controls completed whole-body cooling (sympathetic activation) and heating (sympathetic inhibition) induced by a whole-body thermal suit with simultaneous measurement of the skin temperature, skin blood flow, and release of dermal noradrenaline. CRPS pain [...] noradrenaline, vasoconstriction, and reduction in skin temperature. The main findings were that the noradrenaline response did not differ between patients and controls or between the CRPS hand and the contralateral unaffected hand, suggesting that the evoked noradrenaline release from the cutaneous sympathetic [...]

  3. Study of the liquid vapor equilibrium in the bromine-hydrobromic acid-water system

    Science.gov (United States)

    Benizri, R.; Lessart, P.; Courvoisier, P.

    1984-01-01

    A glass ebullioscope was built, and liquid-vapor equilibria relative to the Br2-HBr-H2O system were studied at atmospheric pressure in the concentration range of interest for evaluation of the Mark 13 cycle. Measurements were performed for the bromine-azeotrope (HBr-H2O) pseudo-binary system and for the ternary system at temperatures lower than 125 C and in the bromine concentration range up to 13% wt.

  4. T-complex measures in bilingual Spanish-English and Turkish-German children and monolingual peers

    Science.gov (United States)

    Rinker, Tanja; Shafer, Valerie L.; Kiefer, Markus; Vidal, Nancy; Yu, Yan H.

    2017-01-01

    Background Lateral temporal neural measures (Na and T-complex Ta and Tb) of the auditory evoked potential (AEP) index maturation of auditory/speech processing. These measures are also sensitive to language experience in adults. This paper examined neural responses to a vowel sound at temporal electrodes in four- to five-year-old Spanish-English bilinguals and English monolinguals and in five- to six-year-old Turkish-German bilinguals and German monolinguals. The goal was to determine whether obligatory AEPs at temporal electrode sites were modulated by language experience. Language experience was defined in terms of monolingual versus bilingual status as well as the amount and quality of the bilingual language experience. Method AEPs were recorded at left and right temporal electrode sites to a 250-ms vowel [Ɛ] from 20 monolingual (American)-English and 18 Spanish-English children from New York City, and from 11 Turkish-German and 13 monolingual German children from Ulm, Germany. Language background information and standardized verbal and non-verbal test scores were obtained for the children. Results The results revealed differences in temporal AEPs (Na and Ta of the T-complex) between monolingual and bilingual children. Specifically, bilingual children showed smaller and/or later peak amplitudes than the monolingual groups. Ta-amplitude distinguished monolingual and bilingual children best at right electrode sites for both the German and American groups. Amount of experience and type of experience with the target language (English and German) influenced processing. Conclusions The finding of reduced amplitudes at the Ta latency for bilingual compared to monolingual children indicates that language specific experience, and not simply maturational factors, influences development of the neural processes underlying the Ta AEP, and suggests that lateral temporal cortex has an important role in language-specific speech perception development. PMID:28267801

  5. Experimental measurement and theoretical assessment of fast lanthanide electronic relaxation in solution with four series of isostructural complexes.

    Science.gov (United States)

    Funk, Alexander M; Fries, Pascal H; Harvey, Peter; Kenwright, Alan M; Parker, David

    2013-02-07

    The rates of longitudinal relaxation for ligand nuclei in four isostructural series of lanthanide(III) complexes have been measured by solution state NMR at 295 K at five magnetic fields in the range 4.7-16.5 T. The electronic relaxation time T(1e) is a function of both the lanthanide ion and the local ligand field. It needs to be considered when relaxation probes for magnetic resonance applications are devised because it affects the nuclear relaxation, especially over the field range 0.5 to 4.7 T. Analysis of the data, based on Bloch-Redfield-Wangsness theory describing the paramagnetic enhancement of the nuclear relaxation rate has allowed reliable estimates of electronic relaxation times, T(1e), to be obtained using global minimization methods. Values were found in the range 0.10-0.63 ps, consistent with fluctuations in the transient ligand field induced by solvent collision. A refined theoretical model for lanthanide electronic relaxation beyond the Redfield approximation is introduced, which accounts for the magnitude of the ligand field coefficients of order 2, 4, and 6 and their relative contributions to the rate 1/T(1e). Despite the considerable variation of these contributions with the nature of the lanthanide ion and its fluctuating ligand field, the theory explains the modest change of measured T(1e) values and their remarkable statistical ordering across the lanthanide series. Both experiment and theory indicate that complexes of terbium and dysprosium should most efficiently promote paramagnetic enhancement of the rate of nuclear relaxation.

  6. Acid Water Neutralization Using Microbial Fuel Cells: An Alternative for Acid Mine Drainage Treatment

    Directory of Open Access Journals (Sweden)

    Eduardo Leiva

    2016-11-01

    Full Text Available Acid mine drainage (AMD) is a complex environmental problem, which has adverse effects on surface and ground waters due to low pH, high toxic metals, and dissolved salts. A new bioremediation approach based on microbial fuel cells (MFCs) can be a sustainable alternative for AMD treatment. We studied the potential of MFCs for acidic synthetic water treatment through pH neutralization in batch-mode and continuous-flow operation. We observed a marked pH increase, from ~3.7 to ~7.9 under batch conditions and to ~5.8 under continuous-flow operation. Likewise, batch reactors (non-MFC) inoculated with different MFC-enriched biofilms showed a very similar pH increase, suggesting that the neutralization observed for batch operation was due to a synergistic influence of these communities. These preliminary results support the idea of using MFC technologies for AMD remediation, which could help to reduce the costs associated with conventional technologies. Advances in this configuration could even be extrapolated to the recovery of heavy metals by precipitation or adsorption processes due to the acid neutralization.

  7. Experimental approach for the uncertainty assessment of 3D complex geometry dimensional measurements using computed tomography at the mm and sub-mm scales

    DEFF Research Database (Denmark)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems [...] techniques, particularly when measuring miniaturized components with complex 3D geometries, and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI...

  8. Remote sensing and in situ measurements of methane and ammonia emissions from a megacity dairy complex: Chino, CA.

    Science.gov (United States)

    Leifer, Ira; Melton, Christopher; Tratt, David M; Buckland, Kerry N; Clarisse, Lieven; Coheur, Pierre; Frash, Jason; Gupta, Manish; Johnson, Patrick D; Leen, J Brian; Van Damme, Martin; Whitburn, Simon; Yurganov, Leonid

    2017-02-01

    Methane (CH4) and ammonia (NH3) directly and indirectly affect the atmospheric radiative balance with the latter leading to aerosol generation. Both have important spectral features in the Thermal InfraRed (TIR) that can be studied by remote sensing, with NH3 allowing discrimination of husbandry from other CH4 sources. Airborne hyperspectral imagery was collected for the Chino Dairy Complex in the Los Angeles Basin as well as in situ CH4, carbon dioxide (CO2) and NH3 data. TIR data showed good spatial agreement with in situ measurements and showed significant emissions heterogeneity between dairies. Airborne remote sensing mapped plume transport for ∼20 km downwind, documenting topographic effects on plume advection. Repeated multiple gas in situ measurements showed that emissions were persistent on half-year timescales. Inversion of one dairy plume found annual emissions of 4.1 × 10(5) kg CH4, 2.2 × 10(5) kg NH3, and 2.3 × 10(7) kg CO2, suggesting 2300, 4000, and 2100 head of cattle, respectively, and Chino Dairy Complex emissions of 42 Gg CH4 and 8.4 Gg NH3 implying ∼200k cows, ∼30% more than Peischl et al. (2013) estimated for June 2010. Far-field data showed chemical conversion and/or deposition of Chino NH3 occurs within the confines of the Los Angeles Basin on a four to six h timescale, faster than most published rates, and likely from higher Los Angeles oxidant loads. Satellite observations from 2011 to 2014 confirmed that observed in situ transport patterns were representative and suggests much of the Chino Dairy Complex emissions are driven towards eastern Orange County, with a lesser amount transported to Palm Springs, CA. Given interest in mitigating husbandry health impacts from air pollution emissions, this study highlights how satellite observations can be leveraged to understand exposure and how multiple gas in situ emissions studies can inform on best practices given that emissions reduction of one gas could increase those of

  9. Influence of polymer molecular weight and concentration on coexistence curve of isobutyric acid + water.

    Science.gov (United States)

    Reddy, P Madhusudhana; Venkatesu, P; Bohidar, H B

    2011-10-27

    We report the influence of variation of molecular weights (MWs = 2, 4, 6, and 9 × 10(5) g mol(-1)) and concentration (C) of a long-chain polymer (polyethylene oxide, PEO) on the upper critical solution temperature (UCST) of isobutyric acid (I) + water (W), using density (ρ) measurements as a function of temperature. The ρ values in each coexisting phase of IW have been measured at three different PEO concentrations (C = 0.395, 0.796, and 1.605 mg/cm(3)) at the near-critical composition of IW, at temperatures below the system's upper critical point, for each molecular weight (MW) of PEO. Further, to ascertain the PEO behavior in IW we have measured the polydispersity values for both coexisting liquid phases by using dynamic light scattering (DLS). The data show that the polymer is significantly affected in the critical region of IW, and the various MWs and concentrations of PEO significantly modulate the critical exponent (β), the critical temperature (T(c)), and the critical composition (ϕ(c)), which describe the shape of the coexistence curve. The values of β and T(c) increase with increasing PEO MW and concentration. The ϕ(c) values decrease slightly with increasing C in the IW mixture, although the rate of decrease is insignificant. Our experimental results show that most of the polymer chain entangles in the water-rich phase, where the polymer monomers interact strongly with neighboring solvent particles as well as through intra-chain interactions between monomers.
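
    The shape of the coexistence curve near the critical point is conventionally fitted as Δρ = B(T_c − T)^β, which is presumably how the β and T(c) values above were extracted. A minimal fitting sketch on synthetic data:

        import numpy as np
        from scipy.optimize import curve_fit

        def order_parameter(T, B, Tc, beta):
            return B * (Tc - T) ** beta

        # Synthetic density-difference data below an assumed Tc of 299.5 K.
        T = np.linspace(295.0, 299.0, 12)
        rho_diff = 0.05 * (299.5 - T) ** 0.326
        rho_diff += np.random.default_rng(4).normal(0.0, 1e-4, T.size)

        popt, _ = curve_fit(order_parameter, T, rho_diff, p0=(0.05, 299.6, 0.33),
                            bounds=([0.0, 299.05, 0.1], [1.0, 301.0, 1.0]))
        B, Tc, beta = popt
        print(f"Tc = {Tc:.2f} K, beta = {beta:.3f}")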

  10. The importance of chemical buffering for pelagic and benthic colonization in acidic waters

    Energy Technology Data Exchange (ETDEWEB)

    Nixdorf, B., E-mail: b.nixdorf@t-online.de; Lessmann, D. [Brandenburg University of Technology at Cottbus, Chair of Water Conservation, Faculty of Environmental Sciences (Germany); Steinberg, C. E. W. [Leibniz-Institute of Freshwater Ecology and Inland Fisheries (Germany)

    2003-01-15

    In poorly buffered areas acidification may occur for two reasons: through atmospheric deposition of acidifying substances and - in mining districts - through pyrite weathering. These different sources of acidity lead to clearly distinct geochemistry in lakes and rivers. In general, the geochemistry is the major determinant of the planktonic composition of the acidified water bodies, whereas the nutrient status mainly determines the level of biomass. A number of acidic mining lakes in Eastern Germany have to be neutralized to meet the water quality goals of the European Union Directives and to overcome the ecological degradation. This neutralization process is, limnologically, a short-term maturation of lakes, in which biological succession must overcome two different geochemical buffer systems. First, the iron buffer system characterizes an initial state, when colonization starts: there is low organismic diversity and productivity, and clear net heterotrophy in most cases. Organic carbon that serves as fuel for the food web derives mainly from allochthonous sources. In the second, less acidic state aluminum is the buffer. This state is found only exceptionally among the hard water mining lakes, often as a result of deposition of acidifying substances onto soft water systems. Colonization in aluminum-buffered lakes is more complex and controlled by the sensitivity of the organisms towards both protons and inorganic reactive aluminum species. In soft-water systems, calcium may act as an antidote against acid and aluminum; however, this function is lost in hard water post-mining lakes of similar proton concentrations. Nutrient limitations may occur, but these do not usually control qualitative and quantitative plankton composition. In these lakes, total pelagic biomass is controlled by the bioavailability of nutrients, particularly phosphorus.

  11. Measurements of the Intensity and Polarization of the Anomalous Microwave Emission in the Perseus molecular complex with QUIJOTE

    CERN Document Server

    Génova-Santos, R; Rebolo, R; Peláez-Santos, A; López-Caraballo, C H; Harper, S; Watson, R A; Ashdown, M; Barreiro, R B; Casaponsa, B; Dickinson, C; Diego, J M; Fernández-Cobos, R; Grainge, K J B; Herranz, D; Hoyland, R; Lasenby, A; López-Caniego, M; Martínez-González, E; McCulloch, M; Melhuish, S; Piccirillo, L; Perrott, Y C; Poidevin, F; Razavi-Ghods, N; Scott, P F; Titterington, D; Tramonte, D; Vielva, P; Vignaga, R

    2015-01-01

    Anomalous microwave emission (AME) has been observed in numerous sky regions, in the frequency range ~10-60 GHz. One of the most scrutinized regions is G159.6-18.5, located within the Perseus molecular complex. In this paper we present further observations of this region (194 hours in total over ~250 deg^2), both in intensity and in polarization. They span four frequency channels between 10 and 20 GHz, and were gathered with QUIJOTE, a new CMB experiment with the goal of measuring the polarization of the CMB and Galactic foregrounds. When combined with other publicly-available intensity data, we achieve the most precise spectrum of the AME measured to date, with 13 independent data points being dominated by this emission. The four QUIJOTE data points provide the first independent confirmation of the downturn of the AME spectrum at low frequencies, initially unveiled by the COSMOSOMAS experiment in this region. We accomplish an accurate fit of these data using models based on electric dipole emission from spin...

  12. Building a measurement framework of burden of treatment in complex patients with chronic conditions: a qualitative study

    Directory of Open Access Journals (Sweden)

    Eton DT

    2012-08-01

    Full Text Available David T Eton,1 Djenane Ramalho de Oliveira,2,3 Jason S Egginton,1 Jennifer L Ridgeway,1 Laura Odell,4 Carl R May,5 Victor M Montori1,61Division of Health Care Policy and Research, Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA; 2College of Pharmacy, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil; 3Medication Therapy Management Program, Fairview Pharmacy Services LLC, Minneapolis, MN, USA; 4Pharmacy Services, Mayo Clinic, Rochester, MN, USA; 5Faculty of Health Sciences, University of Southampton, Southampton, UK; 6Knowledge and Evaluation Research Unit, Mayo Clinic, Rochester, MN, USABackground: Burden of treatment refers to the workload of health care as well as its impact on patient functioning and well-being. We set out to build a conceptual framework of issues descriptive of burden of treatment from the perspective of the complex patient, as a first step in the development of a new patient-reported measure.Methods: We conducted semistructured interviews with patients seeking medication therapy management services at a large, academic medical center. All patients had a complex regimen of self-care (including polypharmacy, and were coping with one or more chronic health conditions. We used framework analysis to identify and code themes and subthemes. A conceptual framework of burden of treatment was outlined from emergent themes and subthemes.Results: Thirty-two patients (20 female, 12 male, age 26–85 years were interviewed. Three broad themes of burden of treatment emerged including: the work patients must do to care for their health; problem-focused strategies and tools to facilitate the work of self-care; and factors that exacerbate the burden felt. The latter theme encompasses six subthemes including challenges with taking medication, emotional problems with others, role and activity limitations, financial challenges, confusion about medical information, and health care delivery obstacles

  13. Very focused expulsion of pore fluid along the western Nankai accretionary complex detected by closely-spaced heat flow measurements

    Science.gov (United States)

    Kinoshita, M.; Goto, S.; Gulick, S. P.; Mikada, H.

    2002-12-01

    During the KR02-10 cruise onboard R/V KAIREI, JAMSTEC, intensive heat flow measurements were carried out across the western and middle Nankai Trough areas, in order to reveal the thermal and hydrological processes across the frontal thrust and the Large Thrust Slice Zone (LTSZ). Previous heat flow data suggest that the Nankai accretionary complex is basically thermal-conduction dominant, except for strongly channelized flow along the faults. Heat flow was measured using two types of geothermal probes: a 4.5 m geothermal probe lowered from the ship, and two 60 cm probes manipulated by the ROV KAIKO. Probe positions were controlled using SSBL acoustic navigation with an accuracy of 30-70 m. We obtained 19 heat flow data across the second frontal thrust off Muroto. Heat flow is highest at the base of the second frontal thrust. Maximum heat flow reaches up to 280 mW/m2, and its width is probably less than 50 m. We observed no indication of seepage activity at this site. Upslope we found a cold seep site, distributed along a topographic contour of 4620 m. Although we measured heat flow in the middle of the seep site, no heat flow anomaly was detected. We obtained 12 heat flow data across the lower part of the LTSZ off Muroto. Two local heat flow anomalies of up to 250 mW/m2 were detected, both of which are related to cold seep activities. The amplitude of these heat flow anomalies is similar to that observed in the frontal thrust area, although the basal heat flow here, 60-80 mW/m2, is much lower than in the frontal thrust area. The width of the anomaly also seems similar to the frontal thrust area. These data indicate that fluid flow is restricted to within the fault or the hanging wall, and that otherwise the thermal regime in the accretionary complex is conduction dominant. On the other hand, the difference in heat flow anomaly locations between the two areas may provide insights into the maturity of cold seep activity and of the thrust as a fluid conduit.
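
    Conductive heat flow from probe data is Fourier's law applied to the measured gradient, q = k dT/dz; anomalies then appear as excess q over the conductive background. A worked illustration with hypothetical probe values (not the cruise data):

        # Fourier's law with hypothetical values.
        k = 0.9           # sediment thermal conductivity, W m^-1 K^-1
        dT = 0.70         # temperature rise across the probe penetration, K
        dz = 4.5          # penetration depth of the 4.5 m probe, m
        q = k * dT / dz   # conductive heat flow, W m^-2
        print(f"heat flow: {q * 1000:.0f} mW/m^2")  # 140 mW/m^2 here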

  14. Definition of (so MIScalled) ``Complexity" as UTTER-SIMPLICITY!!!(sMciUS!!!) Versus Deviations From( sMciUS!!!): ``COMPLICATEDNESS" Definition(s) and MEASURE(S)!!!

    Science.gov (United States)

    Young, F.; Siegel, E.

    2010-03-01

    (so MIScalled) ``complexity''(sMc) associated BOTH SCALE- INVARIANCE Symmetry-RESTORING(S-I S-R) [vs. S-I S-B!!!], AND X (w) P(w ) 1/w^(1.000...) ``pink''/Zipf/Archimedes-HYPERBOLICITY INEVITABILITY CONNECTION is by simple-calculus SISR's logarithm- function derivative: (d/dw)ln(w)=1/w=1/w^(1.000...), hence: (d/dw) [SISR](w)=1/w=1/w^(1.000...)=(via Noether-theorem relating continuous-(SISR)-symmetries to conservation-laws)=(d/dw)[4-DIV (J(INTER-SCALE)=0](w)=1/w =1/w^(1.000...). Hence sMc is information inter-scale conservation [as Anderson-Mandell, Fractals of Brain; Fractals of Mind(1994)-experimental- psychology!!!], i.e. sMciUS!!!, VERSUS ``COMPLICATEDNESS", is sMcciUS!!!: EITHER: PLUS (Additive: Murphy's-law absence) OR TIMES (Multiplicative: Murphy's-law dominance) various disparate system-specificity ``COMPLICATIONS". ``COMPLICATEDNESS" MEASURES: DEVIATIONS FROM sMciUS!!!: EITHER [S-I S-B] MINUS [S- I S-R] AND/OR [``red"/Pareto X(w) P(w) 1/w^(#=/=1.000...)] MINUS [X(w) P(w) 1/w^(1.000...) ``pink"/Zipf/Archimedes-HYPERBOLICITY INEVITABILITY] = [1/w^(#=/=1.000...)] MINUS [1/w^(1.000...)]; almost but not exactly a fractals Hurst-exponent-like [# - 1.000...]!!!

  15. Rotational study of the NH{sub 3}–CO complex: Millimeter-wave measurements and ab initio calculations

    Energy Technology Data Exchange (ETDEWEB)

    Surin, L. A., E-mail: surin@ph1.uni-koeln.de [I. Physikalisches Institut, University of Cologne, Zülpicher Str. 77, 50937 Cologne (Germany); Institute of Spectroscopy, Russian Academy of Sciences, Fizicheskaya Str. 5, 142190 Troitsk, Moscow (Russian Federation); Potapov, A.; Schlemmer, S. [I. Physikalisches Institut, University of Cologne, Zülpicher Str. 77, 50937 Cologne (Germany); Dolgov, A. A.; Tarabukin, I. V.; Panfilov, V. A. [Institute of Spectroscopy, Russian Academy of Sciences, Fizicheskaya Str. 5, 142190 Troitsk, Moscow (Russian Federation); Kalugina, Y. N. [Department of Optics and Spectroscopy, Tomsk State University, 36 Lenin av., 634050 Tomsk (Russian Federation); Faure, A. [Université de Grenoble Alpes, IPAG, F-38000 Grenoble (France); CNRS, IPAG, F-38000 Grenoble (France); Avoird, A. van der, E-mail: A.vanderAvoird@theochem.ru.nl [Theoretical Chemistry, Institute for Molecules and Materials, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2015-03-21

    The rotational spectrum of the van der Waals complex NH{sub 3}–CO has been measured with the intracavity OROTRON jet spectrometer in the frequency range of 112–139 GHz. Newly observed and assigned transitions belong to the K = 0–0, K = 1–1, K = 1–0, and K = 2–1 subbands correlating with the rotationless (j{sub k}){sub NH3} = 0{sub 0} ground state of free ortho-NH{sub 3} and the K = 0–1 and K = 2–1 subbands correlating with the (j{sub k}){sub NH3} = 1{sub 1} ground state of free para-NH{sub 3}. The (approximate) quantum number K is the projection of the total angular momentum J on the intermolecular axis. Some of these transitions are continuations to higher J values of transition series observed previously [C. Xia et al., Mol. Phys. 99, 643 (2001)], the other transitions constitute newly detected subbands. The new data were analyzed together with the known millimeter-wave and microwave transitions in order to determine the molecular parameters of the ortho-NH{sub 3}–CO and para-NH{sub 3}–CO complexes. Accompanying ab initio calculations of the intermolecular potential energy surface (PES) of NH{sub 3}–CO has been carried out at the explicitly correlated coupled cluster level of theory with single, double, and perturbative triple excitations and an augmented correlation-consistent triple zeta basis set. The global minimum of the five-dimensional PES corresponds to an approximately T-shaped structure with the N atom closest to the CO subunit and binding energy D{sub e} = 359.21 cm{sup −1}. The bound rovibrational levels of the NH{sub 3}–CO complex were calculated for total angular momentum J = 0–6 on this intermolecular potential surface and compared with the experimental results. The calculated dissociation energies D{sub 0} are 210.43 and 218.66 cm{sup −1} for ortho-NH{sub 3}–CO and para-NH{sub 3}–CO, respectively.

  16. Solution structure investigation of Ru(II) complex ion pairs: quantitative NOE measurements and determination of average interionic distances.

    Science.gov (United States)

    Zuccaccia, C; Bellachioma, G; Cardaci, G; Macchioni, A

    2001-11-07

    The structure of the Ru(II) ion pairs trans-[Ru(COMe)[(pz(2))CH(2)](CO)(PMe(3))(2)]X (X(-) = BPh(4)(-), 1a; BPh(3)Me(-), 1b; BPh(3)(n-Bu)(-), 1c; BPh(3)(n-Hex)(-), 1d; B(3,5-(CF(3))(2)(C(6)H(3)))(4)(-), 1e; PF(6)(-), 1f; and BF(4)(-), 1g; pz = pyrazol-1-yl-ring) was investigated in solution from both a qualitative (chloroform-d, methylene chloride-d(2), nitromethane-d(3)) and quantitative (methylene chloride-d(2)) point of view by performing 1D- and 2D-NOE NMR experiments. In particular, the relative anion-cation localization (interionic structure) was qualitatively determined by (1)H-NOESY and (19)F, (1)H-HOESY (heteronuclear Overhauser effect spectroscopy) NMR experiments. The counteranion locates close to the peripheral protons of the bispyrazolyl ligand independent of its nature and that of the solvent. In complexes 1c and 1d, bearing unsymmetrical counteranions, the aliphatic chain points away from the metal center, as indicated by the absence of NOE between the terminal Me group and any cationic protons. An estimation of the average interionic distances in solution was obtained by quantification of the NOE build-up versus the mixing time, under the assumption that the interionic and intramolecular correlation times (tau(c)) are the same. This assumption was checked by experimental measurements of tau(c) from both the dipolar contribution to the carbon-13 longitudinal relaxation time T(1)(DD) and the comparison of the intramolecular and interionic cross-relaxation rate constant (sigma) dependence on the temperature. Both methodologies indicate that anion and cation have comparable tau(c) values. The determined correlation time values were compared with those obtained for the neutral trans-[Ru(COMe)[(pz(2))BH(2)](CO)(PMe(3))(2)] complex (2), isosteric with the cation of 1. They were significantly shorter (approximately 3.8 times), indicating that the main contribution to dipolar relaxation processes comes from the overall ion pair rotation. As a
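
    With a shared correlation time, the cross-relaxation rate scales as sigma ~ r^(-6), so an unknown distance follows from a known reference pair by a ratio of NOE build-up slopes. A worked illustration with hypothetical numbers:

        # Distance from relative NOE build-up slopes; values are hypothetical.
        sigma_ref = 0.040          # s^-1, build-up slope for a pair at known distance
        r_ref = 2.5                # angstrom, known intramolecular distance
        sigma_interionic = 0.0025  # s^-1, measured interionic build-up slope

        r = r_ref * (sigma_ref / sigma_interionic) ** (1.0 / 6.0)
        print(f"estimated interionic distance: {r:.2f} angstrom")  # ~3.97 here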

  17. The Repeatability Assessment of Three-Dimensional Capsule-Intraocular Lens Complex Measurements by Means of High-Speed Swept-Source Optical Coherence Tomography

    Science.gov (United States)

    Chang, Pingjun; Li, Jin; Savini, Giacomo; Huang, Jinhai; Huang, Shenghai; Zhao, Yinying; Liao, Na; Lin, Lei; Yu, Xiaoyu; Zhao, Yun-e

    2015-01-01

    Purpose To rebuild the three-dimensional (3-D) model of the anterior segment by high-speed swept-source optical coherence tomography (SSOCT) and evaluate the repeatability of measurement for the parameters of the capsule-intraocular lens (C-IOL) complex. Methods Twenty-two pseudophakic eyes from 22 patients were enrolled. Three continuous SSOCT measurements were performed in all eyes and the tomograms obtained were used for 3-D reconstruction. The output data were used to evaluate the measurement repeatability. The parameters included postoperative aqueous depth (PAD), the area and diameter of the anterior capsule opening (Area and D), IOL tilt (IOL-T), and the horizontal, vertical, and space decentration of the IOL, the anterior capsule opening, and the IOL-anterior capsule opening. Results PAD, IOL-T, Area, D, and all decentration measurements showed high repeatability. Repeated-measures analysis showed no statistically significant difference among the three continuous measurements (all P > .05). Pearson correlation analysis showed high correlation between each pair of them (all r > 0.90, P < 0.001). ICCs were all more than 0.9 for all parameters. The 95% LoAs of all parameters were narrow across the three measurements, confirming high repeatability. Conclusion SSOCT can serve as a new method for the 3-D measurement of the C-IOL complex after cataract surgery. This method presented high repeatability in measuring the parameters of the C-IOL complex. PMID:26600254

  18. Charge carrier effective mass and concentration derived from combination of Seebeck coefficient and 125Te NMR measurements in complex tellurides

    Science.gov (United States)

    Levin, E. M.

    2016-06-01

    Thermoelectric materials utilize the Seebeck effect to convert heat to electrical energy. The Seebeck coefficient (thermopower), S, depends on the free (mobile) carrier concentration, n, and effective mass, m*, as S ~ m*/n^(2/3). The carrier concentration in tellurides can be derived from 125Te nuclear magnetic resonance (NMR) spin-lattice relaxation measurements. The NMR spin-lattice relaxation rate, 1/T1, depends on both n and m* as 1/T1 ~ (m*)^(3/2) n (within classical Maxwell-Boltzmann statistics) or as 1/T1 ~ (m*)^2 n^(2/3) (within quantum Fermi-Dirac statistics), which challenges the correct determination of the carrier concentration in some materials by NMR. Here it is shown that the combination of the Seebeck coefficient and 125Te NMR spin-lattice relaxation measurements in complex tellurides provides a unique opportunity to derive the carrier effective mass and then to calculate the carrier concentration. This approach was used to study Ag(x)Sb(x)Ge(50-2x)Te(50), well-known GeTe-based high-efficiency tellurium-antimony-germanium-silver thermoelectric materials, where the replacement of Ge by [Ag+Sb] results in significant enhancement of the Seebeck coefficient. Values of both m* and n derived using this combination show that the enhancement of thermopower can be attributed primarily to an increase of the carrier effective mass and partially to a decrease of the carrier concentration when the [Ag+Sb] content increases.
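
    Taken together, the two scalings let the measured pair (S, 1/T1) be inverted for (m*, n) once the material-specific prefactors are fixed: multiplying S by 1/T1 cancels n in the Fermi-Dirac form. A minimal sketch with hypothetical prefactors and readings, in arbitrary units:

        # Assume S = a * m / n**(2/3) and R1 = b * m**2 * n**(2/3) (Fermi-Dirac regime);
        # a, b and the measured values below are hypothetical.
        a, b = 1.0e-4, 2.0e-3
        S, R1 = 150e-6, 0.8

        m_eff = (S * R1 / (a * b)) ** (1.0 / 3.0)  # since S*R1 = a*b*m**3
        n = (a * m_eff / S) ** 1.5                 # back-substituted into the Seebeck relation
        print(f"m_eff = {m_eff:.3g}, n = {n:.3g}  (arbitrary units)")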

  19. Measurement issues associated with quantitative molecular biology analysis of complex food matrices for the detection of food fraud.

    Science.gov (United States)

    Burns, Malcolm; Wiseman, Gordon; Knight, Angus; Bramley, Peter; Foster, Lucy; Rollinson, Sophie; Damant, Andrew; Primrose, Sandy

    2016-01-07

    Following a report on a significant amount of horse DNA being detected in a beef burger product on sale to the public at a UK supermarket in early 2013, the Elliott report was published in 2014 and contained a list of recommendations for helping ensure food integrity. One of the recommendations included improving laboratory testing capacity and capability to ensure a harmonised approach for testing for food authenticity. Molecular biologists have developed exquisitely sensitive methods based on the polymerase chain reaction (PCR) or mass spectrometry for detecting the presence of particular nucleic acid or peptide/protein sequences. These methods have been shown to be specific and sensitive in terms of lower limits of applicability, but they are largely qualitative in nature. Historically, the conversion of these qualitative techniques into reliable quantitative methods has been beset with problems even when used on relatively simple sample matrices. When the methods are applied to complex sample matrices, as found in many foods, the problems are magnified resulting in a high measurement uncertainty associated with the result which may mean that the assay is not fit for purpose. However, recent advances in the technology and the understanding of molecular biology approaches have further given rise to the re-assessment of these methods for their quantitative potential. This review focuses on important issues for consideration when validating a molecular biology assay and the various factors that can impact on the measurement uncertainty of a result associated with molecular biology approaches used in detection of food fraud, with a particular focus on quantitative PCR-based and proteomics assays.

  20. Activity of microorganisms in acid mine water. I. Influence of acid water on aerobic heterotrophs of a normal stream.

    Science.gov (United States)

    Tuttle, J H; Randles, C I; Dugan, P R

    1968-05-01

    Comparison of microbial content of acid-contaminated and nonacid-contaminated streams from the same geographical area indicated that nonacid streams contained relatively low numbers of acid-tolerant heterotrophic microorganisms. The acid-tolerant aerobes survived when acid entered the stream and actually increased in number to about 2 x 10(3) per ml until the pH approached 3.0. The organisms then represented the heterotrophic aerobic microflora of the streams comprised of a mixture of mine drainage and nonacid water. A stream which was entirely acid drainage did not have a similar microflora. Most gram-positive aerobic and anaerobic bacteria died out very rapidly in acidic water, and they comprised a very small percentage of the microbial population of the streams examined. Iron- and sulfur-oxidizing autotrophic bacteria were present wherever mine water entered a stream system. The sulfur-oxidizing bacteria predominated over iron oxidizers. Ecological data from the field were verified by laboratory experiments designed to simulate stream conditions.

  1. New insights into neck-pain-related postural control using measures of signal frequency and complexity in older adults.

    Science.gov (United States)

    Quek, June; Brauer, S G; Clark, Ross; Treleaven, Julia

    2014-04-01

    There is evidence implicating the cervical spine in postural control; however, the underlying mechanisms are unknown. The aim of this study was to explore standing postural control mechanisms in older adults with neck pain (NP) using measures of signal frequency (wavelet analysis) and complexity (entropy). This cross-sectional study compared the balance performance of twenty older adults with (age = 70.3 ± 4.0 years) and without (age = 71.4 ± 5.1 years) NP when standing on a force platform with eyes open and closed. Anterior-posterior centre-of-pressure data were processed using wavelet analysis and sample entropy. Performance-based balance was assessed using the Timed Up-and-Go (TUG) and Dynamic Gait Index (DGI). The NP group demonstrated poorer functional performance (TUG and DGI, p < 0.05) than controls. Wavelet analysis revealed that standing postural sway in the NP group was positively skewed towards lower-frequency movement (very-low [0.10-0.39 Hz] frequency content, p < 0.05), whereas sample entropy did not differ between groups (p > 0.05). Our results demonstrate that older adults with NP have poorer balance than controls. Furthermore, wavelet analysis may reveal unique insights into postural control mechanisms. Given that centre-of-pressure signal movements in the very-low and moderate frequencies are postulated to be associated with vestibular and muscular proprioceptive input, respectively, we speculate that, because the NP group demonstrated a diminished ability to recruit the muscular proprioceptive system compared with controls, they rely more on the vestibular system for postural stability.
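
    Sample entropy, the complexity measure used here, can be computed with a short routine such as the sketch below, applied to a synthetic sway-like signal; the parameters (m = 2, r = 0.2·SD) are common defaults and not necessarily the study's settings.

```python
import numpy as np

# Hedged sketch of sample entropy applied to a synthetic
# centre-of-pressure-like signal. SampEn = -ln(A/B), where B and A count
# template matches of lengths m and m+1 within tolerance r.

def sample_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches)
            d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
cop = np.cumsum(rng.normal(size=2000)) * 0.01   # random-walk-like sway
print(f"SampEn(2, 0.2*SD) = {sample_entropy(cop):.3f}")
```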

  2. Embedded Measures of Performance Validity in the Rey Complex Figure Test in a Clinical Sample of Veterans.

    Science.gov (United States)

    Sugarman, Michael A; Holcomb, Erin M; Axelrod, Bradley N; Meyers, John E; Liethen, Philip C

    2016-01-01

    The purpose of this study was to determine how well scores from the Rey Complex Figure Test (RCFT) could serve as embedded measures of performance validity in a large, heterogeneous clinical sample at an urban Veterans Affairs hospital. Participants were divided into credible performance (n = 244) and noncredible performance (n = 87) groups based on common performance validity tests administered during their respective clinical evaluations. We evaluated how well preselected RCFT scores could discriminate between the 2 groups using cut scores from single indexes as well as multivariate logistic regression prediction models. Additionally, we evaluated how well memory error patterns (MEPs) could discriminate between the 2 groups. Optimal discrimination occurred when indexes from the Copy and Recognition trials were simultaneous predictors in logistic regression models, with 91% specificity and at least 53% sensitivity. Logistic regression yielded superior discrimination compared with individual indexes and with the use of MEPs. Specific scores on the RCFT, including the Copy and Recognition trials, can serve as adequate indexes of performance validity when using both cut scores and logistic regression prediction models. We provide logistic regression equations that can be applied in similar clinical settings to assist in determining performance validity.
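
    The general form of such a model, though not the authors' fitted equations, can be sketched as a two-predictor logistic regression whose probability cut-off is chosen to hold specificity near the reported 91%; all scores below are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: two RCFT scores (Copy, Recognition) as simultaneous
# predictors of credible vs noncredible performance, with the decision
# threshold set to keep specificity >= 0.91. Data are simulated.

rng = np.random.default_rng(1)
n_cred, n_noncred = 244, 87
X = np.vstack([
    np.column_stack([rng.normal(30, 4, n_cred),    # Copy score
                     rng.normal(20, 3, n_cred)]),  # Recognition score
    np.column_stack([rng.normal(24, 6, n_noncred),
                     rng.normal(14, 4, n_noncred)]),
])
y = np.r_[np.zeros(n_cred), np.ones(n_noncred)]    # 1 = noncredible

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]

# pick the first probability cut-off whose specificity reaches 0.91
cuts = np.linspace(0, 1, 201)
spec = np.array([(p[y == 0] < c).mean() for c in cuts])
cut = cuts[np.argmax(spec >= 0.91)]
sens = (p[y == 1] >= cut).mean()
print(f"cut-off = {cut:.2f}, sensitivity = {sens:.2f}")
```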

  3. Complex role of secondary electron emissions in dust grain charging in space environments: measurements on Apollo 11 & 17 dust grains

    Science.gov (United States)

    Abbas, Mian; Tankosic, Dragana; Spann, James; Leclair, Andre C.

    Dust grains in various astrophysical environments are generally charged electrostatically by photoelectric emissions with radiation from nearby sources, by electron/ion collisions, and by secondary electron emissions. Knowledge of dust grain charges and equilibrium potentials is important for understanding a variety of physical and dynamical processes in the interstellar medium (ISM) and in heliospheric, interplanetary, planetary, and lunar environments. The high vacuum environment on the lunar surface leads to some unusual physical and dynamical phenomena involving dust grains with high adhesive characteristics, and levitation and transportation over long distances. It has been well recognized that the charging properties of individual micron/submicron size dust grains are expected to be substantially different from the corresponding values for bulk materials and theoretical models. In this paper we present experimental results on the charging of individual dust grains selected from Apollo 11 and Apollo 17 dust samples by exposing them to mono-energetic electron beams in the 10-400 eV energy range. The charging rates of positively and negatively charged particles of 0.2 to 13 µm diameters are discussed in terms of the secondary electron emission (SEE) process, which is found to be a complex charging process at electron energies as low as 10-25 eV, with strong particle size dependence. The measurements indicate substantial differences between the dust charging properties of individual small size dust grains and those of bulk materials.

  4. Expanding the scope of CE reactor to ssDNA-binding protein-ssDNA complexes as exemplified for a tool for direct measurement of dissociation kinetics of biomolecular complexes.

    Science.gov (United States)

    Takahashi, Toru; Ohtsuka, Kei-Ichirou; Tomiya, Yoriyuki; Iki, Nobuhiko; Hoshino, Hitoshi

    2009-09-01

    The CE reactor (CER), which was developed as a tool for direct measurement of the dissociation kinetics of metal complexes, was successfully applied to complexes of the Escherichia coli ssDNA-binding protein (SSB) with ssDNA. The basic concept of the CER is to use the CE separation process as a dissociation kinetic reactor for the complex and to observe the on-capillary dissociation reaction profile of the complex as a decrease in the peak height of the complex with increasing migration time. The peak height of [SSB-ssDNA] decreases as the migration time increases, because the loss of [SSB-ssDNA] through the on-capillary dissociation reaction is directly proportional to the decrease in its peak height. The dissociation degree-time profiles for the complexes are quantitatively described by analyzing a set of electropherograms with different migration times. Dissociation rate constants of [SSB-ssDNA] consisting of 20-mer, 25-mer, and 31-mer ssDNA were directly determined to be 3.99x10(-4), 4.82x10(-4), and 1.50x10(-3)/s, respectively. The CER is a concise and effective tool for the dissociation kinetic analysis of biomolecular complexes.
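
    The rate-constant extraction implied by this design reduces to a first-order decay fit: if h(t) = h0·exp(-kd·t), a linear fit of ln(h) against migration time gives kd. A minimal sketch with illustrative data points, not the paper's electropherograms:

```python
import numpy as np

# Hedged sketch: first-order decay of complex peak height with migration
# time, h(t) = h0 * exp(-kd * t); ln(h) vs t is linear with slope -kd.

t = np.array([120., 180., 240., 300., 360.])   # migration time, s
h = np.array([0.95, 0.91, 0.87, 0.84, 0.80])   # relative peak height

slope, intercept = np.polyfit(t, np.log(h), 1)
kd = -slope
print(f"kd = {kd:.2e} s^-1")   # same order of analysis as the values above
```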

  5. The development of a quantitative measure for the complexity of emergency tasks stipulated in emergency operating procedures of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Kyun; Jung, Won Dea

    2006-11-15

    Previous studies have continuously pointed out that human performance is a decisive factor affecting the safety of complicated process systems. Subsequently, as the result of extensive efforts, it has been revealed that the provision of procedures is one of the most effective countermeasures, especially if human operators have to carry out their tasks in a very stressful environment. That is, good procedures not only enhance the performance of human operators but also reduce the possibility of human error, because they stipulate the detailed tasks to be done. Ironically, it has also been emphasized that the performance of human operators can be impaired by complicated procedures, because procedures directly govern the physical as well as cognitive behavior of human operators by institutionalizing detailed actions. Therefore, it is a prerequisite to develop a systematic framework that properly evaluates the complexity of tasks described in procedures. For this reason, a measure called TACOM (Task Complexity), which can quantify the complexity of emergency tasks described in the emergency operating procedures (EOPs) of NPPs, has been developed. In this report, the technical background as well as practical steps to quantify the complexity of tasks are presented, together with a series of studies that were conducted to ensure the validity of the TACOM measure. Since the validation studies show that the TACOM measure properly quantifies the complexity of emergency tasks, it is expected that the TACOM measure will play an important role in improving the performance of human operators.

  6. Measuring and predicting reservoir heterogeneity in complex deposystems: The fluvial-deltaic Big Injun sandstone in West Virginia

    Energy Technology Data Exchange (ETDEWEB)

    Patchen, D.G.; Hohn, M.E.; Aminian, K.; Donaldson, A.; Shumaker, R.; Wilson, T.

    1993-04-01

    The purpose of this research is to develop techniques to measure and predict heterogeneities in oil reservoirs that are the products of complex deposystems. The unit chosen for study is the Lower Mississippian Big Injun sandstone, a prolific oil producer (nearly 60 fields) in West Virginia. The research has been designed and is being implemented as an integrated effort involving stratigraphy, structural geology, petrology, seismic study, petroleum engineering, modeling, and geostatistics. Sandstone bodies are being mapped within their regional depositional systems and then classified in a scheme of relative heterogeneity to determine heterogeneity across depositional systems. Facies changes are being mapped within given reservoirs, and the environments of deposition responsible for each facies are being interpreted to predict the inherent relative heterogeneity of each facies. Structural variations will be correlated both with production, where the availability of production data permits, and with variations in geologic and engineering parameters that affect production. A reliable seismic model of the Big Injun reservoirs in Granny Creek field is being developed to help interpret physical heterogeneity in that field. Pore types are being described and related to permeability, fluid flow, and diagenesis, and petrographic data are being integrated with facies and depositional environments to develop a technique for using diagenesis as a predictive tool in future reservoir development. Another objective in the Big Injun study is to determine the effect of heterogeneity on fluid flow and efficient hydrocarbon recovery in order to improve reservoir management. Graphical methods will be applied to Big Injun production data and new geostatistical methods will be developed to detect regional trends in heterogeneity.

  7. Water Accounting Plus (WA+) – a water accounting procedure for complex river basins based on satellite measurements

    Directory of Open Access Journals (Sweden)

    D. Molden

    2012-11-01

    Coping with the issue of water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A pre-requisite is a clear understanding of the basin hydrological processes, manageable and unmanageable water flows, the interaction with land use, and opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links hydrological flows to user groups of water and their benefits. The absence of a standard hydrological and water management summary is causing confusion and wrong decisions. The non-availability of water flow data is one of the underpinning reasons for not having operational water accounting systems for river basins in place. In this paper we introduce Water Accounting Plus (WA+), a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use on the water cycle is described explicitly by defining land use groups with common characteristics. Analogous to financial accounting, WA+ presents four sheets: (i) a resource base sheet, (ii) a consumption sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarize the overall water resources situation. The impact of external (e.g. climate change) and internal (e.g. infrastructure building) influences can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used for 3 out of the 4 sheets, but they are not a precondition for implementing the WA+ framework. Data from hydrological models and water allocation models can also be used as inputs to WA+.

  8. Water Accounting Plus (WA+) – a water accounting procedure for complex river basins based on satellite measurements

    Directory of Open Access Journals (Sweden)

    P. Karimi

    2013-07-01

    Coping with water scarcity and growing competition for water among different sectors requires proper water management strategies and decision processes. A pre-requisite is a clear understanding of the basin hydrological processes, manageable and unmanageable water flows, the interaction with land use, and opportunities to mitigate the negative effects and increase the benefits of water depletion on society. Currently, water professionals do not have a common framework that links depletion to user groups of water and their benefits. The absence of a standard hydrological and water management summary is causing confusion and wrong decisions. The non-availability of water flow data is one of the underpinning reasons for not having operational water accounting systems for river basins in place. In this paper, we introduce Water Accounting Plus (WA+), a new framework designed to provide explicit spatial information on water depletion and net withdrawal processes in complex river basins. The influence of land use and landscape evapotranspiration on the water cycle is described explicitly by defining land use groups with common characteristics. WA+ presents four sheets: (i) a resource base sheet, (ii) an evapotranspiration sheet, (iii) a productivity sheet, and (iv) a withdrawal sheet. Every sheet encompasses a set of indicators that summarise the overall water resources situation. The impact of external (e.g., climate change) and internal (e.g., infrastructure building) influences can be estimated by studying the changes in these WA+ indicators. Satellite measurements can be used to acquire a vast amount of the required data but are not a precondition for implementing the WA+ framework. Data from hydrological models and water allocation models can also be used as inputs to WA+.

  9. The complexities of measuring access to parks and physical activity sites in New York City: a quantitative and qualitative approach

    Directory of Open Access Journals (Sweden)

    Sohler Nancy L

    2009-06-01

    Background Proximity to parks and physical activity sites has been linked to an increase in active behaviors, and positive impacts on health outcomes such as lower rates of cardiovascular disease, diabetes, and obesity. Since populations with a low socio-economic status as well as racial and ethnic minorities tend to experience worse health outcomes in the USA, access to parks and physical activity sites may be an environmental justice issue. Geographic information systems were used to conduct quantitative and qualitative analyses of park accessibility in New York City, which included kernel density estimation, ordinary least squares (global) regression, geographically weighted (local) regression, and longitudinal case studies consisting of field work and archival research. Accessibility was measured by both density of park acreage and density of physical activity sites. Independent variables included percent non-Hispanic black, percent Hispanic, percent below poverty, percent of adults without a high school diploma, percent with limited English-speaking ability, and population density. Results The ordinary least squares linear regression found weak relationships in both the park acreage density and the physical activity site density models (adjusted R2 = 0.11 and 0.23, respectively; AIC = 7162 and 3529, respectively). Geographically weighted regression, however, suggested spatial non-stationarity in both models, indicating disparities in accessibility that vary over space with respect to the magnitude and directionality of the relationships (AIC = 2014 and -1241, respectively). The qualitative analysis supported the findings of the local regression, confirming that although there is a geographically inequitable distribution of park space and physical activity sites, it is not globally predicted by race, ethnicity, or socio-economic status. Conclusion The combination of quantitative and qualitative analyses demonstrated the complexity of the issues around

  10. Development of InP solid state detector and liquid scintillator containing metal complex for measurement of pp/7Be solar neutrinos and neutrinoless double beta decay

    Science.gov (United States)

    Fukuda, Yoshiyuki; Moriyama, Shigetaka

    2012-07-01

    A large volume solid state detector using a semi-insulating indium phosphide (InP) wafer has been developed for the measurement of pp/7Be solar neutrinos. Basic performance figures, the charge collection efficiency and the energy resolution, were measured to be 60% and 20%, respectively. In order to detect the two gammas (115 keV and 497 keV) from neutrino capture, we have designed a hybrid detector consisting of an InP detector and a liquid xenon scintillator for the IPNOS experiment, and a new InP detector with thin electrodes (Cr 50 Å, Au 50 Å) was prepared. As another possibility, organic liquid scintillators containing an indium complex and a zirconium complex were studied for the measurement of low energy solar neutrinos and neutrinoless double beta decay, respectively. Benzonitrile was chosen as a solvent because of its good solubility for the quinolinolato complexes (2 wt%) and the good light yield of the scintillation induced by gamma-ray irradiation. The photo-luminescence emission spectra of InQ3 and ZrQ4 in benzonitrile were measured, and a liquid scintillator cocktail was made using InQ3 and ZrQ4 (50 mg) in benzonitrile solutions (20 mL) with the secondary scintillators PPO (100 mg) and POPOP (10 mg). The energy spectra of incident gammas were measured; these are the first gamma-ray energy spectra obtained using the luminescence of metal complexes.

  11. Development of X-ray Computed Tomography (CT) Imaging Method for the Measurement of Complex 3D Ice Shapes Project

    Data.gov (United States)

    National Aeronautics and Space Administration — When ice accretes on a wing or other aerodynamic surface, it can produce extremely complex shapes. These are comprised of well-known shapes such as horns and...

  12. Quantitative measurement of the reduction of platinum(IV) complexes using X-ray absorption near-edge spectroscopy (XANES).

    Science.gov (United States)

    Hall, Matthew D; Daly, Helen L; Zhang, Jenny Z; Zhang, Mei; Alderden, Rebecca A; Pursche, Daniel; Foran, Garry J; Hambley, Trevor W

    2012-06-01

    The platinum(II) drugs cisplatin, carboplatin and oxaliplatin are usefully employed against a range of malignancies, but toxicities and resistance have spurred the search for improved analogs. This has included investigation of the platinum(IV) oxidation state, which provides greater kinetic inertness. It is generally accepted that Pt(IV) complexes must be reduced to Pt(II) for activation. As such, the ability to monitor reduction of Pt(IV) complexes is critical to guiding the design of candidates, and providing mechanistic understanding. Here we report in full that the white line height of X-ray absorption near-edge spectra (XANES) of Pt complexes, normalized to the post-edge minima, can be used to quantitatively determine the proportion of each oxidation state in a mixture. A series of Pt(IV) complexes based on the Pt(II) complexes cisplatin and transplatin were prepared with chlorido, acetato or hydroxido axial ligands, and studies into their reduction potential and cytotoxicity against A2780 human ovarian cancer cells were performed, demonstrating the relationship between reduction potential and cytotoxicity. Analysis of white line height demonstrated a clear and consistent difference between Pt(II) (1.52 ± 0.05) and Pt(IV) (2.43 ± 0.19) complexes. Reduction of Pt(IV) complexes over time in cell growth media and A2780 cells was observed by XANES, and shown to correspond with their reduction potentials and cytotoxicities. We propose that this method is useful for monitoring reduction of metal-based drug candidates in complex biological systems.
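
    As a worked example of how white-line heights translate into oxidation-state proportions, the sketch below linearly interpolates a measured height between the reported Pt(II) and Pt(IV) means (1.52 and 2.43); the two-state linear mixing assumption and the sample value are ours, for illustration only.

```python
# Hedged sketch: Pt(IV) fraction from a normalized XANES white-line height,
# assuming a two-state linear mixture between the pure-state mean heights
# reported in the abstract.

H_PT2, H_PT4 = 1.52, 2.43   # mean white-line heights for Pt(II) and Pt(IV)

def pt4_fraction(h):
    """Fraction of Pt(IV) in a Pt(II)/Pt(IV) mixture from white-line height."""
    f = (h - H_PT2) / (H_PT4 - H_PT2)
    return min(max(f, 0.0), 1.0)   # clamp to the physical range

print(f"h = 2.00 -> Pt(IV) fraction = {pt4_fraction(2.00):.2f}")
```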

  13. Measured rates of fluoride/metal association correlate with rates of superoxide/metal reactions for Fe(III)EDTA(H2O)- and related complexes.

    Science.gov (United States)

    Summers, Jack S; Baker, Joseph B; Meyerstein, Dan; Mizrahi, Amir; Zilbermann, Israel; Cohen, Haim; Wilson, Christopher M; Jones, Jamie R

    2008-02-06

    The effects of 10 paramagnetic metal complexes (Fe(III)EDTA(H2O)-, Fe(III)EDTA(OH)2-, Fe(III)PDTA-, Fe(III)DTPA2-, Fe(III)2O(TTHA)2-, Fe(III)(CN)6(3-), Mn(II)EDTA(H2O)2-, Mn(II)PDTA2-, Mn(II)beta-EDDADP2-, and Mn(II)PO4(-)) on F- ion 19F NMR transverse relaxation rates (R2 = 1/T2) were studied in aqueous solutions as a function of temperature. Consistent with efficient relaxation requiring formation of a metal/F- bond, only the substitution-inert complexes Fe(III)(CN)6(3-) and Fe(III)EDTA(OH)2- had no measured effect on T2 relaxation of the F- 19F resonance. For the remaining eight complexes, kinetic parameters (apparent second-order rate constants and activation enthalpies) for metal/F- association were determined from the dependence of the observed relaxation enhancements on complex concentration and temperature. Apparent metal/F- association rate constants for these complexes (k(app,F-)) spanned 5 orders of magnitude. In addition, we measured the rates at which O2*- reacts with Fe(III)PDTA-, Mn(II)EDTA(H2O)2-, Mn(II)PDTA2-, and Mn(II)beta-EDDADP2- by pulse radiolysis. Although no intermediate is observed during the reduction of Fe(III)PDTA- by O2*-, each of the Mn(II) complexes reacts with formation of a transient intermediate presumed to form via ligand exchange. These reactivity patterns are consistent with literature precedents for similar complexes. With these data, both k(app,O2-) and k(app,F-) are available for each of the eight reactive complexes. A plot of log(k(app,O2-)) versus log(k(app,F-)) for these eight showed a linear correlation with a slope of approximately 1. This correlation suggests that the rapid metal/O2*- reactions of these complexes occur via an inner-sphere mechanism in which formation of an intermediate coordination complex limits the overall rate. This hypothesis is also supported by the very low rates at which the substitution-inert complexes (Fe(III)(CN)6(3-) and Fe(III)EDTA(OH)2-) are reduced by O2*-. These results suggest that F- 19F NMR
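
    The concentration-dependence step can be sketched as a straight-line fit: in the regime where chemical exchange controls the enhancement, the observed 19F R2 grows linearly with complex concentration and the slope is the apparent second-order rate constant. All numbers below are illustrative, not the paper's data.

```python
import numpy as np

# Hedged sketch: apparent metal/F- association rate constant from the
# linear dependence R2_obs = R2_dia + k_app * [M] (exchange-limited regime).

conc = np.array([0.0, 0.5e-3, 1.0e-3, 2.0e-3])   # complex concentration, M
r2 = np.array([1.2, 3.7, 6.3, 11.2])             # observed 19F R2, 1/s

k_app, r2_dia = np.polyfit(conc, r2, 1)
print(f"k_app = {k_app:.2e} M^-1 s^-1, diamagnetic R2 = {r2_dia:.2f} s^-1")
```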

  14. Measurement of adherence in a randomised controlled trial of a complex intervention: supported self-management for adults with learning disability and type 2 diabetes.

    Science.gov (United States)

    Graham, Liz; Wright, Judy; Walwyn, Rebecca; Russell, Amy M; Bryant, Louise; Farrin, Amanda; House, Allan

    2016-10-06

    Reporting adherence to intervention delivery and uptake is a detailed way of describing what was actually delivered and received, in comparison to what was intended. Measuring and reporting adherence is not routinely done well in complex interventions. The OK Diabetes trial (ISRCTN41897033) aimed to develop and subsequently test the feasibility of implementing a supported self-management intervention in adults with a learning disability and type 2 diabetes. A key study objective was to develop a measure of adherence to the intervention. We conducted a systematic review of published literature, extracting data from included papers using a standardised proforma. We undertook a narrative synthesis of papers to determine the form and content of methods for adherence measurement for self-management interventions in this population that had already been developed. We used the framework and data extraction form developed for the review as the basis for an adherence measurement tool that we applied in the OK Diabetes trial. The literature review found variability in the quality and content of adherence measurement and reporting, with no standardised approach. We were able to develop an adherence measure based upon the review, and populate it with data collected during the OK Diabetes trial. The adherence tool proved satisfactory for recording and measuring adherence in the trial. There remains a need for a standardised approach to adherence measurement in the field of complex interventions. We have shown that it is possible to produce a simple, feasible measure for assessing adherence in the OK Diabetes trial.

  15. A sensitive dynamic viscometer for measuring the complex shear modulus in a steady shear flow using the method of orthogonal superposition

    NARCIS (Netherlands)

    Zeegers, Jos; Ende, van den Dirk; Blom, Cor; Altena, Egbert G.; Beukema, Gerrit J.; Mellema, Jorrit

    1995-01-01

    A new instrument to carry out complex viscosity measurements in equilibrium and in a steady shear flow has been developed. A small amplitude harmonic excitation is superimposed orthogonally on the steady shear rate component. It is realized by a thin-walled cylinder, which oscillates in the axial direction.
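
    For context, the quantity such an instrument delivers can be written down in a few lines: given the stress and strain amplitudes and their phase lag, G* = (sigma0/gamma0)·e^(i·delta), whose real and imaginary parts are the storage and loss moduli. The values below are illustrative, not measurements from this instrument.

```python
import numpy as np

# Hedged sketch: complex shear modulus from oscillatory stress/strain.
# gamma(t) = gamma0*sin(w*t), sigma(t) = sigma0*sin(w*t + delta)
# => G* = (sigma0/gamma0) * exp(1j*delta) = G' + i*G''.

gamma0 = 0.01            # strain amplitude (illustrative)
sigma0 = 2.4             # stress amplitude, Pa (illustrative)
delta = np.deg2rad(35)   # phase lag between stress and strain

G_star = (sigma0 / gamma0) * np.exp(1j * delta)
print(f"G' = {G_star.real:.1f} Pa, G'' = {G_star.imag:.1f} Pa")
```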

  16. Observations and Measurements of Wing Parameters of the Selected Beetle Species and the Design of a Mechanism Structure Implementing a Complex Wing Movement

    Directory of Open Access Journals (Sweden)

    Geisler T.

    2016-12-01

    Beetle wings perform a flapping movement consisting of rotation relative to two axes. This paper presents the results of observations and measurements of wing operating parameters in different planes for selected beetle species. High-speed photographs and videos were used. A concept for a mechanism structure implementing the complex wing movement was proposed and developed.

  17. The first experimental confirmation of the fractional kinetics containing the complex-power-law exponents: Dielectric measurements of polymerization reactions

    Science.gov (United States)

    Nigmatullin, R. R.; Arbuzov, A. A.; Salehli, F.; Giz, A.; Bayrak, I.; Catalgil-Giz, H.

    2007-01-01

    For the first time we have obtained incontestable evidence that the real process of dielectric relaxation during the polymerization reaction of polyvinylpyrrolidone (PVP) is described in terms of fractional kinetic equations containing complex-power-law exponents. The possibility of the existence of fractional kinetics containing non-integer complex-power-law exponents follows from the general theory of dielectric relaxation suggested recently by one of the authors (R.R.N.). Based on the physical/geometrical meaning of the fractional integral with complex exponents, it is possible to develop a general theory of dielectric relaxation based on the self-similar (fractal) character of the reduced (averaged) microprocesses that take place in the mesoscale region. This theory contains some essential predictions related to the existence of non-integer power-law kinetics, and the results of this paper can be considered the first confirmation of the existence of kinetic phenomena described by fractional derivatives with complex-power-law exponents. We stress that, with the help of a new complex fitting function for the complex permittivity, it becomes possible to describe the whole process for the real and imaginary parts simultaneously throughout the admissible frequency range (30 Hz-13 MHz). The fitting parameters obtained for the complex permittivity function at three temperatures (70, 90 and 110 °C) confirm in general the picture of the reaction that was known qualitatively before. They also reveal some new features, which improve the interpretation of the whole polymerization process. We hope that these first results will serve as a good stimulus for other researchers to look for traces of new fractional kinetics in other relaxation processes unrelated to dielectric relaxation. These results should lead to the reconsideration and generalization of irreversibility and kinetic phenomena that
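
    The mathematical signature at issue is easy to demonstrate numerically: a power law with a complex exponent, x^(a+ib) = x^a·(cos(b·ln x) + i·sin(b·ln x)), decorates the ordinary power law with log-periodic oscillations. The sketch below verifies this over roughly the paper's frequency window; the exponent values are arbitrary, not fitted parameters from the paper.

```python
import numpy as np

# Hedged sketch: a complex-power-law exponent a + ib produces log-periodic
# modulation of the plain power law omega**a.

omega = np.logspace(1.5, 7.1, 200)   # ~30 Hz to ~13 MHz
a, b = -0.6, 0.15                    # hypothetical complex exponent a + ib
response = omega ** complex(a, b)    # omega**a * exp(1j*b*ln(omega))

# the real part oscillates log-periodically around the plain power law
ratio = response.real / omega ** a   # should equal cos(b * ln(omega))
print(np.allclose(ratio, np.cos(b * np.log(omega))))   # True
```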

  18. The measurement and model construction of complex permittivity of corn leaves at the main frequency points of L/S/C/X-band

    Science.gov (United States)

    Zeng, J. Y.; Li, Z.; Tang, Z. H.; Chen, Q.; Bi, H. Y.; Zhao, L. B.

    2014-03-01

    The complex permittivity of a target has a crucial influence on its microwave radiation characteristics. In quantitative microwave remote sensing research, studying the dielectric properties of vegetation to establish the relationship between its specific physical parameters and its complex permittivity is basic work in this field. In this study, corn leaf samples of different types and heights were collected at the city of Zhangye, which is the key study area of the Heihe watershed allied telemetry experimental research and also the largest breeding base of hybrid corn seeds in China. The vector network analyzer E8362B was then used to measure the complex permittivity of these samples from 0.2 to 20 GHz with the coaxial probe technique. Based on these measurements, an empirical model for corn leaves was established that describes the relationship between gravimetric moisture and both the real and imaginary parts of the complex permittivity at the main frequency points of the L/S/C/X-bands. Finally, the empirical model and the classical Debye-Cole model were compared and validated against measured data collected in Huailai county, Hebei province. The results show that the empirical model has higher accuracy and is more practical than the traditional Debye-Cole model.
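
    For comparison, the classical Debye relaxation form underlying the Debye-Cole model mentioned above can be sketched in a few lines; the parameter values and the specific L/S/C/X-band frequency points used here are assumptions for illustration, not the paper's fits.

```python
import numpy as np

# Hedged sketch of the classical Debye relaxation model:
#   eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + 1j*omega*tau),
# with eps = eps' - i*eps''. Parameter values are illustrative.

def debye(freq_hz, eps_s, eps_inf, tau):
    omega = 2 * np.pi * freq_hz
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

# assumed L/S/C/X-band frequency points (GHz), common in remote sensing
freqs = np.array([1.4, 2.65, 5.3, 10.65]) * 1e9
eps = debye(freqs, eps_s=35.0, eps_inf=4.9, tau=9e-12)
for f, e in zip(freqs, eps):
    print(f"{f/1e9:5.2f} GHz: eps' = {e.real:6.2f}, eps'' = {-e.imag:6.2f}")
```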

  19. A miniature condensed-phase membrane introduction mass spectrometry (CP-MIMS) probe for direct and on-line measurements of pharmaceuticals and contaminants in small, complex samples.

    Science.gov (United States)

    Duncan, Kyle D; Willis, Megan D; Krogh, Erik T; Gill, Christopher G

    2013-06-15

    High-throughput, automated analytical measurements are desirable in many analytical scenarios, as are rapid sample pre-screening techniques to identify 'positive' samples for subsequent measurements using more time-consuming conventional methodologies (e.g., liquid chromatography/mass spectrometry (LC/MS)). A miniature condensed-phase membrane introduction mass spectrometry (CP-MIMS) probe for the direct, continuous, on-line measurement of pharmaceuticals and environmental contaminants in small, complex samples is presented. A miniature polydimethylsiloxane hollow fibre membrane (PDMS-HFM) probe is coupled with an electrospray ionization (ESI) triple quadrupole mass spectrometer. Analytes are transported from the probe to the ESI source by a methanol acceptor phase. The probe can be autosampler-mounted and directly inserted into small samples (≥400 μL), allowing continuous and simultaneous pptr-ppb level detection of target analytes (chlorophenols, triclosan, gemfibrozil, nonylphenol) in complex samples (artificial urine, beer, natural water, waste water, plant tissue). The probe has been characterized and optimized for acceptor phase flow rate, sample mixing, and probe washing. Signal response times, detection limits, and calibration data are given for selected ion monitoring (SIM) and tandem mass spectrometry (MS/MS) measurements of target analytes at trace levels. Comparisons with flow-cell type CP-MIMS systems are given. Analyte depletion effects are evaluated for small samples (≥400 μL). On-line measurements in small volumes of complex samples, temporally resolved reaction monitoring, and in situ/in vivo demonstrations are presented. The miniature CP-MIMS probe developed was successfully used for the direct, on-line detection of target analytes in small volumes (40 mL to 400 μL) of complex samples at pptr to low ppb levels. The probe can be readily automated as well as deployed for in situ/in vivo monitoring, including reaction monitoring, small sample

  20. Resting and Task-Modulated High-Frequency Brain Rhythms Measured by Scalp Encephalography in Infants with Tuberous Sclerosis Complex

    Science.gov (United States)

    Stamoulis, Catherine; Vogel-Farley, Vanessa; Degregorio, Geneva; Jeste, Shafali S.; Nelson, Charles A.

    2015-01-01

    The electrophysiological correlates of cognitive deficits in tuberous sclerosis complex (TSC) are not well understood, and modulations of neural dynamics by neuroanatomical abnormalities that characterize the disorder remain elusive. Neural oscillations (rhythms) are a fundamental aspect of brain function, and have dominant frequencies in a wide…