WorldWideScience

Sample records for normalised eigenvectors vnorm

  1. Matrix with Prescribed Eigenvectors

    Science.gov (United States)

    Ahmad, Faiz

    2011-01-01

    It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…

  2. Normalising convenience food?

    DEFF Research Database (Denmark)

    Halkier, Bente

    2017-01-01

    The construction of convenience food as a social and cultural category for food provisioning, cooking and eating seems to slide between or across understandings of what is considered “proper food” in the existing discourses in everyday life and media. This article sheds light upon some of the social and cultural normativities around convenience food by describing the ways in which convenience food forms part of the daily life of young Danes. Theoretically, the article is based on a practice-theoretical perspective. Empirically, the article builds upon a qualitative research project on food habits among Danes aged 20–25. The article presents two types of empirical patterns. The first are the degree to which, and the different ways in which, convenience food is normalised among the young Danes. The second are the normative places of convenience food…

  3. Covariance expressions for eigenvalue and eigenvector problems

    Science.gov (United States)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
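The eigenvalue sensitivity the abstract refers to can be sketched with the standard first-order perturbation formula, dλ = uᵀ(dA)v / (uᵀv) for left/right eigenvectors u, v of a simple eigenvalue, checked against forward finite differencing as in the thesis. This is only a minimal illustration on a random matrix, not the thesis's actual derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # generic matrix with simple eigenvalues

# Right eigenpair of A; the matching left eigenvector is a right
# eigenvector of A.T with the same eigenvalue.
w, V = np.linalg.eig(A)
k = int(np.argmax(w.real))
lam, v = w[k], V[:, k]
wl, U = np.linalg.eig(A.T)
u = U[:, int(np.argmin(np.abs(wl - lam)))]

# Analytic Jacobian of the eigenvalue: d(lam)/dA[i, j] = u[i] * v[j] / (u^T v)
J = np.outer(u, v) / (u @ v)

# Forward finite-difference check on a single matrix entry
eps = 1e-7
dA = np.zeros_like(A)
dA[1, 2] = eps
wp = np.linalg.eig(A + dA)[0]
lam_p = wp[int(np.argmin(np.abs(wp - lam)))]
fd = (lam_p - lam) / eps
print(abs(fd - J[1, 2]))                 # small discrepancy if formula is right
```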

  4. Supervised Object Class Colour Normalisation

    DEFF Research Database (Denmark)

    Riabchenko, Ekatarina; Lankinen, Jukka; Buch, Anders Glent

    2013-01-01

    Colour is an important cue in many applications of computer vision and image processing, but robust usage often requires estimation of the unknown illuminant colour. Usually, to obtain images invariant to the illumination conditions under which they were taken, colour normalisation is used. In this work, we develop such a colour normalisation technique, where true colours are not important per se but where examples of the same classes have photometrically consistent appearance. This is achieved by supervised estimation of a class-specific canonical colour space in which the examples have minimal variation in their colours. We demonstrate the effectiveness of our method with qualitative and quantitative examples from the Caltech-101 data set and a real application of 3D pose estimation for robot grasping.

  5. Nuclear power 1984: Progressive normalisation

    International Nuclear Information System (INIS)

    Popp, M.

    1984-01-01

    The peaceful use of nuclear power is being integrated into the overall concept of a safe long-term power supply in West Germany. The progress of normalisation is shown particularly in the takeover of all stages of the nuclear fuel cycle by industry, with the exception of the final storage of radioactive waste, which is the responsibility of the West German Government. Normalisation also means the withdrawal of the state from financing projects after completion of the two prototypes SNR-300 and THTR-300 and the German uranium enrichment plant. The state will, however, support future research and development projects in the nuclear field. The expansion of nuclear power capacity is at present being slowed down by the state of the economy, i.e. only nuclear power projects already under construction are proceeding. (orig./HP)

  6. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

    NARCIS (Netherlands)

    Ketema, J.; Simonsen, Jakob Grue

    2010-01-01

    We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in…

  7. Motivating the Concept of Eigenvectors via Cryptography

    Science.gov (United States)

    Siap, Irfan

    2008-01-01

    New methods of teaching linear algebra in the undergraduate curriculum have attracted much interest lately. Most of this work is focused on evaluating and discussing the integration of special computer software into the Linear Algebra curriculum. In this article, I discuss my approach on introducing the concept of eigenvectors and eigenvalues,…

  8. Eigenvectors phase correction in inverse modal problem

    Science.gov (United States)

    Qiao, Guandong; Rahmatalla, Salam

    2017-12-01

    The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems is heavily dependent on the quality of the modal parameters obtained from the experiments. Since experimental and environmental noises will always exist during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive definiteness or positive semi-definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing noisy eigenvectors with added Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.

  9. Eigenvector space model to capture features of documents

    Directory of Open Access Journals (Sweden)

    Choi DONGJIN

    2011-09-01

    Eigenvectors are a special set of vectors associated with a linear system of equations. Because of their special properties, eigenvectors have been used widely in computer vision. When eigenvectors are applied to the information retrieval field, it is possible to obtain properties of a document corpus. To capture the properties of given documents, this paper conducts simple experiments to show that eigenvectors can also be used in document analysis. For the experiments, we use the short abstracts of Wikipedia provided by DBpedia as a document corpus. To build the original square matrix, the most popular weighting method, tf-idf, is used. After calculating the eigenvectors of the original matrix, each vector is plotted in a 3D graph to examine what the eigenvectors mean in document processing.
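A rough sketch of the pipeline this abstract describes: tf-idf weighting followed by eigen-analysis. The toy corpus is made up, and the choice of a document-document Gram matrix XᵀX as the "square matrix" is an assumption (the paper does not specify its construction):

```python
import numpy as np

# Hypothetical four-document corpus standing in for the DBpedia abstracts.
docs = ["graph eigenvector centrality",
        "eigenvector of a matrix",
        "matrix eigenvalue eigenvector",
        "food culture and habits"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-frequency matrix tf[term, doc], then standard tf-idf weighting.
tf = np.array([[d.split().count(t) for d in docs] for t in vocab], float)
df = (tf > 0).sum(axis=1)                  # document frequency per term
idf = np.log(len(docs) / df)
X = tf * idf[:, None]

# Square document-document matrix and its eigenvectors (assumption: Gram matrix).
G = X.T @ X
evals, evecs = np.linalg.eigh(G)

# The leading eigenvector concentrates on the document with the most
# distinctive (highest-idf) vocabulary -- here the unrelated "food" document.
print(evecs[:, -1])
```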

  10. Distinct types of eigenvector localization in networks

    Science.gov (United States)

    Pastor-Satorras, Romualdo; Castellano, Claudio

    2016-01-01

    The spectral properties of the adjacency matrix provide a trove of information about the structure and function of complex networks. In particular, the largest eigenvalue and its associated principal eigenvector are crucial in the understanding of nodes’ centrality and the unfolding of dynamical processes. Here we show that two distinct types of localization of the principal eigenvector may occur in heterogeneous networks. For synthetic networks with degree distribution P(q) ~ q^(−γ), localization occurs on the largest hub if γ > 5/2; for γ < 5/2 a new type of localization arises on a mesoscopic subgraph associated with the shell with the largest index in the K-core decomposition. Similar evidence for the existence of distinct localization modes is found in the analysis of real-world networks. Our results open a new perspective on dynamical processes on networks and on a recently proposed alternative measure of node centrality based on the non-backtracking matrix.
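A standard way to quantify the eigenvector localization discussed here is the inverse participation ratio (IPR) of the principal eigenvector: it stays near 1/N for delocalized states and is O(1) when the weight concentrates on a few nodes. This sketch contrasts a hub-localized star graph with a delocalized ring (illustrative graphs, not the paper's synthetic networks):

```python
import numpy as np

n = 50

# Star graph: one hub connected to n leaves -- an extreme hub-localized case.
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
f = np.abs(np.linalg.eigh(A)[1][:, -1])     # principal eigenvector (unit norm)
ipr_star = np.sum(f ** 4)                   # IPR = sum f_i^4 for a unit vector

# Ring graph: the principal eigenvector is uniform, hence delocalized.
R = np.roll(np.eye(n + 1), 1, axis=1)
R = R + R.T
g = np.abs(np.linalg.eigh(R)[1][:, -1])
ipr_ring = np.sum(g ** 4)

print(ipr_star, ipr_ring)                   # O(1) vs roughly 1/(n+1)
```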

  11. Eigenvector Weighting Function in Face Recognition

    Directory of Open Access Journals (Sweden)

    Pang Ying Han

    2011-01-01

    Graph-based subspace learning is a class of dimensionality reduction techniques in face recognition. The technique reveals the local manifold structure of face data that is hidden in the image space via a linear projection. However, real-world face data may be too complex to measure due to both external imaging noises and the intra-class variations of the face images. Hence, features which are extracted by the graph-based technique could be noisy. An appropriate weight should be imposed on the data features for better data discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace which is attributed to imaging noises. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two subspaces. Experiments on the FERET and FRGC databases are conducted to show the promising performance of the proposed technique.

  12. Localized eigenvectors of the non-backtracking matrix

    International Nuclear Information System (INIS)

    Kawamoto, Tatsuro

    2016-01-01

    In the case of graph partitioning, the emergence of localized eigenvectors can cause the standard spectral method to fail. To overcome this problem, the spectral method using a non-backtracking matrix was proposed. Based on numerical experiments on several examples of real networks, it is clear that the non-backtracking matrix does not exhibit localization of eigenvectors. However, we show that localized eigenvectors of the non-backtracking matrix can exist outside the spectral band, which may lead to deterioration in the performance of graph partitioning. (paper: interdisciplinary statistical mechanics)

  13. Attitudes to Normalisation and Inclusive Education

    Science.gov (United States)

    Sanagi, Tomomi

    2016-01-01

    The purpose of this paper was to clarify the features of teachers' image on normalisation and inclusive education. The participants of the study were both mainstream teachers and special teachers. One hundred and thirty-eight questionnaires were analysed. (1) Teachers completed the questionnaire of SD (semantic differential) images on…

  14. Use of eigenvectors in understanding and correcting storage ring orbits

    International Nuclear Information System (INIS)

    Friedman, A.; Bozoki, E.

    1994-01-01

    The response matrix A is defined by the equation X=AΘ, where Θ is the kick vector and X is the resulting orbit vector. Since A is not necessarily a symmetric or even a square matrix, we symmetrize it by using AᵀA. Then we find the eigenvalues and eigenvectors of this AᵀA matrix. The physical interpretation of the eigenvectors for circular machines is discussed. The task of the orbit correction is to find the kick vector Θ for a given measured orbit vector X. We are presenting a method in which the kick vector is expressed as a linear combination of the eigenvectors. An additional advantage of this method is that it yields the smallest possible kick vector to correct the orbit. We will illustrate the application of the method to the NSLS X-ray and UV storage rings and the resulting measurements. It will be evident that the accuracy of this method allows the combination of the global orbit correction and local optimization of the orbit for beam lines and insertion devices. The eigenvector decomposition can also be used for optimizing kick vectors, taking advantage of the fact that eigenvectors with small corresponding eigenvalues generate negligible orbit changes. Thus, one can reduce a kick vector calculated by any other correction method and still stay within the tolerance for orbit correction. The use of eigenvectors in accurately measuring the response matrix and the use of the eigenvalue decomposition orbit correction algorithm in digital feedback is discussed. (orig.)
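The correction scheme described here, expanding the kick vector in the eigenvectors of AᵀA and dropping directions with small eigenvalues, can be sketched numerically. The response matrix below is random and the dimensions are illustrative, not the NSLS machine data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bpm, n_kick = 12, 8
A = rng.standard_normal((n_bpm, n_kick))    # mock response matrix

# Measured orbit produced by some "true" kicks, plus measurement noise.
theta_true = rng.standard_normal(n_kick)
X = A @ theta_true + 1e-3 * rng.standard_normal(n_bpm)

# Symmetrize and eigendecompose, as in the abstract: eigenpairs of A^T A.
evals, V = np.linalg.eigh(A.T @ A)

# Express the corrective kick vector as a combination of eigenvectors,
# discarding directions with negligible eigenvalues (they barely move the
# orbit); this yields the minimum-norm least-squares kick vector.
keep = evals > 1e-8 * evals.max()
coeffs = (V.T @ (A.T @ X))[keep] / evals[keep]
theta = V[:, keep] @ coeffs

print(np.linalg.norm(A @ theta - X))        # residual orbit error
```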

  15. Image denoising via adaptive eigenvectors of graph Laplacian

    Science.gov (United States)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the used eigenvectors in the traditional EGL method, in our method, the eigenvectors are adaptively selected in the whole denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected by using the deviation estimation of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the average coefficient is adaptively obtained to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen in the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group sparse model. The experiments show that our method not only improves the practicality of the EGL methods with the dependence reduction of the parameter setting, but also can outperform some well-developed denoising methods, especially for noise with large deviations.
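The core operation underlying EGL-style denoising, projecting a noisy signal onto selected low-frequency eigenvectors of a graph Laplacian, can be sketched on a 1-D path graph. This is a deliberate simplification: the paper's adaptive eigenvector selection and group-sparse model are not reproduced, and the signal is synthetic:

```python
import numpy as np

# Path-graph Laplacian for a length-n signal (a stand-in for image patch graphs).
n = 64
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                  # Neumann-style boundary rows

evals, U = np.linalg.eigh(L)               # eigenvectors ordered low to high frequency

rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)

# Keep the k lowest-frequency eigenvectors; in the paper, k would be chosen
# adaptively from a noise-deviation estimate (fixed here for simplicity).
k = 8
denoised = U[:, :k] @ (U[:, :k].T @ noisy)

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(err_before, err_after)
```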

  16. EISPACK, Subroutines for Eigenvalues, Eigenvectors, Matrix Operations

    International Nuclear Information System (INIS)

    Garbow, Burton S.; Cline, A.K.; Meyering, J.

    1993-01-01

    1 - Description of problem or function: EISPACK3 is a collection of 75 FORTRAN subroutines, both single- and double-precision, that compute the eigenvalues and eigenvectors of nine classes of matrices. The package can determine the eigensystem of complex general, complex Hermitian, real general, real symmetric, real symmetric band, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices. In addition, there are two routines which use the singular value decomposition to solve certain least squares problems. The individual subroutines are - Identification/Description: BAKVEC: Back transform vectors of matrix formed by FIGI; BALANC: Balance a real general matrix; BALBAK: Back transform vectors of matrix formed by BALANC; BANDR: Reduce sym. band matrix to sym. tridiag. matrix; BANDV: Find some vectors of sym. band matrix; BISECT: Find some values of sym. tridiag. matrix; BQR: Find some values of sym. band matrix; CBABK2: Back transform vectors of matrix formed by CBAL; CBAL: Balance a complex general matrix; CDIV: Perform division of two complex quantities; CG: Driver subroutine for a complex general matrix; CH: Driver subroutine for a complex Hermitian matrix; CINVIT: Find some vectors of complex Hess. matrix; COMBAK: Back transform vectors of matrix formed by COMHES; COMHES: Reduce complex matrix to complex Hess. (elementary); COMLR: Find all values of complex Hess. matrix (LR); COMLR2: Find all values/vectors of cmplx Hess. matrix (LR); COMQR: Find all values of complex Hessenberg matrix (QR); COMQR2: Find all values/vectors of cmplx Hess. matrix (QR); CORTB: Back transform vectors of matrix formed by CORTH; CORTH: Reduce complex matrix to complex Hess. (unitary); CSROOT: Find square root of complex quantity; ELMBAK: Back transform vectors of matrix formed by ELMHES; ELMHES: Reduce real matrix to real Hess. (elementary); ELTRAN: Accumulate transformations from ELMHES (for HQR2); EPSLON: Estimate unit roundoff

  17. A teaching proposal for the study of Eigenvectors and Eigenvalues

    Directory of Open Access Journals (Sweden)

    María José Beltrán Meneu

    2017-03-01

    In this work, we present a teaching proposal which emphasizes visualization and physical applications in the study of eigenvectors and eigenvalues. These concepts are introduced using the notion of the moment of inertia of a rigid body and the GeoGebra software. The proposal was motivated by observing students’ difficulties when treating eigenvectors and eigenvalues from a geometric point of view. It was designed following a particular sequence of activities with the schema: exploration, introduction of concepts, structuring of knowledge and application, considering the three worlds of mathematical thinking proposed by Tall: embodied, symbolic and formal.
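The physical picture used in the proposal, the moment of inertia of a rigid body, makes a concrete eigenvector exercise: diagonalizing the inertia tensor recovers the principal moments (eigenvalues) and principal axes (eigenvectors). The numbers below are made up for illustration:

```python
import numpy as np

# Inertia tensor of a rigid body with principal moments 1, 4, 9 (illustrative),
# expressed in a rotated, "inconvenient" laboratory frame.
I_body = np.diag([1.0, 4.0, 9.0])
th = 0.7
c, s = np.cos(th), np.sin(th)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
I_lab = R @ I_body @ R.T                   # same body, rotated frame

# Eigen-analysis recovers the principal moments and the principal axes.
moments, axes = np.linalg.eigh(I_lab)
print(moments)                             # sorted principal moments
```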

  18. Eigenvectors of Open Bazhanov-Stroganov Quantum Chain

    Directory of Open Access Journals (Sweden)

    Nikolai Iorgov

    2006-02-01

    In this contribution we give an explicit formula for the eigenvectors of Hamiltonians of the open Bazhanov-Stroganov quantum chain. The Hamiltonian of this quantum chain is defined by the generating polynomial $A_n(\lambda)$, which is the upper-left matrix element of the monodromy matrix built from the Bazhanov-Stroganov $L$-operators. The formulas for the eigenvectors are derived using the iterative procedure of Kharchev and Lebedev and are given in terms of the $w_p(s)$-function, which is a root-of-unity analogue of the $\Gamma_q$-function.

  19. A subspace preconditioning algorithm for eigenvector/eigenvalue computation

    Energy Technology Data Exchange (ETDEWEB)

    Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.

    1996-12-31

    We consider the problem of computing a modest number of the smallest eigenvalues, along with orthogonal bases for the corresponding eigenspaces, of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates will be provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner, under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.

  20. Queer Literature in Spain: Pathways to Normalisation

    Directory of Open Access Journals (Sweden)

    Martínez-Expósito, Alfredo

    2013-06-01

    More than any other, the idea of normalisation has provoked deep divisions within queer activism both at a philosophical and also at a political level. At the root of these divisions lies the irreconcilable divergence between an agenda for social change, which advocates the need for society to accept all sexual behaviours and identities as normal, and an approach of radical resistance against some social structures that can only offer a bourgeois and conformist normalisation. Literary fiction and homo-gay-queer themed cinema have explored these and other sides of the idea of normalisation and have thus contributed to the taking of decisive steps: from the poetics of transgression towards the poetics of celebration and social transformation. In this paper we examine two of these literary normalisation strategies: the use of humour and the proliferation of discursive perspectives both in the cinema and in narrative fiction during the last decades.

  1. Relativistic Magnetohydrodynamics: Renormalized Eigenvectors and Full Wave Decomposition Riemann Solver

    International Nuclear Information System (INIS)

    Antón, Luis; Martí, José M.; Ibáñez, José M.; Aloy, Miguel A.; Mimica, Petar; Miralles, Juan A.

    2010-01-01

    We obtain renormalized sets of right and left eigenvectors of the flux vector Jacobians of the relativistic MHD equations, which are regular and span a complete basis in any physical state including degenerate ones. The renormalization procedure relies on the characterization of the degeneracy types in terms of the normal and tangential components of the magnetic field to the wave front in the fluid rest frame. Proper expressions of the renormalized eigenvectors in conserved variables are obtained through the corresponding matrix transformations. Our work completes previous analysis that present different sets of right eigenvectors for non-degenerate and degenerate states, and can be seen as a relativistic generalization of earlier work performed in classical MHD. Based on the full wave decomposition (FWD) provided by the renormalized set of eigenvectors in conserved variables, we have also developed a linearized (Roe-type) Riemann solver. Extensive testing against one- and two-dimensional standard numerical problems allows us to conclude that our solver is very robust. When compared with a family of simpler solvers that avoid the knowledge of the full characteristic structure of the equations in the computation of the numerical fluxes, our solver turns out to be less diffusive than HLL and HLLC, and comparable in accuracy to the HLLD solver. The amount of operations needed by the FWD solver makes it less efficient computationally than those of the HLL family in one-dimensional problems. However, its relative efficiency increases in multidimensional simulations.

  2. The best of both worlds: Phylogenetic eigenvector regression and mapping

    Directory of Open Access Journals (Sweden)

    José Alexandre Felizola Diniz Filho

    2015-09-01

    Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven and empirical eigenfunction analyses with the sound insights provided by evolutionary models well known in comparative analyses.
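The PVR recipe in this abstract, Principal Coordinate Analysis of a phylogenetic distance matrix followed by regression on the leading eigenvectors, can be sketched end to end. The distance matrix and trait values below are hypothetical, and PEM's Ornstein-Uhlenbeck warping is omitted:

```python
import numpy as np

# Hypothetical pairwise phylogenetic distances for 5 species
# (two clades: {0, 1, 2} and {3, 4}).
D = np.array([
    [0, 2, 4, 6, 6],
    [2, 0, 4, 6, 6],
    [4, 4, 0, 6, 6],
    [6, 6, 6, 0, 2],
    [6, 6, 6, 2, 0]], float)

# Principal Coordinate Analysis: double-center -D^2/2, then eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]
coords = evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))

# PVR step: leading eigenvectors as explanatory variables for a mock trait.
trait = np.array([1.0, 1.1, 1.4, 3.0, 3.2])
Xreg = np.column_stack([np.ones(n), coords[:, :2]])
beta, *_ = np.linalg.lstsq(Xreg, trait, rcond=None)
pred = Xreg @ beta
r2 = 1 - np.sum((trait - pred) ** 2) / np.sum((trait - trait.mean()) ** 2)
print(r2)                                  # high: trait tracks the clade split
```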

  3. Computing the eigenvalues and eigenvectors of a fuzzy matrix

    Directory of Open Access Journals (Sweden)

    A. Kumar

    2012-08-01

    Computation of fuzzy eigenvalues and fuzzy eigenvectors of a fuzzy matrix is a challenging problem. Determining the maximal and minimal symmetric solution can help to find the eigenvalues. So, we try to compute these eigenvalues by determining the maximal and minimal symmetric solutions of the fully fuzzy linear system $\widetilde{A}\widetilde{X} = \widetilde{\lambda}\widetilde{X}$.

  4. Eigenvector-Based Centrality Measures for Temporal Networks

    Science.gov (United States)

    Taylor, Dane; Myers, Sean A.; Clauset, Aaron; Porter, Mason A.; Mucha, Peter J.

    2017-01-01

    Numerous centrality measures have been developed to quantify the importances of nodes in time-independent networks, and many of them can be expressed as the leading eigenvector of some matrix. With the increasing availability of network data that changes in time, it is important to extend such eigenvector-based centrality measures to time-dependent networks. In this paper, we introduce a principled generalization of network centrality measures that is valid for any eigenvector-based centrality. We consider a temporal network with N nodes as a sequence of T layers that describe the network during different time windows, and we couple centrality matrices for the layers into a supra-centrality matrix of size NT × NT whose dominant eigenvector gives the centrality of each node i at each time t. We refer to this eigenvector and its components as a joint centrality, as it reflects the importances of both the node i and the time layer t. We also introduce the concepts of marginal and conditional centralities, which facilitate the study of centrality trajectories over time. We find that the strength of coupling between layers is important for determining multiscale properties of centrality, such as localization phenomena and the time scale of centrality changes. In the strong-coupling regime, we derive expressions for time-averaged centralities, which are given by the zeroth-order terms of a singular perturbation expansion. We also study first-order terms to obtain first-order-mover scores, which concisely describe the magnitude of nodes’ centrality changes over time. As examples, we apply our method to three empirical temporal networks: the United States Ph.D. exchange in mathematics, costarring relationships among top-billed actors during the Golden Age of Hollywood, and citations of decisions from the United States Supreme Court. PMID:29046619
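The supra-centrality construction can be sketched for a toy temporal network. The layers below are hypothetical, plain adjacency matrices stand in for the layer centrality matrices, and identity blocks couple consecutive time layers, following the block structure the abstract describes:

```python
import numpy as np

# Toy temporal network: N = 3 nodes, T = 2 time layers (hypothetical data).
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # path graph
A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle
layers = [A1, A2]
N, T = 3, len(layers)
omega = 1.0                                # inter-layer coupling strength

# Supra-centrality matrix: layer matrices on the diagonal blocks,
# omega * I coupling between consecutive layers off the diagonal.
M = np.zeros((N * T, N * T))
for t, A in enumerate(layers):
    M[t*N:(t+1)*N, t*N:(t+1)*N] = A
for t in range(T - 1):
    M[t*N:(t+1)*N, (t+1)*N:(t+2)*N] = omega * np.eye(N)
    M[(t+1)*N:(t+2)*N, t*N:(t+1)*N] = omega * np.eye(N)

# Dominant eigenvector -> joint centrality of (time layer t, node i);
# summing over layers gives a marginal node centrality.
evals, evecs = np.linalg.eigh(M)
joint = np.abs(evecs[:, -1]).reshape(T, N)
marginal_node = joint.sum(axis=0)
print(marginal_node)                       # node 1 is most central in both layers
```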

  5. Chirality correlation within Dirac eigenvectors from domain wall fermions

    International Nuclear Information System (INIS)

    Blum, T.; Christ, N.; Cristian, C.; Liao, X.; Liu, G.; Mawhinney, R.; Wu, L.; Zhestkov, Y.; Dawson, C.

    2002-01-01

    In the dilute instanton gas model of the QCD vacuum, one expects a strong spatial correlation between chirality and the maxima of the Dirac eigenvectors with small eigenvalues. Following Horvath et al. we examine this question using lattice gauge theory within the quenched approximation. We extend the work of those authors by using weaker coupling, β=6.0, larger lattices, 16⁴, and an improved fermion formulation, domain wall fermions. In contrast with this earlier work, we find a striking correlation between the magnitudes of the chirality density, |ψ†(x)γ₅ψ(x)|, and the normal density, ψ†(x)ψ(x), for the low-lying Dirac eigenvectors.

  6. Normalisation: ROI optimal treatment planning - SNDH pattern

    International Nuclear Information System (INIS)

    Shilvat, D.V.; Bhandari, Virendra; Tamane, Chandrashekhar; Pangam, Suresh

    2001-01-01

    The aim of ideal treatment planning is to deliver dose with maximal precision to the target/ROI (region of interest) while taking care of the tolerance doses of normal tissue. This goal is achieved with advanced modalities such as micro-MLC, a simulator and a 3-dimensional treatment planning system. But the SNDH pattern uses the minimum available resources, namely an ALCYON II telecobalt unit, CT scan and the MULTIDATA 2-dimensional treatment planning system, to their maximum utility and reaches the required precision, the same as that achieved with the advanced modalities. Among the number of parameters used, 'normalisation to the ROI' achieves the aim of the treatment planning effectively. This is illustrated with an example of modified treatment planning for the esophagus based on the SNDH pattern. Results are attractive and self-explanatory. By implementing the SNDH pattern, the quality index of the treatment plan reaches greater than 90%, with a substantial reduction in dose to the vital organs. The aim is to utilize the minimum available resources efficiently to achieve the highest possible precision in delivering a homogeneous dose to the ROI while taking care of the tolerance doses to vital organs.

  7. Technology, normalisation and male sex work.

    Science.gov (United States)

    MacPhail, Catherine; Scott, John; Minichiello, Victor

    2015-01-01

    Technological change, particularly the growth of the Internet and smart phones, has increased the visibility of male escorts, expanded their client base and diversified the range of venues in which male sex work can take place. Specifically, the Internet has relocated some forms of male sex work away from the street and thereby increased market reach, visibility and access and the scope of sex work advertising. Using the online profiles of 257 male sex workers drawn from six of the largest websites advertising male sexual services in Australia, the role of the Internet in facilitating the normalisation of male sex work is discussed. Specifically, we examine how engagement with the sex industry has been reconstituted in terms of better informed consumer-seller decisions for both clients and sex workers. Rather than being seen as a 'deviant' activity, understood in terms of pathology or criminal activity, male sex work is increasingly presented as an everyday commodity in the market place. In this context, the management of risks associated with sex work has shifted from formalised social control to more informal practices conducted among online communities of clients and sex workers. We discuss the implications for health, legal and welfare responses within an empowerment paradigm.

  8. Eigenvector centrality for geometric and topological characterization of porous media

    Science.gov (United States)

    Jimenez-Martinez, Joaquin; Negre, Christian F. A.

    2017-07-01

    Solving flow and transport through complex geometries such as porous media is computationally difficult. Such calculations usually involve the solution of a system of discretized differential equations, which could lead to extreme computational cost depending on the size of the domain and the accuracy of the model. Geometric simplifications like pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models, despite their ability to preserve the connectivity of the medium, have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Nonetheless, network-theory approaches, in which the pore space is represented as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. Here we present an alternative method to address these issues based on eigenvector centrality, which has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction to address the flow and transport anisotropy in porous media. We compare the model predictions with millifluidic transport experiments, showing that, albeit simple, this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. We propose to use the eigenvector centrality probability distribution to compute the entropy as an indicator of the "mixing capacity" of the system.
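
    As a concrete, deliberately tiny illustration of the centrality measure this record builds on, the sketch below computes plain eigenvector centrality by power iteration on a hypothetical 5-pore network, and the entropy of the resulting distribution; the paper's centralization correction and directional bias are not reproduced.

```python
import numpy as np

# Toy pore network (illustrative, not from the paper): 5 pores, entries of A
# are pore-throat connection weights.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

def eigenvector_centrality(A, tol=1e-12, max_iter=10_000):
    """Power iteration: the normalised iterates converge to the eigenvector
    of the largest eigenvalue for a connected, non-negative matrix."""
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(max_iter):
        y = A @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

c = eigenvector_centrality(A)
print(np.argmax(c))          # pore 2, the best-connected node

# Entropy of the centrality distribution, a "mixing capacity" indicator:
p = c / c.sum()
entropy = -(p * np.log(p)).sum()
```

    A highly connected pore receives a large centrality because its neighbours are themselves central, which is the property that lets the measure flag preferential paths without solving the discretized flow equations.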

  9. An adaptive left–right eigenvector evolution algorithm for vibration isolation control

    International Nuclear Information System (INIS)

    Wu, T Y

    2009-01-01

    The purpose of this research is to investigate the feasibility of utilizing an adaptive left and right eigenvector evolution (ALREE) algorithm for active vibration isolation. As depicted in the previous paper presented by Wu and Wang (2008 Smart Mater. Struct. 17 015048), the structural vibration behavior depends on both the disturbance rejection capability and mode shape distributions, which correspond to the left and right eigenvector distributions of the system, respectively. In this paper, a novel adaptive evolution algorithm is developed for finding the optimal combination of left–right eigenvectors of the vibration isolator, which is an improvement over the simultaneous left–right eigenvector assignment (SLREA) method proposed by Wu and Wang (2008 Smart Mater. Struct. 17 015048). The isolation performance index used in the proposed algorithm is defined by combining the orthogonality index of left eigenvectors and the modal energy ratio index of right eigenvectors. Through the proposed ALREE algorithm, both the left and right eigenvectors evolve such that the isolation performance index decreases, and therefore one can find the optimal combination of left–right eigenvectors of the closed-loop system for vibration isolation purposes. The optimal combination of left–right eigenvectors is then synthesized to determine the feedback gain matrix of the closed-loop system. The result of the active isolation control shows that the proposed method can be utilized to improve the vibration isolation performance compared with the previous approaches

  10. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Full Text Available Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
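
    The matrix-perturbation idea can be sketched as follows. This is not the paper's algorithm, only standard first-order perturbation theory for a symmetric matrix, applied to a small covariance update from a new sample (all figures synthetic), and checked against a full re-decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((500, n))
C = np.cov(X, rowvar=False)
evals, V = np.linalg.eigh(C)              # current eigen-pair estimates

# A new sample arrives: the covariance estimate moves by a small perturbation dC.
x = rng.standard_normal(n)
alpha = 0.01                              # update weight (forgetting factor)
dC = alpha * (np.outer(x, x) - C)

# First-order perturbation of the eigen-pairs
# (assumes a symmetric C with well-separated eigenvalues):
evals_new = evals.copy()
V_new = V.copy()
for i in range(n):
    vi = V[:, i]
    evals_new[i] += vi @ dC @ vi
    for j in range(n):
        if j != i:
            V_new[:, i] += (V[:, j] @ dC @ vi) / (evals[i] - evals[j]) * V[:, j]

exact = np.linalg.eigvalsh(C + dC)        # reference: full re-decomposition
err = np.max(np.abs(np.sort(evals_new) - exact))
print(err)                                # small first-order tracking error
```

    Because the update costs O(n^2) per eigen-pair instead of a full O(n^3) decomposition per sample, this kind of tracking is attractive in the recursive, sample-by-sample setting the abstract describes.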

  11. Multivariate analysis of eigenvalues and eigenvectors in tensor based morphometry

    Science.gov (United States)

    Rajagopalan, Vidya; Schwartzman, Armin; Hua, Xue; Leow, Alex; Thompson, Paul; Lepore, Natasha

    2015-01-01

    We develop a new algorithm to compute voxel-wise shape differences in tensor-based morphometry (TBM). As in standard TBM, we non-linearly register brain T1-weighted MRI data from a patient and control group to a template, and compute the Jacobian of the deformation fields. In standard TBM, the determinants of the Jacobian matrix at each voxel are statistically compared between the two groups. More recently, a multivariate extension of the statistical analysis involving the deformation tensors derived from the Jacobian matrices has been shown to improve statistical detection power [7]. However, multivariate methods comprising large numbers of variables are computationally intensive and may be subject to noise. In addition, the anatomical interpretation of results is sometimes difficult. Here instead, we analyze the eigenvalues and the eigenvectors of the Jacobian matrices. Our method is validated on brain MRI data from Alzheimer's patients and healthy elderly controls from the Alzheimer's Disease Neuroimaging Initiative database.

  12. Asymptotics of eigenvalues and eigenvectors of Toeplitz matrices

    Science.gov (United States)

    Böttcher, A.; Bogoya, J. M.; Grudsky, S. M.; Maximenko, E. A.

    2017-11-01

    Analysis of the asymptotic behaviour of the spectral characteristics of Toeplitz matrices as the dimension of the matrix tends to infinity has a history of over 100 years. For instance, quite a number of versions of Szegő's theorem on the asymptotic behaviour of eigenvalues and of the so-called strong Szegő theorem on the asymptotic behaviour of the determinants of Toeplitz matrices are known. Starting in the 1950s, the asymptotics of the maximum and minimum eigenvalues were actively investigated. However, investigation of the individual asymptotics of all the eigenvalues and eigenvectors of Toeplitz matrices started only quite recently: the first papers on this subject were published in 2009-2010. A survey of this new field is presented here. Bibliography: 55 titles.
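
    The flavour of these results can be seen in the classical tridiagonal example, where the individual eigenvalues are known in closed form and coincide exactly with samples of the matrix symbol on a uniform grid; the general Szegő-type asymptotics surveyed in the paper are far subtler. A minimal numerical check:

```python
import numpy as np

n = 200
# Symmetric tridiagonal Toeplitz matrix with symbol a(theta) = 2 + 2*cos(theta):
A = 2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

computed = np.linalg.eigvalsh(A)                  # ascending order

# For this banded case the eigenvalues are exactly the symbol sampled at
# theta_k = k*pi/(n+1), k = 1..n.
k = np.arange(1, n + 1)
exact = np.sort(2 + 2 * np.cos(k * np.pi / (n + 1)))

print(np.max(np.abs(computed - exact)))           # agreement to machine precision
```
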

  13. Normalised flood losses in Europe: 1970-2006

    Science.gov (United States)

    Barredo, J. I.

    2009-02-01

    This paper presents an assessment of normalised flood losses in Europe for the period 1970-2006. Normalisation provides an estimate of the losses that would occur if the floods from the past take place under current societal conditions. Economic losses from floods are the result of both societal and climatological factors. Failing to adjust for time-variant socio-economic factors produces loss amounts that are not directly comparable over time, but rather show an ever-growing trend for purely socio-economic reasons. This study has used available information on flood losses from the Emergency Events Database (EM-DAT) and the Natural Hazards Assessment Network (NATHAN). Following the conceptual approach of previous studies, we normalised flood losses by considering the effects of changes in population, wealth, and inflation at the country level. Furthermore, we removed inter-country price differences by adjusting the losses for purchasing power parities (PPP). We assessed normalised flood losses in 31 European countries. These include the member states of the European Union, Norway, Switzerland, Croatia, and the Former Yugoslav Republic of Macedonia. Results show no detectable sign of human-induced climate change in normalised flood losses in Europe. The observed increase in the original flood losses is mostly driven by societal factors.
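
    The normalisation described above multiplies each historical loss by adjustment factors for inflation, population and wealth per capita. A sketch with invented figures (not the paper's data; the paper additionally adjusts for purchasing power parities across countries):

```python
# Hypothetical numbers for illustration only.
def normalise_loss(loss, cpi_then, cpi_now, pop_then, pop_now,
                   wealth_pc_then, wealth_pc_now):
    """Adjust a historical loss for inflation, population growth and growth
    in real wealth per capita, so losses are comparable across years."""
    inflation = cpi_now / cpi_then
    population = pop_now / pop_then
    wealth = wealth_pc_now / wealth_pc_then
    return loss * inflation * population * wealth

# A 100-million flood loss in 1980, expressed under 2006 societal conditions:
norm = normalise_loss(100e6,
                      cpi_then=50, cpi_now=100,          # prices doubled
                      pop_then=9.0e6, pop_now=10.0e6,    # population +11%
                      wealth_pc_then=20_000, wealth_pc_now=30_000)
print(norm / 1e6)   # well above the unadjusted 100 million
```

    Failing to apply these factors is exactly what produces the spurious ever-growing loss trend the abstract warns about.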

  14. Protein Structure Recognition: From Eigenvector Analysis to Structural Threading Method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Haibo [Iowa State Univ., Ames, IA (United States)

    2003-01-01

    In this work, they try to understand the protein folding problem using the pair-wise hydrophobic interaction as the dominant interaction for the protein folding process. They found a strong correlation between amino acid sequences and the corresponding native structures of proteins. Applications of this correlation discussed in this dissertation include domain partition and a new structural threading method, as well as the performance of this method in the CASP5 competition. In the first part, they give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups is discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, they try to establish the correlation between the amino acid sequence and the corresponding native structure of the protein. This correlation was observed in the eigenvector study of the protein contact matrix. They believe the correlation is universal, thus it can be used in automatic partition of protein structures into folding domains. In the third part, they discuss a threading method based on the correlation between amino acid sequences and the dominant eigenvector of the structure contact matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method are discussed, along with a case of blind test prediction. In the appendix, they list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.

  15. Protein structure recognition: From eigenvector analysis to structural threading method

    Science.gov (United States)

    Cao, Haibo

    In this work, we try to understand the protein folding problem using the pair-wise hydrophobic interaction as the dominant interaction for the protein folding process. We found a strong correlation between the amino acid sequence and the corresponding native structure of the protein. Applications of this correlation discussed in this dissertation include domain partition and a new structural threading method, as well as the performance of this method in the CASP5 competition. In the first part, we give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups is discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, we try to establish the correlation between the amino acid sequence and the corresponding native structure of the protein. This correlation was observed in our eigenvector study of the protein contact matrix. We believe the correlation is universal, thus it can be used in automatic partition of protein structures into folding domains. In the third part, we discuss a threading method based on the correlation between the amino acid sequence and the dominant eigenvector of the structure contact matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method are discussed, along with a case of blind test prediction. In the appendix, we list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.

  16. Protein Structure Recognition: From Eigenvector Analysis to Structural Threading Method

    International Nuclear Information System (INIS)

    Haibo Cao

    2003-01-01

    In this work, they try to understand the protein folding problem using the pair-wise hydrophobic interaction as the dominant interaction for the protein folding process. They found a strong correlation between amino acid sequences and the corresponding native structures of proteins. Applications of this correlation discussed in this dissertation include domain partition and a new structural threading method, as well as the performance of this method in the CASP5 competition. In the first part, they give a brief introduction to the protein folding problem. Some essential knowledge and progress from other research groups is discussed. This part includes discussions of interactions among amino acid residues, the lattice HP model, and the designability principle. In the second part, they try to establish the correlation between the amino acid sequence and the corresponding native structure of the protein. This correlation was observed in the eigenvector study of the protein contact matrix. They believe the correlation is universal, thus it can be used in automatic partition of protein structures into folding domains. In the third part, they discuss a threading method based on the correlation between amino acid sequences and the dominant eigenvector of the structure contact matrix. A mathematically straightforward iteration scheme provides a self-consistent optimum global sequence-structure alignment. The computational efficiency of this method makes it possible to search whole protein structure databases for structural homology without relying on sequence similarity. The sensitivity and specificity of this method are discussed, along with a case of blind test prediction. In the appendix, they list the overall performance of this threading method in the CASP5 blind test in comparison with other existing approaches.

  17. A spatial-spectral approach for deriving high signal quality eigenvectors for remote sensing image transformations

    DEFF Research Database (Denmark)

    Rogge, Derek; Bachmann, Martin; Rivard, Benoit

    2014-01-01

    Spectral decorrelation (transformations) methods have long been used in remote sensing. Transformation of the image data onto eigenvectors that comprise physically meaningful spectral properties (signal) can be used to reduce the dimensionality of hyperspectral images as the number of spectrally...... distinct signal sources composing a given hyperspectral scene is generally much less than the number of spectral bands. Determining eigenvectors dominated by signal variance as opposed to noise is a difficult task. Problems also arise in using these transformations on large images, multiple flight...... and spectral subsampling to the data, which is accomplished by deriving a limited set of eigenvectors for spatially contiguous subsets. These subset eigenvectors are compiled together to form a new noise reduced data set, which is subsequently used to derive a set of global orthogonal eigenvectors. Data from...

  18. Laplacian eigenvectors of graphs Perron-Frobenius and Faber-Krahn type theorems

    CERN Document Server

    Biyikoğu, Türker; Stadler, Peter F

    2007-01-01

    Eigenvectors of graph Laplacians have not, to date, been the subject of expository articles and thus they may seem a surprising topic for a book. The authors propose two motivations for this new LNM volume: (1) There are fascinating subtle differences between the properties of solutions of Schrödinger equations on manifolds on the one hand, and their discrete analogs on graphs. (2) "Geometric" properties of (cost) functions defined on the vertex sets of graphs are of practical interest for heuristic optimization algorithms. The observation that the cost functions of quite a few of the well-studied combinatorial optimization problems are eigenvectors of associated graph Laplacians has prompted the investigation of such eigenvectors. The volume investigates the structure of eigenvectors and looks at the number of their sign graphs ("nodal domains"), Perron components, graphs with extremal properties with respect to eigenvectors. The Rayleigh quotient and rearrangement of graphs form the main methodology.

  19. Normalisation and weighting in life cycle assessment: quo vadis?

    DEFF Research Database (Denmark)

    Pizzol, Massimo; Laurent, Alexis; Sala, Serenella

    2017-01-01

    Purpose: Building on the rhetoric question “quo vadis?” (literally “Where are you going?”), this article critically investigates the state of the art of normalisation and weighting approaches within life cycle assessment. It aims at identifying purposes, current practises, pros and cons, as well...

  20. Eigenvectors and fixed point of non-linear operators

    Directory of Open Access Journals (Sweden)

    Giulio Trombetta

    2007-12-01

    Full Text Available Let X be a real infinite-dimensional Banach space and ψ a measure of noncompactness on X. Let Ω be a bounded open subset of X and A : Ω → X a ψ-condensing operator, which has no fixed points on ∂Ω. Then the fixed point index, ind(A,Ω), of A on Ω is defined (see, for example, [1] and [18]). In particular, if A is a compact operator, ind(A,Ω) agrees with the classical Leray-Schauder degree of I−A on Ω relative to the point 0, deg(I−A,Ω,0). The main aim of this note is to investigate boundary conditions under which the fixed point index of strict-ψ-contractive or ψ-condensing operators A : Ω → X is equal to zero. Correspondingly, results on eigenvectors and nonzero fixed points of k-ψ-contractive and ψ-condensing operators are obtained. In particular we generalize the Birkhoff-Kellogg theorem [4] and Guo's domain compression and expansion theorem [17]. The note is based mainly on the results contained in [7] and [8].

  1. AMDLIBF, IBM 360 Subroutine Library, Eigenvalues, Eigenvectors, Matrix Inversion

    International Nuclear Information System (INIS)

    Wang, Jesse Y.

    1980-01-01

    Description of problem or function: AMDLIBF is a subset of the IBM 360 Subroutine Library at the Applied Mathematics Division at Argonne. This subset includes library category F: Identification/Description: F152S F SYMINV: Invert sym. matrices, solve lin. systems; F154S A DOTP: Double plus precision accum. inner prod.; F156S F RAYCOR: Rayleigh corrections for eigenvalues; F161S F XTRADP: A fast extended precision inner product; F162S A XTRADP: Inner product of two DP real vectors; F202S F1 EIGEN: Eigen-system for real symmetric matrix; F203S F: Driver for F202S; F248S F RITZIT: Largest eigenvalue and vec. real sym. matrix; F261S F EIGINV: Inverse eigenvalue problem; F313S F CQZHES: Reduce cmplx matrices to upper Hess and tri; F314S F CQZVAL: Reduce complex matrix to upper Hess. form; F315S F CQZVEC: Eigenvectors of cmplx upper triang. syst.; F316S F CGG: Driver for complex general Eigen-problem; F402S F MATINV: Matrix inversion and sol. of linear eqns.; F403S F: Driver for F402S; F452S F CHOLLU,CHOLEQ: Sym. decomp. of pos. def. band matrices; F453S F MATINC: Inversion of complex matrices; F454S F CROUT: Solution of simultaneous linear equations; F455S F CROUTC: Sol. of simultaneous complex linear eqns.; F456S F1 DIAG: Integer preserving Gaussian elimination

  2. Decaying states as complex energy eigenvectors in generalized quantum mechanics

    International Nuclear Information System (INIS)

    Sudarshan, E.C.G.; Chiu, C.B.; Gorini, V.

    1977-04-01

    The problem of particle decay is reexamined within the Hamiltonian formalism. By deforming contours of integration, the survival amplitude is expressed as a sum of purely exponential contributions arising from the simple poles of the resolvent on the second sheet plus a background integral along a complex contour Γ running below the location of the poles. One observes that the time dependence of the survival amplitude in the small time region is strongly correlated to the asymptotic behaviour of the energy spectrum of the system; one computes the small time behavior of the survival amplitude for a wide variety of asymptotic behaviors. In the special case of the Lee model, using a formal procedure of analytic continuation, it is shown that a complete set of complex energy eigenvectors of the Hamiltonian can be associated with the poles of the resolvent and the background contour Γ. These poles and points along Γ correspond to the discrete and the continuum states, respectively. In this context, each unstable particle is associated with a well defined object, which is a discrete generalized eigenstate of the Hamiltonian having a complex eigenvalue, with its real and negative imaginary parts being the mass and half-width of the particle, respectively. Finally, one briefly discusses the analytic continuation of the scattering amplitude within this generalized scheme, and notes the appearance of "redundant poles" which do not correspond to discrete solutions of the modified eigenvalue problem

  3. Semi-supervised eigenvectors for large-scale locally-biased learning

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mahoney, Michael W.

    2014-01-01

    improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph......In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks nearby that prespecified target region. For example, one might......-based machine learning and data analysis tools. At root, the reason is that eigenvectors are inherently global quantities, thus limiting the applicability of eigenvector-based methods in situations where one is interested in very local properties of the data. In this paper, we address this issue by providing...

  4. On the raising and lowering difference operators for eigenvectors of the finite Fourier transform

    International Nuclear Information System (INIS)

    Atakishiyeva, M K; Atakishiyev, N M

    2015-01-01

    We construct explicit forms of raising and lowering difference operators that govern eigenvectors of the finite (discrete) Fourier transform. Some of the algebraic properties of these operators are also examined. (paper)

  5. Total body neutron activation analysis of calcium: calibration and normalisation

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, N S.J.; Eastell, R; Ferrington, C M; Simpson, J D; Strong, J A [Western General Hospital, Edinburgh (UK); Smith, M A; Tothill, P [Royal Infirmary, Edinburgh (UK)

    1982-05-01

    An irradiation system has been designed, using a neutron beam from a cyclotron, which optimises the uniformity of activation of calcium. Induced activity is measured in a scanning, shadow-shield whole-body counter. Calibration has been effected and reproducibility assessed with three different types of phantom. Corrections were derived for variations in body height, depth and fat thickness. The coefficient of variation for repeated measurements of an anthropomorphic phantom was 1.8% for an absorbed dose equivalent of 13 mSv (1.3 rem). Measurements of total body calcium in 40 normal adults were used to derive normalisation factors which predict the normal calcium in a subject of given size and age. The coefficient of variation of normalised calcium was 6.2% in men and 6.6% in women, with the demonstration of an annual loss of 1.5% after the menopause. The narrow range should make single measurements useful for diagnostic purposes.

  6. Guidelines for normalising Early Modern English corpora: Decisions and justifications

    Directory of Open Access Journals (Sweden)

    Archer Dawn

    2015-03-01

    Full Text Available Corpora of Early Modern English have been collected and released for research for a number of years. With large scale digitisation activities gathering pace in the last decade, much more historical textual data is now available for research on numerous topics including historical linguistics and conceptual history. We summarise previous research which has shown that it is necessary to map historical spelling variants to modern equivalents in order to successfully apply natural language processing and corpus linguistics methods. Manual and semiautomatic methods have been devised to support this normalisation and standardisation process. We argue that it is important to develop a linguistically meaningful rationale to achieve good results from this process. In order to do so, we propose a number of guidelines for normalising corpora and show how these guidelines have been applied in the Corpus of English Dialogues.

  7. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
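
    The geometric idea can be sketched for a hypothetical 2-D gravity gradient tensor on a profile (values invented for illustration, not field data): diagonalise the symmetric tensor and read off the dips of its eigenvectors from the horizontal.

```python
import numpy as np

# Hypothetical 2-D gravity gradient tensor (x: horizontal along the profile,
# z: depth); the values in Eotvos are illustrative only.
T = np.array([[ 30.0, -12.0],
              [-12.0, -30.0]])         # trace-free, as for a 2-D gradient tensor

evals, evecs = np.linalg.eigh(T)       # ascending: column 0 = min, column 1 = max
v_min, v_max = evecs[:, 0], evecs[:, 1]

def dip_deg(v):
    """Dip of an eigenvector, measured from the horizontal axis."""
    vx, vz = v
    return float(np.degrees(np.arctan2(abs(vz), abs(vx))))

# Per the abstract: the maximum eigenvector points toward the high-density body
# and its dip follows normal-fault dips; the minimum eigenvector points toward
# the low-density body and its dip follows reverse-fault dips.
print(dip_deg(v_max), dip_deg(v_min))  # the two dips are complementary
```

    Because the eigenvectors of a 2-D symmetric tensor are orthogonal, the two dips always sum to 90 degrees; which eigenvector to read is decided by the fault type, as the abstract emphasises.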

  8. Those Do What? Connecting Eigenvectors and Eigenvalues to the Rest of Linear Algebra: Using Visual Enhancements to Help Students Connect Eigenvectors to the Rest of Linear Algebra

    Science.gov (United States)

    Nyman, Melvin A.; Lapp, Douglas A.; St. John, Dennis; Berry, John S.

    2010-01-01

    This paper discusses student difficulties in grasping concepts from Linear Algebra--in particular, the connection of eigenvalues and eigenvectors to other important topics in linear algebra. Based on our prior observations from student interviews, we propose technology-enhanced instructional approaches that might positively impact student…

  9. Eigenvector centrality mapping for analyzing connectivity patterns in fMRI data of the human brain.

    Directory of Open Access Journals (Sweden)

    Gabriele Lohmann

    Full Text Available Functional magnetic resonance data acquired in a task-absent condition ("resting state") require new data analysis techniques that do not depend on an activation model. In this work, we introduce an alternative assumption- and parameter-free method based on a particular form of node centrality called eigenvector centrality. Eigenvector centrality attributes a value to each voxel in the brain such that a voxel receives a large value if it is strongly correlated with many other nodes that are themselves central within the network. Google's PageRank algorithm is a variant of eigenvector centrality. Thus far, other centrality measures - in particular "betweenness centrality" - have been applied to fMRI data using a pre-selected set of nodes consisting of several hundred elements. Eigenvector centrality is computationally much more efficient than betweenness centrality and does not require thresholding of similarity values so that it can be applied to thousands of voxels in a region of interest covering the entire cerebrum which would have been infeasible using betweenness centrality. Eigenvector centrality can be used on a variety of different similarity metrics. Here, we present applications based on linear correlations and on spectral coherences between fMRI time series. This latter approach allows us to draw conclusions of connectivity patterns in different spectral bands. We apply this method to fMRI data in task-absent conditions where subjects were in states of hunger or satiety. We show that eigenvector centrality is modulated by the state that the subjects were in. Our analyses demonstrate that eigenvector centrality is a computationally efficient tool for capturing intrinsic neural architecture on a voxel-wise level.
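
    A toy version of the correlation-based pipeline (synthetic "voxel" time series, vastly smaller than real fMRI data): build a non-negative similarity matrix from pairwise correlations and take its leading eigenvector as the centrality map. The shift to non-negative values stands in for whatever the method actually does to guarantee a Perron-type leading eigenvector; it is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 200, 6                        # time points and "voxels" (toy scale)
shared = rng.standard_normal(T)
ts = rng.standard_normal((T, n))
ts[:, :4] += 2.0 * shared[:, None]   # voxels 0-3 carry a strong common signal

S = np.corrcoef(ts, rowvar=False)    # similarity matrix: pairwise correlations
S = (S + 1.0) / 2.0                  # shift to [0, 1]: non-negative, unthresholded
np.fill_diagonal(S, 0.0)             # ignore self-similarity

evals, evecs = np.linalg.eigh(S)
c = np.abs(evecs[:, -1])             # leading eigenvector = centrality map
print(np.argsort(c)[::-1])           # the four correlated voxels rank highest
```

    The strongly inter-correlated voxels receive the largest centralities precisely because they are correlated with nodes that are themselves central, which is the recursive property the abstract describes.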

  10. Acceleration of criticality analysis solution convergence by matrix eigenvector for a system with weak neutron interaction

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi; Takada, Tomoyuki; Kuroishi, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kadotani, Hiroyuki [Shizuoka Sangyo Univ., Iwata, Shizuoka (Japan)

    2003-03-01

    In Monte Carlo calculations of the neutron multiplication factor for a system with weak neutron interaction, there can be problems with convergence of the solution. To address this difficulty in computer code calculations, a theoretical derivation was made in this report from the general neutron transport equation, and acceleration of solution convergence by means of the matrix eigenvector was considered. Accordingly, a matrix eigenvector calculation scheme, together with a procedure for accelerating convergence, was incorporated into the continuous energy Monte Carlo code MCNP. Furthermore, the effectiveness of accelerating solution convergence by the matrix eigenvector was ascertained with the results obtained by applying it to two OECD/NEA criticality analysis benchmark problems. (author)

  11. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, Musa

    1998-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods

  12. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.

    1997-01-01

We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (S_N) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such S_N problems. In the first method, separation of variables is directly applied to the S_N equations. In the second method, common characteristics of the S_N and P_(N-1) equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S_4 test problems are given to compare the new method with the existing methods. (author)

  13. Simple eigenvectors of unbounded operators of the type “normal plus compact”

    Directory of Open Access Journals (Sweden)

    Michael Gil'

    2015-01-01

The paper deals with operators of the form \(A=S+B\), where \(B\) is a compact operator in a Hilbert space \(H\) and \(S\) is an unbounded normal operator in \(H\) having a compact resolvent. We consider approximations of the eigenvectors of \(A\), corresponding to simple eigenvalues, by the eigenvectors of the operators \(A_n=S+B_n\) (\(n=1,2,\ldots\)), where \(B_n\) is an \(n\)-dimensional operator. In addition, we obtain an error estimate for the approximation.

  14. A Markov chain representation of the normalized Perron–Frobenius eigenvector

    OpenAIRE

    Cerf, Raphaël; Dalmau, Joseba

    2017-01-01

    We consider the problem of finding the Perron–Frobenius eigenvector of a primitive matrix. Dividing each of the rows of the matrix by the sum of the elements in the row, the resulting new matrix is stochastic. We give a formula for the normalized Perron–Frobenius eigenvector of the original matrix, in terms of a realization of the Markov chain defined by the associated stochastic matrix. This formula is a generalization of the classical formula for the invariant probability measure of a Marko...
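The abstract's construction starts from the row-normalised (stochastic) matrix. As a baseline illustration only (not the authors' Markov-chain formula), the normalised Perron-Frobenius eigenvector of a primitive matrix can also be computed directly by power iteration:

```python
import numpy as np

def perron_frobenius_eigenvector(A, tol=1e-12, max_iter=100_000):
    """Normalised (entries summing to one) Perron-Frobenius eigenvector of a
    primitive non-negative matrix, computed by plain power iteration."""
    v = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(max_iter):
        w = A @ v
        w /= w.sum()                  # keep the iterate a probability vector
        if np.abs(w - v).sum() < tol:
            return w
        v = w
    raise RuntimeError("power iteration did not converge")

A = np.array([[1.0, 2.0], [3.0, 4.0]])            # primitive (all positive)
P = A / A.sum(axis=1, keepdims=True)              # row-normalised matrix ...
assert np.allclose(P.sum(axis=1), 1.0)            # ... is stochastic

v = perron_frobenius_eigenvector(A)
lam = (A @ v)[0] / v[0]                           # Perron root estimate
assert np.allclose(A @ v, lam * v, atol=1e-8)
```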

  15. Eigenvectors determination of the ribosome dynamics model during mRNA translation using the Kleene Star algorithm

    Science.gov (United States)

    Ernawati; Carnia, E.; Supriatna, A. K.

    2018-03-01

Eigenvalues and eigenvectors in max-plus algebra have the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for knowing the dynamics of a system, such as in train system scheduling, scheduling of production systems and scheduling of learning activities in moving classes. In the translation of proteins, in which the ribosome moves uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and the density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors in the process of protein translation. In this paper an eigenvector formula is given for the ribosome dynamics during mRNA translation by using the Kleene star algorithm, in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix \(B_\lambda^{\otimes n}\) of the model. Among the important properties, it always has the same elements in the first column for n = 1, 2, … if the eigenvalue is the initiation time, \(\lambda = \tau_{in}\), and that column is the eigenvector of the model corresponding to \(\lambda\).
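A minimal max-plus sketch of the Kleene-star construction described above, with illustrative numbers rather than the ribosome model's actual parameters: the eigenvalue is the maximum cycle mean lambda, and any column of the Kleene star (A - lambda)* whose diagonal entry is 0 is an eigenvector.

```python
import numpy as np

NEG = -np.inf  # the max-plus "zero" element

def mp_mul(A, B):
    """Max-plus matrix product: (A x B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_eye(n):
    E = np.full((n, n), NEG)
    np.fill_diagonal(E, 0.0)       # max-plus identity: 0 on the diagonal
    return E

def max_cycle_mean(A):
    """Max-plus eigenvalue = maximum mean weight over all cycles
    (brute force over matrix powers; fine for small n)."""
    n = len(A)
    best, P = NEG, mp_eye(n)
    for k in range(1, n + 1):
        P = mp_mul(P, A)
        best = max(best, np.max(np.diag(P)) / k)
    return best

def kleene_star(A):
    """A* = I + A + A^2 + ... (max-plus; converges when all cycle means <= 0)."""
    n = len(A)
    S, P = mp_eye(n), mp_eye(n)
    for _ in range(n):
        P = mp_mul(P, A)
        S = np.maximum(S, P)       # elementwise max is the max-plus sum
    return S

A = np.array([[0.0, 3.0], [2.0, 1.0]])
lam = max_cycle_mean(A)                    # 2.5, from the cycle 0 -> 1 -> 0
S = kleene_star(A - lam)
v = S[:, 0]                                # a column with zero diagonal entry
assert np.allclose(mp_mul(A, v[:, None])[:, 0], lam + v)   # A x v = lam + v
```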

  16. On the Eigenvalues and Eigenvectors of Block Triangular Preconditioned Block Matrices

    KAUST Repository

    Pestana, Jennifer

    2014-01-01

    Block lower triangular matrices and block upper triangular matrices are popular preconditioners for 2×2 block matrices. In this note we show that a block lower triangular preconditioner gives the same spectrum as a block upper triangular preconditioner and that the eigenvectors of the two preconditioned matrices are related. © 2014 Society for Industrial and Applied Mathematics.
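The stated equivalence of spectra is easy to check numerically. The sketch below assumes the usual setup (my reading of the setting, not code from the note): both preconditioners share the diagonal blocks P1, P2 and take their off-diagonal block from the matrix itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C, D = rng.standard_normal((n, n)), rng.standard_normal((n, n))
P1 = rng.standard_normal((n, n)) + 5 * np.eye(n)   # keep blocks nonsingular
P2 = rng.standard_normal((n, n)) + 5 * np.eye(n)

T = np.block([[A, B], [C, D]])                     # the 2x2 block matrix
PL = np.block([[P1, np.zeros((n, n))], [C, P2]])   # block lower triangular
PU = np.block([[P1, B], [np.zeros((n, n)), P2]])   # block upper triangular

ev_L = np.sort_complex(np.linalg.eigvals(np.linalg.solve(PL, T)))
ev_U = np.sort_complex(np.linalg.eigvals(np.linalg.solve(PU, T)))
assert np.allclose(ev_L, ev_U, atol=1e-8)          # identical spectra
```

The determinant identity behind this: det(T - z PL) and det(T - z PU) have the same Schur-complement factorisation, so the generalized eigenvalues coincide even though the preconditioned matrices themselves differ.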

  17. Rules of Normalisation and their Importance for Interpretation of Systems of Optimal Taxation

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen

representation of the general equilibrium conditions the rules of normalisation in standard optimal tax models. This allows us to provide an intuitive explanation of what determines the optimal tax system. Finally, we review a number of examples where lack of precision with respect to normalisation in otherwise important contributions to the literature on optimal taxation has given rise to misinterpretations of analytical results.

  18. Normalisation of body composition parameters for nutritional assessment

    International Nuclear Information System (INIS)

    Preston, Thomas

    2014-01-01

Normalisation of body composition parameters to an index of body size facilitates comparison of a subject's measurements with those of a population. There is an obvious focus on indexes of obesity, but first it is informative to consider Fat Free Mass (FFM) in the context of common anthropometric measures of body size, namely height and weight. The contention is that FFM is a more physiological measure of body size than body mass. Many studies have shown that FFM relates to height^p. Although there is debate over the appropriate exponent, especially in early life, it appears to lie between 2 and 3. If 2, then FFM Index (FFMI; kg/m^2) and Fat Mass Index (FMI; kg/m^2) can be summed to give BMI. If 3 were used as exponent, then FFMI (kg/m^3) plus FMI (kg/m^3) gives the Ponderal Index (PI; weight/height^3). In 2013, Burton argued that a cubic exponent is appropriate for normalisation as it is a dimensionless quotient. In 2012, Wang and co-workers repeated earlier observations showing a strong linear relationship between FFM and height^3. The importance of the latter study comes from the fact that a 4-compartment body composition model was used, which is recognised as the most accurate means of describing FFM. Once the basis of a FFMI has been defined it can be used to compare measurements with those of a population, either directly, as a ratio to a norm or as a Z-score. FFMI charts could be developed for use in child growth. Other related indexes can be determined for use in specific circumstances such as: body cell mass index (growth and wasting); skeletal muscle mass index (SMMI) or appendicular SMMI (growth and sarcopenia); bone mineral mass index (osteoporosis); extracellular fluid index (hydration). Finally, it is logical that the same system is used to define an adiposity index, so Fat Mass Index (FMI; kg/height^3) can be used as it is consistent with FFMI (kg/height^3) and PI. It should also be noted that the index FM/FFM describes an individual
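The arithmetic behind the summing property is simply that indices sharing the same height exponent add. A sketch with hypothetical subject values (not data from the abstract):

```python
def indices(fat_mass_kg, fat_free_mass_kg, height_m, p=2):
    """Fat mass index and fat-free mass index with height exponent p.
    p = 2: FMI + FFMI = BMI;  p = 3: FMI + FFMI = Ponderal Index."""
    fmi = fat_mass_kg / height_m ** p
    ffmi = fat_free_mass_kg / height_m ** p
    return fmi, ffmi

fm, ffm, h = 15.0, 60.0, 1.75              # hypothetical subject
fmi2, ffmi2 = indices(fm, ffm, h, p=2)
bmi = (fm + ffm) / h ** 2
assert abs((fmi2 + ffmi2) - bmi) < 1e-12   # the indices sum to BMI when p = 2
```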

  19. Inference of financial networks using the normalised mutual information rate

    Science.gov (United States)

    2018-01-01

    In this paper, we study data from financial markets, using the normalised Mutual Information Rate. We show how to use it to infer the underlying network structure of interrelations in the foreign currency exchange rates and stock indices of 15 currency areas. We first present the mathematical method and discuss its computational aspects, and apply it to artificial data from chaotic dynamics and to correlated normal-variates data. We then apply the method to infer the structure of the financial system from the time-series of currency exchange rates and stock indices. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, of which we also study their structural properties. Our results show that both inferred networks are small-world networks, sharing similar properties and having differences in terms of assortativity. Importantly, our work shows that global economies tend to connect with other economies world-wide, rather than creating small groups of local economies. Finally, the consistent interrelations depicted among the 15 currency areas are further supported by a discussion from the viewpoint of economics. PMID:29420644

  20. Inference of financial networks using the normalised mutual information rate.

    Science.gov (United States)

    Goh, Yong Kheng; Hasim, Haslifah M; Antonopoulos, Chris G

    2018-01-01

    In this paper, we study data from financial markets, using the normalised Mutual Information Rate. We show how to use it to infer the underlying network structure of interrelations in the foreign currency exchange rates and stock indices of 15 currency areas. We first present the mathematical method and discuss its computational aspects, and apply it to artificial data from chaotic dynamics and to correlated normal-variates data. We then apply the method to infer the structure of the financial system from the time-series of currency exchange rates and stock indices. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, of which we also study their structural properties. Our results show that both inferred networks are small-world networks, sharing similar properties and having differences in terms of assortativity. Importantly, our work shows that global economies tend to connect with other economies world-wide, rather than creating small groups of local economies. Finally, the consistent interrelations depicted among the 15 currency areas are further supported by a discussion from the viewpoint of economics.

  1. Quasars in the 4D Eigenvector 1 Context: a stroll down memory lane

    Science.gov (United States)

    Sulentic, Jack; Marziani, Paola

    2015-10-01

Recently some pessimism has been expressed about our lack of progress in understanding quasars over more than fifty years since their discovery. It is worthwhile to look back at some of the progress that has been made - but still lies under the radar - perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door towards a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  2. Quasars in the 4D Eigenvector 1 Context: a stroll down memory lane

    Directory of Open Access Journals (Sweden)

    Jack W. Sulentic

    2015-10-01

Recently some pessimism has been expressed about our lack of progress in understanding quasars over more than fifty years since their discovery. It is worthwhile to look back at some of the progress that has been made – but still lies under the radar – perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door towards a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  3. Quasars in the 4D eigenvector 1 context: a stroll down memory lane

    International Nuclear Information System (INIS)

    Sulentic, Jack W.; Marziani, Paola

    2015-01-01

Recently some pessimism has been expressed about our lack of progress in understanding quasars over the 50+ years since their discovery (Antonucci, 2013). It is worthwhile to look back at some of the progress that has been made—but still lies under the radar—perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door toward a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  4. Quasars in the 4D eigenvector 1 context: a stroll down memory lane

    Energy Technology Data Exchange (ETDEWEB)

    Sulentic, Jack W. [Instituto de Astrofísica de Andalucía-Consejo Superior de Investigaciones Científicas, Granada (Spain); Marziani, Paola, E-mail: paola.marziani@oapd.inaf.it [Istituto Nazionale di Astrofisica, Osservatorio Astronomico di Padova, Padova (Italy)

    2015-10-13

Recently some pessimism has been expressed about our lack of progress in understanding quasars over the 50+ years since their discovery (Antonucci, 2013). It is worthwhile to look back at some of the progress that has been made—but still lies under the radar—perhaps because few people are working on optical/UV spectroscopy in this field. Great advances in understanding quasar phenomenology have emerged using eigenvector techniques. The 4D eigenvector 1 context provides a surrogate H-R Diagram for quasars with a source main sequence driven by Eddington ratio convolved with line-of-sight orientation. Appreciating the striking differences between quasars at opposite ends of the main sequence (so-called population A and B sources) opens the door toward a unified model of quasar physics, geometry and kinematics. We present a review of some of the progress that has been made over the past 15 years, and point out unsolved issues.

  5. Introducing carrying capacity-based normalisation in LCA: framework and development of references at midpoint level

    DEFF Research Database (Denmark)

    Bjørn, Anders; Hauschild, Michael Zwicky

    2015-01-01

    carrying capacity-based normalisation references. The purpose of this article is to present a framework for normalisation against carrying capacity-based references and to develop average normalisation references (NR) for Europe and the world for all those midpoint impact categories commonly included....... A literature review was carried out to identify scientifically sound thresholds for each impact category. Carrying capacities were then calculated from these thresholds and expressed in metrics identical to midpoint indicators giving priority to those recommended by ILCD. NR was expressed as the carrying...... ozone formation and soil quality were found to exceed carrying capacities several times.The developed carrying capacity-based normalisation references offer relevant supplementary reference information to the currently applied references based on society’s background interventions by supporting...

  6. Correlation of errors in the Monte Carlo fission source and the fission matrix fundamental-mode eigenvector

    International Nuclear Information System (INIS)

    Dufek, Jan; Holst, Gustaf

    2016-01-01

    Highlights: • Errors in the fission matrix eigenvector and fission source are correlated. • The error correlations depend on coarseness of the spatial mesh. • The error correlations are negligible when the mesh is very fine. - Abstract: Previous studies raised a question about the level of a possible correlation of errors in the cumulative Monte Carlo fission source and the fundamental-mode eigenvector of the fission matrix. A number of new methods tally the fission matrix during the actual Monte Carlo criticality calculation, and use its fundamental-mode eigenvector for various tasks. The methods assume the fission matrix eigenvector is a better representation of the fission source distribution than the actual Monte Carlo fission source, although the fission matrix and its eigenvectors do contain statistical and other errors. A recent study showed that the eigenvector could be used for an unbiased estimation of errors in the cumulative fission source if the errors in the eigenvector and the cumulative fission source were not correlated. Here we present new numerical study results that answer the question about the level of the possible error correlation. The results may be of importance to all methods that use the fission matrix. New numerical tests show that the error correlation is present at a level which strongly depends on properties of the spatial mesh used for tallying the fission matrix. The error correlation is relatively strong when the mesh is coarse, while the correlation weakens as the mesh gets finer. We suggest that the coarseness of the mesh is measured in terms of the value of the largest element in the tallied fission matrix as that way accounts for the mesh as well as system properties. In our test simulations, we observe only negligible error correlations when the value of the largest element in the fission matrix is about 0.1. Relatively strong error correlations appear when the value of the largest element in the fission matrix raises

  7. An exploration of diffusion tensor eigenvector variability within human calf muscles.

    Science.gov (United States)

    Rockel, Conrad; Noseworthy, Michael D

    2016-01-01

To explore the effect of diffusion tensor imaging (DTI) acquisition parameters on principal and minor eigenvector stability within human lower leg skeletal muscles. Lower leg muscles were evaluated in seven healthy subjects at 3T using an 8-channel transmit/receive coil. Diffusion-encoding was performed with nine signal averages (NSA) using 6, 15, and 25 directions (NDD). Individual DTI volumes were combined into aggregate volumes of 3, 2, and 1 NSA according to number of directions. Tensor eigenvalues (λ1, λ2, λ3), eigenvectors (ε1, ε2, ε3), and DTI metrics (fractional anisotropy [FA] and mean diffusivity [MD]) were calculated for each combination of NSA and NDD. Spatial maps of signal-to-noise ratio (SNR), λ3:λ2 ratio, and zenith angle were also calculated for region of interest (ROI) analysis of vector orientation consistency. ε1 variability was only moderately related to ε2 variability (r = 0.4045). Variation of ε1 was affected by NDD, not NSA (P < 0.0002), while variation of ε2 was affected by NSA, not NDD (P < 0.0003). In terms of tensor shape, vector variability was weakly related to FA (ε1: r = -0.1854, ε2: ns), but had a stronger relation to the λ3:λ2 ratio (ε1: r = -0.5221, ε2: r = -0.1771). Vector variability was also weakly related to SNR (ε1: r = -0.2873, ε2: r = -0.3483). Zenith angle was found to be strongly associated with variability of ε1 (r = 0.8048) but only weakly with that of ε2 (r = 0.2135). The second eigenvector (ε2) displayed higher directional variability relative to ε1, and was only marginally affected by experimental conditions that impacted ε1 variability. © 2015 Wiley Periodicals, Inc.
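For a single voxel, the quantities analysed in the study (sorted eigenvalues, FA, MD, the λ3:λ2 ratio and the zenith angle of ε1) follow directly from an eigendecomposition of the diffusion tensor. A sketch using standard DTI formulas and a hypothetical tensor, not the authors' processing pipeline:

```python
import numpy as np

def dti_metrics(D, z_axis=np.array([0.0, 0.0, 1.0])):
    """Scalar metrics from one diffusion tensor: eigenvalues sorted
    descending, mean diffusivity (MD), fractional anisotropy (FA), the
    lambda3:lambda2 ratio, and the zenith angle of the principal
    eigenvector (angle to the through-plane axis, in degrees)."""
    lam, V = np.linalg.eigh(D)            # ascending for a symmetric tensor
    lam, V = lam[::-1], V[:, ::-1]        # reorder: lam1 >= lam2 >= lam3
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    ratio32 = lam[2] / lam[1]
    e1 = V[:, 0]
    zenith = np.degrees(np.arccos(abs(e1 @ z_axis)))   # sign-invariant
    return lam, md, fa, ratio32, zenith

# hypothetical tensor: strongly anisotropic, principal axis along z
D = np.diag([0.5e-3, 0.7e-3, 1.8e-3])
lam, md, fa, ratio32, zenith = dti_metrics(D)
assert np.allclose(lam, [1.8e-3, 0.7e-3, 0.5e-3])
assert zenith < 1e-6                       # e1 aligned with the zenith axis
```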

  8. Heisenberg XXX Model with General Boundaries: Eigenvectors from Algebraic Bethe Ansatz

    Directory of Open Access Journals (Sweden)

    Samuel Belliard

    2013-11-01

We propose a generalization of the algebraic Bethe ansatz to obtain the eigenvectors of the Heisenberg spin chain with general boundaries, associated to the eigenvalues and the Bethe equations found recently by Cao et al. The ansatz takes the usual form of a product of operators acting on a particular vector, except that the number of operators is equal to the length of the chain. We prove this result for chains with small length. We obtain also an off-shell equation (i.e. satisfied without the Bethe equations) formally similar to the ones obtained in the periodic case or with diagonal boundaries.

  9. Experimentation of Eigenvector Dynamics in a Multiple Input Multiple Output Channel in the 5GHz Band

    DEFF Research Database (Denmark)

    Brown, Tim; Eggers, Patrick Claus F.; Katz, Marcos

    2005-01-01

Much research has been carried out on the production of both physical and non-physical Multiple Input Multiple Output channel models with regard to increased channel capacity as well as analysis of eigenvalues through the use of singular value decomposition. Little attention has been paid to the analysis of vector dynamics in terms of how the state of eigenvectors will change as a mobile is moving through a changing physical environment. This is important in terms of being able to track the orthogonal eigenmodes at system level, while also relieving the burden of tracking of the full channel

  10. Computation of dominant eigenvalues and eigenvectors: A comparative study of algorithms

    International Nuclear Information System (INIS)

    Nightingale, M.P.; Viswanath, V.S.; Mueller, G.

    1993-01-01

We investigate two widely used recursive algorithms for the computation of eigenvectors with extreme eigenvalues of large symmetric matrices - the modified Lanczos method and the conjugate-gradient method. The goal is to establish a connection between their underlying principles and to evaluate their performance in applications to Hamiltonian and transfer matrices of selected model systems of interest in condensed matter physics and statistical mechanics. The conjugate-gradient method is found to converge more rapidly for understandable reasons, while storage requirements are the same for both methods.
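As a point of reference for the kind of recursive algorithm compared in the study, a bare-bones Lanczos iteration (without the reorthogonalisation refinements or the conjugate-gradient variant discussed there) estimates extreme eigenvalues from a small tridiagonal matrix:

```python
import numpy as np

def lanczos_extreme(A, m=40, seed=0):
    """Estimate the extreme eigenvalues of a symmetric matrix with a basic
    m-step Lanczos recursion (no reorthogonalisation; fine for a sketch)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(m):
        v = A @ q - beta * q_prev        # three-term Lanczos recurrence
        alpha = q @ v
        v -= alpha * q
        beta = np.linalg.norm(v)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                 # invariant subspace found
            break
        q_prev, q = q, v / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    ritz = np.linalg.eigvalsh(T)         # Ritz values of the Krylov space
    return ritz[0], ritz[-1]

# test matrix with known, well-separated extreme eigenvalues -10 and 10
rng = np.random.default_rng(1)
vals = np.concatenate(([-10.0], np.linspace(0.0, 1.0, 98), [10.0]))
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
M = Q @ np.diag(vals) @ Q.T
lo, hi = lanczos_extreme(M, m=40)
assert abs(lo + 10.0) < 1e-8 and abs(hi - 10.0) < 1e-8
```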

  11. Geometric and topological characterization of porous media: insights from eigenvector centrality

    Science.gov (United States)

    Jimenez-Martinez, J.; Negre, C.

    2017-12-01

    Solving flow and transport through complex geometries such as porous media involves an extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized like a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality. It has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction which allows considering the flow and transport anisotropy in porous media. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.
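Plain eigenvector centrality, the starting point that the authors correct and bias, is just the leading eigenvector of the adjacency matrix of the pore network. A sketch on a toy pore network (hypothetical topology, not the millifluidic geometry of the study):

```python
import numpy as np

def eigenvector_centrality(adj, tol=1e-12, max_iter=1000):
    """Eigenvector centrality of a connected, non-bipartite undirected graph:
    the leading eigenvector of the adjacency matrix, found by power
    iteration and normalised to unit 1-norm (Perron-Frobenius guarantees
    a positive leading eigenvector)."""
    x = np.ones(adj.shape[0])
    for _ in range(max_iter):
        y = adj @ x
        y /= y.sum()
        if np.abs(y - x).sum() < tol:
            return y
        x = y
    raise RuntimeError("power iteration did not converge")

# toy pore network: pore 0 is a hub joined to pores 1-3; 3-4 is a dead end
adj = np.array([[0, 1, 1, 1, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 0, 0],
                [1, 0, 0, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
c = eigenvector_centrality(adj)
assert c.argmax() == 0            # the hub pore is the most central
assert c.argmin() == 4            # the dead-end pore is the least central
```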

  12. Application Research of the Sparse Representation of Eigenvector on the PD Positioning in the Transformer Oil

    Directory of Open Access Journals (Sweden)

    Qing Xie

    2016-01-01

The partial discharge (PD) detection of electrical equipment is important for the safe operation of the power system. The ultrasonic signal generated by a PD in the oil is a broadband signal. However, most array signal processing methods at present are designed for narrowband signals, and their performance on wideband signals is not satisfactory. Therefore, it is necessary to find new broadband signal processing methods to improve the detection ability for the PD source. In this paper, a direction of arrival (DOA) estimation method based on sparse representation of the eigenvector is proposed, and this method can further reduce noise interference. Moreover, the simulation results show that this direction finding method is feasible for broadband signals and thus improves the subsequent positioning accuracy of the three-array localization method. Experimental results verify that the direction finding method based on sparse representation of the eigenvector is feasible for the ultrasonic array, achieving accurate estimation of the direction of arrival and improving the subsequent positioning accuracy. This can provide important guidance information for equipment maintenance in practical applications.

  13. Normalisation of spot urine samples to 24-h collection for assessment of exposure to uranium

    International Nuclear Information System (INIS)

    Marco, R.; Katorza, E.; Gonen, R.; German, U.; Tshuva, A.; Pelled, O.; Paz-tal, O.; Adout, A.; Karpas, Z.

    2008-01-01

For dose assessment of workers at Nuclear Research Center Negev exposed to natural uranium, spot urine samples are analysed and the results are normalised to 24-h urine excretion based on a 'standard' man urine volume of 1.6 l d^-1. In the present work, the urine volume, uranium level and creatinine concentration were determined in two or three 24-h urine collections from 133 male workers (319 samples) and 33 female workers (88 samples). Three volunteers provided urine spot samples from each voiding during a 24-h period, and a good correlation was found between the relative level of creatinine and uranium in spot samples collected from the same individual. The results show that normalisation of uranium concentration to creatinine in a spot sample represents the 24-h content of uranium better than normalisation to the standard volume and may be used to reduce the uncertainty of dose assessment based on spot samples. (authors)
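Creatinine normalisation of a spot sample amounts to rescaling by the urine volume that would carry one day's creatinine output. A sketch with hypothetical numbers; the assumed daily creatinine excretion is an illustrative population value, not a figure from the paper:

```python
def normalise_spot_uranium(u_spot_ng_per_l, creat_spot_g_per_l,
                           expected_daily_creatinine_g=1.7):
    """Estimate 24-h uranium excretion (ng/day) from a spot sample by
    scaling with creatinine: the spot volume that would contain one day's
    creatinine is assumed to carry one day's uranium.
    expected_daily_creatinine_g is an assumed population value."""
    inferred_daily_volume_l = expected_daily_creatinine_g / creat_spot_g_per_l
    return u_spot_ng_per_l * inferred_daily_volume_l

# hypothetical spot sample: 20 ng/l uranium, 1.0 g/l creatinine
daily_u = normalise_spot_uranium(20.0, 1.0)
assert abs(daily_u - 34.0) < 1e-9     # 20 ng/l * 1.7 l/day = 34 ng/day
```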

  14. A novel approach to signal normalisation in atmospheric pressure ionisation mass spectrometry.

    Science.gov (United States)

    Vogeser, Michael; Kirchhoff, Fabian; Geyer, Roland

    2012-07-01

The aim of our study was to test an alternative principle of signal normalisation in LC-MS/MS. During analyses, post-column infusion of the target analyte is done via a T-piece, generating an "area under the analyte peak" (AUP). The ratio of peak area to AUP is assessed as the assay response. Acceptable analytical performance of this principle was found for an exemplary analyte. Post-column infusion may allow normalisation of ion suppression without requiring any additional standard compound. This approach can be useful in situations where no appropriate compound is available for classical internal standardisation. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Detection of multiple damages employing best achievable eigenvectors under Bayesian inference

    Science.gov (United States)

    Prajapat, Kanta; Ray-Chaudhuri, Samit

    2018-05-01

A novel approach is presented in this work to localize simultaneously multiple damaged elements in a structure along with the estimation of damage severity for each of the damaged elements. For detection of damaged elements, a best achievable eigenvector based formulation has been derived. To deal with noisy data, Bayesian inference is employed in the formulation, wherein the likelihood of the Bayesian algorithm is formed on the basis of errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of various possible damaged elements. Once damage locations are identified, damage severities are estimated using a Bayesian inference Markov chain Monte Carlo simulation. The efficiency of the proposed approach has been demonstrated by carrying out a numerical study involving a 12-story shear building. It has been found from this study that damage scenarios involving as low as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when 2% noise contaminated modal data are utilized. Further, this study introduces a term, parameter impact (evaluated based on the sensitivity of modal parameters towards structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It has been demonstrated here that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.

  16. On a q-extension of Mehta's eigenvectors of the finite Fourier transform for q a root of unity

    OpenAIRE

    Atakishiyeva, Mesuma K.; Atakishiyev, Natig M.; Koornwinder, Tom H.

    2008-01-01

    It is shown that the continuous q-Hermite polynomials for q a root of unity have simple transformation properties with respect to the classical Fourier transform. This result is then used to construct q-extended eigenvectors of the finite Fourier transform in terms of these polynomials.

  17. q-Extension of Mehta's eigenvectors of the finite Fourier transform for q, a root of unity

    NARCIS (Netherlands)

    Atakishiyeva, M.K.; Atakishiyev, N.M.; Koornwinder, T.H.

    2009-01-01

    It is shown that the continuous q-Hermite polynomials for q, a root of unity, have simple transformation properties with respect to the classical Fourier transform. This result is then used to construct q-extended eigenvectors of the finite Fourier transform in terms of these polynomials.

  18. A normalisation for the four-detector system for gamma-gamma angular correlation studies

    International Nuclear Information System (INIS)

    Kiang, G.C.; Chen, C.H.; Niu, W.F.

    1994-01-01

A normalisation method for the multiple-HPGe-detector system is described. The system consists of four coaxial HPGe detectors with a CAMAC event-by-event data acquisition system, enabling six gamma-gamma angular coincidences to be measured simultaneously. An application to gamma-gamma correlation studies of 82Kr is presented and discussed. 3 figs., 6 refs. (author)

  19. Normalisation of the peaceful use of nuclear energy - consequences for its legal regulation

    International Nuclear Information System (INIS)

    Birkhofer, A.; Lukes, R.

    1985-01-01

The five reports in this book deal with the importance of the peaceful use of nuclear energy, as well as with several aspects of normalisation. The range of the reports underlines the case for supporting the peaceful use of nuclear energy. (WG) [de]

  20. Normalised quantitative polymerase chain reaction for diagnosis of tuberculosis-associated uveitis.

    Science.gov (United States)

    Barik, Manas Ranjan; Rath, Soveeta; Modi, Rohit; Rana, Rajkishori; Reddy, Mamatha M; Basu, Soumyava

    2018-05-01

    Polymerase chain reaction (PCR)-based diagnosis of tuberculosis-associated uveitis (TBU) in TB-endemic countries is challenging due to likelihood of latent mycobacterial infection in both immune and non-immune cells. In this study, we investigated normalised quantitative PCR (nqPCR) in ocular fluids (aqueous/vitreous) for diagnosis of TBU in a TB-endemic population. Mycobacterial copy numbers (mpb64 gene) were normalised to host genome copy numbers (RNAse P RNA component H1 [RPPH1] gene) in TBU (n = 16) and control (n = 13) samples (discovery cohort). The mpb64:RPPH1 ratios (normalised value) from each TBU and control sample were tested against the current reference standard i.e. clinically-diagnosed TBU, to generate Receiver Operating Characteristic (ROC) curves. The optimum cut-off value of mpb64:RPPH1 ratio (0.011) for diagnosing TBU was identified from the highest Youden index. This cut-off value was then tested in a different cohort of TBU and controls (validation cohort, 20 cases and 18 controls), where it yielded specificity, sensitivity and diagnostic accuracy of 94.4%, 85.0%, and 89.4% respectively. The above values for conventional quantitative PCR (≥1 copy of mpb64 per reaction) were 61.1%, 90.0%, and 74.3% respectively. Normalisation markedly improved the specificity and diagnostic accuracy of quantitative PCR for diagnosis of TBU. Copyright © 2018 Elsevier Ltd. All rights reserved.
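The cutoff selection described above (maximising the Youden index over an ROC sweep of the mpb64:RPPH1 ratio) can be sketched as follows. The ratios below are synthetic and the resulting threshold is illustrative, not the study's 0.011:

```python
def best_cutoff(case_ratios, control_ratios):
    """Pick the decision threshold on the normalised ratio that maximises
    the Youden index J = sensitivity + specificity - 1."""
    candidates = sorted(set(case_ratios) | set(control_ratios))
    best = None
    for t in candidates:
        sens = sum(r >= t for r in case_ratios) / len(case_ratios)
        spec = sum(r < t for r in control_ratios) / len(control_ratios)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best

cases = [0.30, 0.15, 0.05, 0.02, 0.9]        # hypothetical TBU ratios
controls = [0.001, 0.004, 0.0, 0.02, 0.008]  # hypothetical control ratios
j, t, sens, spec = best_cutoff(cases, controls)
assert t == 0.02                             # first threshold with maximal J
```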

  1. Relationships between the normalised difference vegetation index and temperature fluctuations in post-mining sites

    Czech Academy of Sciences Publication Activity Database

    Bujalský, L.; Jirka, V.; Zemek, František; Frouz, J.

    2018-01-01

Vol. 32, No. 4 (2018), pp. 254-263 ISSN 1748-0930 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:67179843 Keywords: temperature * normalised difference * vegetation index (NDVI) * vegetation cover * remote sensing Subject RIV: DF - Soil Science Impact factor: 1.078, year: 2016

  2. An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants

    DEFF Research Database (Denmark)

    Møller, Jesper; Pettitt, A. N.; Reeves, R.

    2006-01-01

    Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable metho...

  3. Normalisation et certification dans le photovoltaïque: perspectives juridiques.

    OpenAIRE

    Boy , Laurence

    2012-01-01

Legal approach to standardisation and certification in the photovoltaic sector in France. Sources of law. Stakeholders' liabilities. Competition aspects.

  4. Bounded real and positive real balanced truncation using Σ-normalised coprime factors

    NARCIS (Netherlands)

    Trentelman, H.L.

    2009-01-01

In this article, we will extend the method of balanced truncation using normalised right coprime factors of the system transfer matrix to balanced truncation with preservation of half line dissipativity. Special cases are preservation of positive realness and bounded realness. We consider a half ...

  5. Random forest meteorological normalisation models for Swiss PM10 trend analysis

    Science.gov (United States)

    Grange, Stuart K.; Carslaw, David C.; Lewis, Alastair C.; Boleti, Eirini; Hueglin, Christoph

    2018-05-01

    Meteorological normalisation is a technique which accounts for changes in meteorology over time in an air quality time series. Controlling for such changes helps support robust trend analysis because there is more certainty that the observed trends are due to changes in emissions or chemistry, not changes in meteorology. Predictive random forest models (RF; a decision tree machine learning technique) were grown for 31 air quality monitoring sites in Switzerland using surface meteorological, synoptic scale, boundary layer height, and time variables to explain daily PM10 concentrations. The RF models were used to calculate meteorologically normalised trends which were formally tested and evaluated using the Theil-Sen estimator. Between 1997 and 2016, significantly decreasing normalised PM10 trends ranged between -0.09 and -1.16 µg m-3 yr-1 with urban traffic sites experiencing the greatest mean decrease in PM10 concentrations at -0.77 µg m-3 yr-1. Similar magnitudes have been reported for normalised PM10 trends for earlier time periods in Switzerland which indicates PM10 concentrations are continuing to decrease at similar rates as in the past. The ability for RF models to be interpreted was leveraged using partial dependence plots to explain the observed trends and relevant physical and chemical processes influencing PM10 concentrations. Notably, two regimes were suggested by the models which cause elevated PM10 concentrations in Switzerland: one related to poor dispersion conditions and a second resulting from high rates of secondary PM generation in deep, photochemically active boundary layers. The RF meteorological normalisation process was found to be robust, user friendly and simple to implement, and readily interpretable which suggests the technique could be useful in many air quality exploratory data analysis situations.
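The trend test named in the record above, the Theil-Sen estimator, is simple enough to sketch directly: the slope estimate is the median of all pairwise slopes, which makes it robust to the outliers that remain even in a meteorologically normalised series. A minimal stdlib-only illustration with an invented series follows.

```python
# Theil-Sen slope estimator: median of all pairwise slopes.
from itertools import combinations
from statistics import median

def theil_sen_slope(x, y):
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    return median(slopes)

# A synthetic "normalised PM10" series falling 0.8 ug m-3 per year, roughly
# the magnitude reported for Swiss urban traffic sites (data invented):
years = list(range(1997, 2017))
pm10 = [30.0 - 0.8 * (t - 1997) for t in years]
pm10[5] += 6.0  # a single outlier barely moves the median of slopes
print(round(theil_sen_slope(years, pm10), 2))  # -0.8
```

An ordinary least-squares fit through the same data would be pulled noticeably by the injected outlier; the median-of-slopes construction is why Theil-Sen is the usual choice for formally testing air quality trends.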

  6. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data

    Directory of Open Access Journals (Sweden)

    Jingyi Zhang

    2018-06-01

Full Text Available This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of planetary boundary layer and digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses of the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.

  7. Eigenvector Spatial Filtering Regression Modeling of Ground PM2.5 Concentrations Using Remotely Sensed Data.

    Science.gov (United States)

    Zhang, Jingyi; Li, Bin; Chen, Yumin; Chen, Meijie; Fang, Tao; Liu, Yongfeng

    2018-06-11

This paper proposes a regression model using the Eigenvector Spatial Filtering (ESF) method to estimate ground PM2.5 concentrations. Covariates are derived from remotely sensed data including aerosol optical depth, normalized difference vegetation index, surface temperature, air pressure, relative humidity, height of planetary boundary layer and digital elevation model. In addition, cultural variables such as factory densities and road densities are also used in the model. With the Yangtze River Delta region as the study area, we constructed ESF-based Regression (ESFR) models at different time scales, using data for the period between December 2015 and November 2016. We found that the ESFR models effectively filtered spatial autocorrelation in the OLS residuals and resulted in increases in the goodness-of-fit metrics as well as reductions in residual standard errors and cross-validation errors, compared to the classic OLS models. The annual ESFR model explained 70% of the variability in PM2.5 concentrations, 16.7% more than the non-spatial OLS model. With the ESFR models, we performed detailed analyses of the spatial and temporal distributions of PM2.5 concentrations in the study area. The model predictions are lower than ground observations but match the general trend. The experiment shows that ESFR provides a promising approach to PM2.5 analysis and prediction.
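The core of the ESF method named in the two records above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: candidate spatial filters are eigenvectors of the doubly-centred spatial weights matrix M C M (with M = I - 11'/n), and here the leading one is extracted by power iteration from an invented 4-site adjacency matrix.

```python
import math

def double_centre(C):
    """Compute M C M, the doubly-centred spatial weights matrix."""
    n = len(C)
    row = [sum(r) / n for r in C]
    col = [sum(C[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[C[i][j] - row[i] - col[j] + tot for j in range(n)]
            for i in range(n)]

def leading_eigenvector(A, iters=500):
    """Power iteration: repeatedly multiply and renormalise a start vector."""
    n = len(A)
    v = [1.0] + [0.5] * (n - 1)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Adjacency for four monitoring sites on a line: 1-2-3-4 (invented):
C = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
v = leading_eigenvector(double_centre(C))
# Eigenvectors of M C M are orthogonal to the constant vector (they sum to
# zero), which is what lets them capture pure spatial (Moran) patterns:
print(abs(sum(v)) < 1e-9)  # True
```

In the full ESF workflow a subset of these eigenvectors is added as regressors to the OLS model, which is how the spatial autocorrelation in the residuals gets filtered out.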

  8. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

Full Text Available Abstract Background Brown algae are multi-cellular organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene and, in this, are in agreement with previous studies in other organisms.
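The geNorm calculation mentioned in the record above has a compact definition that a short sketch can make concrete: a gene's M-value is the average, over all other candidate genes, of the standard deviation across samples of the pairwise log2 expression ratios; lower M means more stable expression. The expression values below are invented for illustration.

```python
from math import log2
from statistics import stdev, mean

def genorm_m(expr):
    """geNorm stability: expr maps gene -> expression values (same sample order).
    M_j = mean over other genes k of stdev_samples(log2(a_j / a_k))."""
    genes = list(expr)
    m = {}
    for j in genes:
        sds = [stdev([log2(a / b) for a, b in zip(expr[j], expr[k])])
               for k in genes if k != j]
        m[j] = mean(sds)
    return m

expr = {
    "EF1a":    [100, 102, 98, 101],  # stable
    "tubulin": [50, 51, 49, 50],     # stable, co-varies with EF1a
    "actin":   [10, 40, 5, 80],      # unstable across conditions
}
m = genorm_m(expr)
print(min(m, key=m.get) != "actin")  # True: actin is never the most stable
```

The pairwise-ratio construction is the key design choice: a gene that varies in lockstep with another candidate yields constant ratios and a low M, regardless of its absolute expression level, which is exactly the property wanted in a normalisation gene.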

  9. Societal reporting: limits and challenges of the proposed international standard "Global Reporting Initiative"

    OpenAIRE

    Michel Capron; Françoise Quairel

    2003-01-01

Drawing on Anglo-Saxon accounting standard-setting, the Global Reporting Initiative (GRI) proposes a framework for the voluntary disclosure of societal information. The transposition has limits which in practice render its principles inapplicable. Nevertheless, the framework tends to establish itself, and large companies may find in it a means of avoiding binding regulation.

  10. Eigenvector Subset Selection Using Bayesian Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    郭卫锋; 林亚平; 罗光平

    2002-01-01

Eigenvector subset selection is the key to face recognition. In this paper, we propose ESS-BOA, a new randomized, population-based evolutionary algorithm which deals with the Eigenvector Subset Selection (ESS) problem in face recognition applications. In ESS-BOA, the ESS problem, stated as a search problem, uses the Bayesian Optimization Algorithm (BOA) as the search engine and the distance degree as the objective function to select eigenvectors. Experimental results show that ESS-BOA outperforms the traditional eigenface selection algorithm.

  11. Use and misuse of temperature normalisation in meta-analyses of thermal responses of biological traits

    Directory of Open Access Journals (Sweden)

    Dimitrios - Georgios Kontopoulos

    2018-02-01

Full Text Available There is currently unprecedented interest in quantifying variation in thermal physiology among organisms, especially in order to understand and predict the biological impacts of climate change. A key parameter in this quantification of thermal physiology is the performance or value of a rate, across individuals or species, at a common temperature (temperature normalisation). An increasingly popular model for fitting thermal performance curves to data, the Sharpe-Schoolfield equation, can yield strongly inflated estimates of temperature-normalised rate values. These deviations occur whenever a key thermodynamic assumption of the model is violated, i.e., when the enzyme governing the performance of the rate is not fully functional at the chosen reference temperature. Using data on 1,758 thermal performance curves across a wide range of species, we identify the conditions that exacerbate this inflation. We then demonstrate that these biases can compromise tests to detect metabolic cold adaptation, which require comparison of the fitness or rate performance of different species or genotypes at some fixed low temperature. Finally, we suggest alternative methods for obtaining unbiased estimates of temperature-normalised rate values for meta-analyses of thermal performance across species in climate change impact studies.
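The inflation described in the record above is easy to see numerically. The sketch below uses the simplified Sharpe-Schoolfield form with high-temperature deactivation only, where B0 is the temperature-normalised rate at the reference temperature Tref; all parameter values are illustrative, not fitted to any dataset.

```python
from math import exp

K = 8.617e-5  # Boltzmann constant, eV/K

def sharpe_schoolfield(T, B0, E, Eh, Th, Tref):
    """Rate at absolute temperature T. B0 is the rate the enzyme would have
    at Tref if fully functional there; E is the activation energy, Eh and Th
    parameterise high-temperature deactivation."""
    rise = B0 * exp(-E / K * (1.0 / T - 1.0 / Tref))
    deactivation = 1.0 + exp(Eh / K * (1.0 / Th - 1.0 / T))
    return rise / deactivation

# When deactivation is already active at Tref, the realised rate there is
# below B0 -- the fitted B0 over-states performance, which is the source of
# the inflated temperature-normalised estimates the abstract warns about:
B0 = 1.0
rate_at_tref = sharpe_schoolfield(283.15, B0, E=0.65, Eh=3.0, Th=290.0,
                                  Tref=283.15)
print(rate_at_tref < B0)  # True
```

The discrepancy between B0 and the realised rate at Tref grows as Tref approaches Th, which matches the abstract's point that the bias depends on whether the enzyme is fully functional at the chosen reference temperature.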

  12. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions

    LENUS (Irish Health Repository)

    Murray, Elizabeth

    2010-10-20

    Abstract Background The past decade has seen considerable interest in the development and evaluation of complex interventions to improve health. Such interventions can only have a significant impact on health and health care if they are shown to be effective when tested, are capable of being widely implemented and can be normalised into routine practice. To date, there is still a problematic gap between research and implementation. The Normalisation Process Theory (NPT) addresses the factors needed for successful implementation and integration of interventions into routine work (normalisation). Discussion In this paper, we suggest that the NPT can act as a sensitising tool, enabling researchers to think through issues of implementation while designing a complex intervention and its evaluation. The need to ensure trial procedures that are feasible and compatible with clinical practice is not limited to trials of complex interventions, and NPT may improve trial design by highlighting potential problems with recruitment or data collection, as well as ensuring the intervention has good implementation potential. Summary The NPT is a new theory which offers trialists a consistent framework that can be used to describe, assess and enhance implementation potential. We encourage trialists to consider using it in their next trial.

  13. Oral benfotiamine plus alpha-lipoic acid normalises complication-causing pathways in type 1 diabetes.

    Science.gov (United States)

    Du, X; Edelstein, D; Brownlee, M

    2008-10-01

    We determined whether fixed doses of benfotiamine in combination with slow-release alpha-lipoic acid normalise markers of reactive oxygen species-induced pathways of complications in humans. Male participants with and without type 1 diabetes were studied in the General Clinical Research Centre of the Albert Einstein College of Medicine. Glycaemic status was assessed by measuring baseline values of three different indicators of hyperglycaemia. Intracellular AGE formation, hexosamine pathway activity and prostacyclin synthase activity were measured initially, and after 2 and 4 weeks of treatment. In the nine participants with type 1 diabetes, treatment had no effect on any of the three indicators used to assess hyperglycaemia. However, treatment with benfotiamine plus alpha-lipoic acid completely normalised increased AGE formation, reduced increased monocyte hexosamine-modified proteins by 40% and normalised the 70% decrease in prostacyclin synthase activity from 1,709 +/- 586 pg/ml 6-keto-prostaglandin F(1alpha) to 4,696 +/- 533 pg/ml. These results show that the previously demonstrated beneficial effects of these agents on complication-causing pathways in rodent models of diabetic complications also occur in humans with type 1 diabetes.

  14. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions

    Directory of Open Access Journals (Sweden)

    Ong Bie

    2010-10-01

Full Text Available Abstract Background The past decade has seen considerable interest in the development and evaluation of complex interventions to improve health. Such interventions can only have a significant impact on health and health care if they are shown to be effective when tested, are capable of being widely implemented and can be normalised into routine practice. To date, there is still a problematic gap between research and implementation. The Normalisation Process Theory (NPT) addresses the factors needed for successful implementation and integration of interventions into routine work (normalisation). Discussion In this paper, we suggest that the NPT can act as a sensitising tool, enabling researchers to think through issues of implementation while designing a complex intervention and its evaluation. The need to ensure trial procedures that are feasible and compatible with clinical practice is not limited to trials of complex interventions, and NPT may improve trial design by highlighting potential problems with recruitment or data collection, as well as ensuring the intervention has good implementation potential. Summary The NPT is a new theory which offers trialists a consistent framework that can be used to describe, assess and enhance implementation potential. We encourage trialists to consider using it in their next trial.

  15. A comparison of parametric and nonparametric methods for normalising cDNA microarray data.

    Science.gov (United States)

    Khondoker, Mizanur R; Glasbey, Chris A; Worton, Bruce J

    2007-12-01

    Normalisation is an essential first step in the analysis of most cDNA microarray data, to correct for effects arising from imperfections in the technology. Loess smoothing is commonly used to correct for trends in log-ratio data. However, parametric models, such as the additive plus multiplicative variance model, have been preferred for scale normalisation, though the variance structure of microarray data may be of a more complex nature than can be accommodated by a parametric model. We propose a new nonparametric approach that incorporates location and scale normalisation simultaneously using a Generalised Additive Model for Location, Scale and Shape (GAMLSS, Rigby and Stasinopoulos, 2005, Applied Statistics, 54, 507-554). We compare its performance in inferring differential expression with Huber et al.'s (2002, Bioinformatics, 18, 96-104) arsinh variance stabilising transformation (AVST) using real and simulated data. We show GAMLSS to be as powerful as AVST when the parametric model is correct, and more powerful when the model is wrong. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
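The arsinh variance-stabilising transformation (AVST) that the GAMLSS approach above is benchmarked against has a simple closed form, h(x) = arsinh(a + b·x). The sketch below illustrates its two defining properties; the offset and scale values are illustrative, not fitted to microarray data.

```python
from math import asinh, log

def avst(x, a=0.0, b=1.0):
    """arsinh variance-stabilising transform for the additive-plus-
    multiplicative error model: log-like for large x, linear near zero."""
    return asinh(a + b * x)

# For large intensities, arsinh(x) ~ log(2x), so the familiar log-ratio
# behaviour of loess normalisation is recovered:
x = 1e4
print(abs(avst(x) - log(2 * x)) < 1e-6)  # True

# Unlike a plain log transform, it stays defined for background-subtracted
# (negative) intensities:
print(round(avst(-3.0), 3))  # -1.818
```

This smooth interpolation between linear and logarithmic regimes is what stabilises the variance under the additive-plus-multiplicative model; the GAMLSS approach in the record drops that parametric assumption entirely.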

  16. Unveiling the significance of eigenvectors in diffusing non-Hermitian matrices by identifying the underlying Burgers dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Burda, Zdzislaw, E-mail: zdzislaw.burda@agh.edu.pl [AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, al. Mickiewicza 30, PL-30059 Kraków (Poland); Grela, Jacek, E-mail: jacekgrela@gmail.com [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland); Nowak, Maciej A., E-mail: nowak@th.if.uj.edu.pl [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland); Tarnowski, Wojciech, E-mail: wojciech.tarnowski@uj.edu.pl [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland); Warchoł, Piotr, E-mail: piotr.warchol@uj.edu.pl [M. Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Centre, Jagiellonian University, PL-30348 Kraków (Poland)

    2015-08-15

Following our recent letter, we study in detail an entry-wise diffusion of non-Hermitian complex matrices. We obtain an exact partial differential equation (valid for any matrix size N and arbitrary initial conditions) for the evolution of the averaged extended characteristic polynomial. The logarithm of this polynomial has an interpretation as a potential which generates a Burgers dynamics in quaternionic space. The dynamics of the ensemble in the large N limit is completely determined by the coevolution of the spectral density and a certain eigenvector correlation function. This coevolution is best visible in an electrostatic potential of a quaternionic argument built of two complex variables, the first of which governs standard spectral properties while the second unravels the hidden dynamics of the eigenvector correlation function. We obtain general formulas for the spectral density and the eigenvector correlation function for large N and for any initial conditions. We exemplify our studies by solving three examples, and we verify the analytic form of our solutions with numerical simulations.

  17. Reference gene identification for reliable normalisation of quantitative RT-PCR data in Setaria viridis.

    Science.gov (United States)

    Nguyen, Duc Quan; Eamens, Andrew L; Grof, Christopher P L

    2018-01-01

Quantitative real-time polymerase chain reaction (RT-qPCR) is the key platform for the quantitative analysis of gene expression in a wide range of experimental systems and conditions. However, the accuracy and reproducibility of gene expression quantification via RT-qPCR is entirely dependent on the identification of reliable reference genes for data normalisation. Green foxtail (Setaria viridis) has recently been proposed as a potential experimental model for the study of C4 photosynthesis and is closely related to many economically important crop species of the Panicoideae subfamily of grasses, including Zea mays (maize), Sorghum bicolor (sorghum) and Saccharum officinarum (sugarcane). Setaria viridis (Accession 10) possesses a number of key traits as an experimental model, namely: (i) a small, sequenced and well annotated genome; (ii) short stature and generation time; (iii) prolific seed production; and (iv) amenability to Agrobacterium tumefaciens-mediated transformation. There is currently, however, a lack of reference gene expression information for Setaria viridis (S. viridis). We therefore aimed to identify a cohort of suitable S. viridis reference genes for accurate and reliable normalisation of S. viridis RT-qPCR expression data. Eleven putative candidate reference genes were identified and examined across thirteen different S. viridis tissues. Of these, the geNorm and NormFinder analysis software identified SERINE/THREONINE-PROTEIN PHOSPHATASE 2A (PP2A), 5'-ADENYLYLSULFATE REDUCTASE 6 (ASPR6) and DUAL SPECIFICITY PHOSPHATASE (DUSP) as the most suitable combination of reference genes for the accurate and reliable normalisation of S. viridis RT-qPCR expression data. To demonstrate the suitability of the three selected reference genes, PP2A, ASPR6 and DUSP were used to normalise the expression of CINNAMYL ALCOHOL DEHYDROGENASE (CAD) genes across the same tissues. This approach readily demonstrated the suitability of the three

  18. The Physical Driver of the Optical Eigenvector 1 in Quasar Main Sequence

    Energy Technology Data Exchange (ETDEWEB)

    Panda, Swayamtrupta; Czerny, Bożena [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Warsaw (Poland); Wildy, Conor, E-mail: panda@cft.edu.pl [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland)

    2017-11-07

Quasars are complex sources, characterized by broad band spectra from radio through optical to X-ray band, with numerous emission and absorption features. This complexity leads to rich diagnostics. However, Boroson and Green (1992) used Principal Component Analysis (PCA), and with this analysis they were able to show significant correlations between the measured parameters. The leading component, related to Eigenvector 1 (EV1), was dominated by the anticorrelation between the FeII optical emission and the [OIII] line, and EV1 alone contained 30% of the total variance. It opened a way to defining a quasar main sequence, in close analogy to the stellar main sequence on the Hertzsprung-Russell (HR) diagram (Sulentic et al., 2001). The question still remains which of the basic theoretically motivated parameters of an active nucleus (Eddington ratio, black hole mass, accretion rate, spin, and viewing angle) is the main driver behind the EV1. Here we limit ourselves to the optical waveband, and concentrate on theoretical modeling of the FeII to Hβ ratio, and we test the hypothesis that the physical driver of EV1 is the maximum of the accretion disk temperature, reflected in the shape of the spectral energy distribution (SED). We performed computations of the Hβ and optical FeII for a broad range of SED peak positions using the CLOUDY photoionisation code. We assumed that both Hβ and FeII emission come from the Broad Line Region represented as a constant density cloud in a plane-parallel geometry. We expected that a hotter disk continuum would lead to more efficient production of FeII, but our computations show that the FeII to Hβ ratio actually drops with the rise of the disk temperature. Thus either the hypothesis is incorrect, or the approximations used in our paper for the description of the line emissivity are inadequate.

  19. The Physical Driver of the Optical Eigenvector 1 in Quasar Main Sequence

    Directory of Open Access Journals (Sweden)

    Swayamtrupta Panda

    2017-11-01

Full Text Available Quasars are complex sources, characterized by broad band spectra from radio through optical to X-ray band, with numerous emission and absorption features. This complexity leads to rich diagnostics. However, Boroson and Green (1992) used Principal Component Analysis (PCA), and with this analysis they were able to show significant correlations between the measured parameters. The leading component, related to Eigenvector 1 (EV1), was dominated by the anticorrelation between the FeII optical emission and the [OIII] line, and EV1 alone contained 30% of the total variance. It opened a way to defining a quasar main sequence, in close analogy to the stellar main sequence on the Hertzsprung-Russell (HR) diagram (Sulentic et al., 2001). The question still remains which of the basic theoretically motivated parameters of an active nucleus (Eddington ratio, black hole mass, accretion rate, spin, and viewing angle) is the main driver behind the EV1. Here we limit ourselves to the optical waveband, and concentrate on theoretical modeling of the FeII to Hβ ratio, and we test the hypothesis that the physical driver of EV1 is the maximum of the accretion disk temperature, reflected in the shape of the spectral energy distribution (SED). We performed computations of the Hβ and optical FeII for a broad range of SED peak positions using the CLOUDY photoionisation code. We assumed that both Hβ and FeII emission come from the Broad Line Region represented as a constant density cloud in a plane-parallel geometry. We expected that a hotter disk continuum would lead to more efficient production of FeII, but our computations show that the FeII to Hβ ratio actually drops with the rise of the disk temperature. Thus either the hypothesis is incorrect, or the approximations used in our paper for the description of the line emissivity are inadequate.

  20. Confluence via strong normalisation in an algebraic λ-calculus with rewriting

    Directory of Open Access Journals (Sweden)

    Pablo Buiras

    2012-03-01

    Full Text Available The linear-algebraic lambda-calculus and the algebraic lambda-calculus are untyped lambda-calculi extended with arbitrary linear combinations of terms. The former presents the axioms of linear algebra in the form of a rewrite system, while the latter uses equalities. When given by rewrites, algebraic lambda-calculi are not confluent unless further restrictions are added. We provide a type system for the linear-algebraic lambda-calculus enforcing strong normalisation, which gives back confluence. The type system allows an abstract interpretation in System F.

  1. Analysis of the characteristics of the global virtual water trade network using degree and eigenvector centrality, with a focus on food and feed crops

    Directory of Open Access Journals (Sweden)

    S.-H. Lee

    2016-10-01

Full Text Available This study aims to analyze the characteristics of global virtual water trade (GVWT), such as the connectivity of each trader, vulnerable importers, and influential countries, using degree and eigenvector centrality during the period 2006–2010. The degree centrality was used to measure the connectivity, and eigenvector centrality was used to measure the influence on the entire GVWT network. Mexico, Egypt, China, the Republic of Korea, and Japan were classified as vulnerable importers, because they imported large quantities of virtual water with low connectivity. In particular, Egypt had a 15.3 Gm3 year−1 blue water saving effect through GVWT: the vulnerable structure could cause a water shortage problem for the importer. The entire GVWT network could be changed by a few countries, termed "influential traders". We used eigenvector centrality to identify those influential traders. In GVWT for food crops, the USA, Russian Federation, Thailand, and Canada had high eigenvector centrality with large volumes of green water trade. In the case of blue water trade, western Asia, Pakistan, and India had high eigenvector centrality. For feed crops, the green water trade in the USA, Brazil, and Argentina was the most influential. However, Argentina and Pakistan used high proportions of internal water resources for virtual water export (32.9 and 25.1 %); thus other traders should carefully consider water resource management in these exporters.

  2. Analysis of the characteristics of the global virtual water trade network using degree and eigenvector centrality, with a focus on food and feed crops

    Science.gov (United States)

    Lee, Sang-Hyun; Mohtar, Rabi H.; Choi, Jin-Yong; Yoo, Seung-Hwan

    2016-10-01

    This study aims to analyze the characteristics of global virtual water trade (GVWT), such as the connectivity of each trader, vulnerable importers, and influential countries, using degree and eigenvector centrality during the period 2006-2010. The degree centrality was used to measure the connectivity, and eigenvector centrality was used to measure the influence on the entire GVWT network. Mexico, Egypt, China, the Republic of Korea, and Japan were classified as vulnerable importers, because they imported large quantities of virtual water with low connectivity. In particular, Egypt had a 15.3 Gm3 year-1 blue water saving effect through GVWT: the vulnerable structure could cause a water shortage problem for the importer. The entire GVWT network could be changed by a few countries, termed "influential traders". We used eigenvector centrality to identify those influential traders. In GVWT for food crops, the USA, Russian Federation, Thailand, and Canada had high eigenvector centrality with large volumes of green water trade. In the case of blue water trade, western Asia, Pakistan, and India had high eigenvector centrality. For feed crops, the green water trade in the USA, Brazil, and Argentina was the most influential. However, Argentina and Pakistan used high proportions of internal water resources for virtual water export (32.9 and 25.1 %); thus other traders should carefully consider water resource management in these exporters.
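The two network measures used in the records above are worth contrasting in code: degree centrality counts a node's links, while eigenvector centrality weights a node by the centrality of its neighbours, so a country trading with influential partners scores highly even with few links. The sketch below computes both by power iteration on a small invented undirected "trade" network (not the study's data).

```python
import math

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]
nodes = sorted({n for e in edges for n in e})

def degree_centrality(edges, nodes):
    """Connectivity: number of links per node."""
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def eigenvector_centrality(edges, nodes, iters=100):
    """Influence: leading eigenvector of the adjacency matrix, by power
    iteration (each node's score is the sum of its neighbours' scores,
    renormalised each step)."""
    nbrs = {n: [] for n in nodes}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    c = {n: 1.0 for n in nodes}
    for _ in range(iters):
        new = {n: sum(c[m] for m in nbrs[n]) for n in nodes}
        norm = math.sqrt(sum(x * x for x in new.values()))
        c = {n: x / norm for n, x in new.items()}
    return c

deg = degree_centrality(edges, nodes)
eig = eigenvector_centrality(edges, nodes)
# Country "A" has both the most links and the most influential neighbours:
print(max(deg, key=deg.get), max(eig, key=eig.get))  # A A
```

In the study's terms, a "vulnerable importer" is a node with large import volume but low degree centrality, while an "influential trader" is one with high eigenvector centrality; the toy network shows the two measures need not rank nodes the same way in general.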

  3. Joint eigenvector estimation from mutually anisotropic tensors improves susceptibility tensor imaging of the brain, kidney, and heart.

    Science.gov (United States)

    Dibb, Russell; Liu, Chunlei

    2017-06-01

To develop a susceptibility-based MRI technique for probing the microstructure and fiber architecture of magnetically anisotropic tissues (such as central nervous system white matter, renal tubules, and myocardial fibers) in three dimensions using susceptibility tensor imaging (STI) tools. STI can probe tissue microstructure, but is limited by reconstruction artifacts because of absent phase information outside the tissue and noise. STI accuracy may be improved by estimating a joint eigenvector from mutually anisotropic susceptibility and relaxation tensors. Gradient-recalled echo image data were simulated using a numerical phantom and acquired from the ex vivo mouse brain, kidney, and heart. Susceptibility tensor data were reconstructed using STI, regularized STI, and the proposed algorithm of mutually anisotropic and joint eigenvector STI (MAJESTI). Fiber map and tractography results from each technique were compared with diffusion tensor data. MAJESTI reduced the estimated susceptibility tensor orientation error by 30% in the phantom, 36% in brain white matter, 40% in the inner medulla of the kidney, and 45% in myocardium. This improved the continuity and consistency of susceptibility-based fiber tractography in each tissue. MAJESTI estimation of the susceptibility tensors yields lower orientation errors for susceptibility-based fiber mapping and tractography in the intact brain, kidney, and heart. Magn Reson Med 77:2331-2346, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Normalised subband adaptive filtering with extended adaptiveness on degree of subband filters

    Science.gov (United States)

    Samuyelu, Bommu; Rajesh Kumar, Pullakura

    2017-12-01

    This paper proposes an adaptive normalised subband adaptive filtering (NSAF) scheme to improve NSAF performance. In the proposed NSAF, adaptiveness is extended beyond its variants in two ways: first, the step-size is made adaptive, and second, the selection of subbands is made adaptive. Hence, the proposed NSAF is termed here the variable step-size-based NSAF with selected subbands (VS-SNSAF). Experimental investigations are carried out to demonstrate the performance (in terms of convergence) of the VS-SNSAF against the conventional NSAF and its state-of-the-art adaptive variants. The results report the superior performance of VS-SNSAF over the traditional NSAF and its variants. Its stability and robustness against noise are also demonstrated, along with an analysis of its computational complexity.

  5. Living under the influence: normalisation of alcohol consumption in our cities

    Directory of Open Access Journals (Sweden)

    Xisca Sureda

    2017-01-01

    Full Text Available Harmful use of alcohol is one of the world's leading health risks. A positive association between certain characteristics of the urban environment and individual alcohol consumption has been documented in previous research. When developing a tool characterising the urban environment of alcohol in the cities of Barcelona and Madrid we observed that alcohol is ever present in our cities. Urban residents are constantly exposed to a wide variety of alcohol products, marketing and promotion and signs of alcohol consumption. In this field note, we reflect on the normalisation of alcohol in urban environments. We highlight the need for further research to better understand attitudes and practices in relation to alcohol consumption. This type of urban study is necessary to support policy interventions to prevent and control harmful alcohol use.

  6. The normalisation of terror: the response of Israel's stock market to long periods of terrorism.

    Science.gov (United States)

    Peleg, Kobi; Regens, James L; Gunter, James T; Jaffe, Dena H

    2011-01-01

    Man-made disasters such as acts of terrorism may affect a society's resiliency and sensitivity to prolonged physical and psychological stress. The Israeli Tel Aviv stock market TA-100 Index was used as an indicator of reactivity to suicide terror bombings. After accounting for factors such as world market changes and attack severity and intensity, the analysis reveals that although Israel's financial base remained sensitive to each act of terror across the entire period of the Second Intifada (2000-06), sustained psychological resilience was indicated with no apparent overall market shift. In other words, we saw a 'normalisation of terror' following an extended period of continued suicide bombings. The results suggest that investors responded to less transitory global market forces, indicating sustained resilience and long-term market confidence. Future studies directly measuring investor expectations and reactions to man-made disasters, such as terrorism, are warranted. © 2011 The Author(s). Disasters © Overseas Development Institute, 2011.

  7. Microstructural characterisation of a P91 steel normalised and tempered at different temperatures

    International Nuclear Information System (INIS)

    Hurtado-Norena, C.; Danon, C.A.; Luppo, M.I.; Bruzzoni, P.

    2015-01-01

    9%Cr-1%Mo martensitic-ferritic steels are used in power plant components with operating temperatures of around 600 °C because of their good mechanical properties at high temperature as well as good oxidation resistance. These steels are generally used in the normalised and tempered condition. This treatment results in a structure of tempered lath martensite where the precipitates are distributed along the lath interfaces and within the martensite laths. The characterisation of these precipitates is of fundamental importance because of their relationship with the creep behaviour of these steels in service. In the present work, the different types of precipitates found in these steels have been studied on specimens in different metallurgical conditions. The techniques used in this investigation were X-ray diffraction with synchrotron light, scanning electron microscopy, energy dispersive microanalysis and transmission electron microscopy. (authors)

  8. Living under the influence: normalisation of alcohol consumption in our cities.

    Science.gov (United States)

    Sureda, Xisca; Villalbí, Joan R; Espelt, Albert; Franco, Manuel

    Harmful use of alcohol is one of the world's leading health risks. A positive association between certain characteristics of the urban environment and individual alcohol consumption has been documented in previous research. When developing a tool characterising the urban environment of alcohol in the cities of Barcelona and Madrid we observed that alcohol is ever present in our cities. Urban residents are constantly exposed to a wide variety of alcohol products, marketing and promotion and signs of alcohol consumption. In this field note, we reflect on the normalisation of alcohol in urban environments. We highlight the need for further research to better understand attitudes and practices in relation to alcohol consumption. This type of urban study is necessary to support policy interventions to prevent and control harmful alcohol use. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  9. Investigation, development and application of optimal output feedback theory. Vol. 4: Measures of eigenvalue/eigenvector sensitivity to system parameters and unmodeled dynamics

    Science.gov (United States)

    Halyo, Nesim

    1987-01-01

    Some measures of eigenvalue and eigenvector sensitivity applicable to both continuous and discrete linear systems are developed and investigated. An infinite series representation is developed for the eigenvalues and eigenvectors of a system. The coefficients of the series are coupled, but can be obtained recursively using a nonlinear coupled vector difference equation. A new sensitivity measure is developed by considering the effects of unmodeled dynamics. It is shown that the sensitivity is high when any unmodeled eigenvalue is near a modeled eigenvalue. Using a simple example where the sensor dynamics have been neglected, it is shown that high feedback gains produce high eigenvalue/eigenvector sensitivity. The smallest singular value of the return difference is shown not to reflect eigenvalue sensitivity since it increases with the feedback gains. Using an upper bound obtained from the infinite series, a procedure to evaluate whether the sensitivity to parameter variations is within given acceptable bounds is developed and demonstrated by an example.
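
    The first-order eigenvalue sensitivity that such measures build on can be sketched numerically. This uses the standard left/right-eigenvector perturbation formula for a simple eigenvalue (dλ/dA_ij = w_i v_j / wᵀv), not the report's infinite-series representation; the 2×2 matrix is an invented example:

    ```python
    import numpy as np

    # First-order sensitivity of a simple eigenvalue to the entries of A:
    # dlambda/dA_ij = w_i * v_j / (w^T v), with right eigenvector v and
    # left eigenvector w. Toy matrix with eigenvalues -1 and -2.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])

    lam, V = np.linalg.eig(A)
    k = np.argmin(lam.real)                      # pick the eigenvalue -2
    v = V[:, k]
    mu, U = np.linalg.eig(A.T)                   # left eigenvectors of A
    w = U[:, np.argmin(np.abs(mu - lam[k]))]     # match to the same eigenvalue

    sens = np.outer(w, v) / (w @ v)              # sens[i, j] = dlambda/dA_ij

    # Finite-difference check on one entry.
    eps = 1e-6
    Ap = A.copy()
    Ap[1, 0] += eps
    lam_p = np.linalg.eigvals(Ap)
    fd = (lam_p[np.argmin(np.abs(lam_p - lam[k]))] - lam[k]) / eps
    ```

    The formula is invariant to the arbitrary scaling of `v` and `w`, and the finite-difference quotient `fd` should agree with `sens[1, 0]` to first order.
    
    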

  10. The stories we tell: qualitative research interviews, talking technologies and the 'normalisation' of life with HIV.

    Science.gov (United States)

    Mazanderani, Fadhila; Paparini, Sara

    2015-04-01

    Since the earliest days of the HIV/AIDS epidemic, talking about the virus has been a key way affected communities have challenged the fear and discrimination directed against them and pressed for urgent medical and political attention. Today, HIV/AIDS is one of the most prolifically and intimately documented of all health conditions, with entrenched infrastructures, practices and technologies--what Vinh-Kim Nguyen has dubbed 'confessional technologies'--aimed at encouraging those affected to share their experiences. Among these technologies, we argue, is the semi-structured interview: the principal methodology used in qualitative social science research focused on patient experiences. Taking the performative nature of the research interview as a talking technology seriously has epistemological implications not merely for how we interpret interview data, but also for how we understand the role of research interviews in the enactment of 'life with HIV'. This paper focuses on one crucial aspect of this enactment: the contemporary 'normalisation' of HIV as 'just another' chronic condition--a process taking place at the level of individual subjectivities, social identities, clinical practices and global health policy, and of which social science research is a vital part. Through an analysis of 76 interviews conducted in London (2009-10), we examine tensions in the experiential narratives of individuals living with HIV in which life with the virus is framed as 'normal', yet where this 'normality' is beset with contradictions and ambiguities. Rather than viewing these as a reflection of resistances to or failures of the enactment of HIV as 'normal', we argue that, insofar as these contradictions are generated by the research interview as a distinct 'talking technology', they emerge as crucial to the normative (re)production of what counts as 'living with HIV' (in the UK) and are an inherent part of the broader performative 'normalisation' of the virus. Copyright © 2015

  11. Eigenvector/eigenvalue analysis of a 3D current referential fault detection and diagnosis of an induction motor

    International Nuclear Information System (INIS)

    Pires, V. Fernao; Martins, J.F.; Pires, A.J.

    2010-01-01

    In this paper an integrated approach for on-line induction motor fault detection and diagnosis is presented. The need to ensure continuous and safe operation of induction motors involves preventive maintenance procedures combined with fault diagnosis techniques. The proposed approach uses an automatic three-step algorithm. Firstly, the induction motor stator currents are measured, which will give typical patterns that can be used to identify the fault. Secondly, the eigenvectors/eigenvalues of the 3D current referential are computed. Finally, the proposed algorithm will discern if the motor is healthy or not and report the extent of the fault. Furthermore this algorithm is able to identify distinct faults (stator winding faults or broken bars). The proposed approach was experimentally implemented and its performance verified on various types of working conditions.
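
    The eigenvector/eigenvalue step of such a 3D current-referential analysis can be sketched on synthetic three-phase currents (illustrative signals and fault model, not the paper's experimental data): for a balanced machine the current trajectory is a circle, so the two in-plane covariance eigenvalues are equal; an asymmetry distorts the pattern and splits them.

    ```python
    import numpy as np

    t = np.linspace(0.0, 0.2, 2000)   # 10 cycles at 50 Hz
    f = 50.0

    def phase_currents(amps):
        """Three-phase sinusoids with per-phase amplitudes (toy model)."""
        ia = amps[0] * np.sin(2 * np.pi * f * t)
        ib = amps[1] * np.sin(2 * np.pi * f * t - 2 * np.pi / 3)
        ic = amps[2] * np.sin(2 * np.pi * f * t + 2 * np.pi / 3)
        return np.stack([ia, ib, ic], axis=1)

    def sorted_eigvals(I):
        """Eigenvalues of the 3x3 covariance of the current trajectory."""
        C = np.cov(I.T)
        return np.sort(np.linalg.eigvalsh(C))[::-1]

    healthy = sorted_eigvals(phase_currents([10.0, 10.0, 10.0]))
    faulty = sorted_eigvals(phase_currents([10.0, 10.0, 7.0]))  # asymmetry

    ratio_h = healthy[0] / healthy[1]   # ~1 for a circular pattern
    ratio_f = faulty[0] / faulty[1]     # >1 when the pattern is distorted
    ```

    A fault indicator can then threshold the eigenvalue ratio; the associated eigenvectors give the orientation of the distorted current pattern.
    
    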

  12. Évolution de la normalisation dans le domaine des oléagineux et des corps gras

    Directory of Open Access Journals (Sweden)

    Quinsac Alain

    2003-07-01

    Full Text Available Standardisation plays a major role in economic exchanges by contributing to the openness and transparency of markets. The oilseeds and fats sector has long integrated standardisation into its strategy. Built around the needs of the profession, notably the customer-supplier relationship, its programmes have mainly concerned sampling and analysis. In recent years, far-reaching changes in the socio-economic and regulatory context (non-food uses, food safety, quality assurance) have widened the scope of standardisation. The standardisation approach adopted for biodiesel and for the detection of GMOs in oilseeds is explained. The consequences of these changes in standardisation, and what is at stake for the oilseed profession in the future, are discussed.

  13. ENEKuS--A Key Model for Managing the Transformation of the Normalisation of the Basque Language in the Workplace

    Science.gov (United States)

    Marko, Inazio; Pikabea, Inaki

    2013-01-01

    The aim of this study is to develop a reference model for intervention in the language processes applied to the transformation of language normalisation within organisations of a socio-economic nature. It is based on a case study of an experiment carried out over 10 years within a trade union confederation, and has pursued a strategy of a…

  14. Implementation of the SMART MOVE intervention in primary care: a qualitative study using normalisation process theory.

    Science.gov (United States)

    Glynn, Liam G; Glynn, Fergus; Casey, Monica; Wilkinson, Louise Gaffney; Hayes, Patrick S; Heaney, David; Murphy, Andrew W M

    2018-05-02

    Problematic translational gaps continue to exist between demonstrating the positive impact of healthcare interventions in research settings and their implementation into routine daily practice. The aim of this qualitative evaluation of the SMART MOVE trial was to conduct a theoretically informed analysis, using normalisation process theory, of the potential barriers and levers to the implementation of an mHealth intervention to promote physical activity in primary care. The study took place in the West of Ireland with recruitment in the community from the Clare Primary Care Network. SMART MOVE trial participants and the staff from four primary care centres were invited to take part and all agreed to do so. A qualitative methodology was utilised, combining focus groups (general practitioners, practice nurses and non-clinical staff from four separate primary care centres, n = 14) and individual semi-structured interviews (intervention and control SMART MOVE trial participants, n = 4), with purposeful sampling and analysis following the principles of Framework Analysis. The Normalisation Process Theory was used to develop the topic guide for the interviews and also informed the data analysis process. Four themes emerged from the analysis: personal and professional exercise strategies; roles and responsibilities to support active engagement; utilisation challenges; and evaluation, adoption and adherence. It was evident that introducing a new healthcare intervention demands a comprehensive evaluation of the intervention itself and also the environment in which it is to operate. Despite certain obstacles, the opportunity exists for the successful implementation of a novel healthcare intervention that addresses a hitherto unresolved healthcare need, provided that the intervention has strong usability attributes for both disseminators and target users and coheres strongly with the core objectives and culture of the health care environment in which it is to operate. We

  15. Technical Note: On methodologies for determining the size-normalised weight of planktic foraminifera

    Directory of Open Access Journals (Sweden)

    C. J. Beer

    2010-07-01

    Full Text Available The size-normalised weight (SNW) of planktic foraminifera, a measure of test wall thickness and density, is potentially a valuable palaeo-proxy for marine carbon chemistry. As increasing attention is given to developing this proxy it is important that methods are comparable between studies. Here, we compare SNW data generated using two different methods to account for variability in test size, namely (i) the narrow (50 μm) range sieve fraction method and (ii) the individually measured test size method. Using specimens from the 200–250 μm sieve fraction range collected in multinet samples from the North Atlantic, we find that sieving does not constrain size sufficiently well to isolate changes in weight driven by variations in test wall thickness and density from those driven by size. We estimate that the SNW data produced as part of this study are associated with an uncertainty, or error bar, of about ±11%. Errors associated with the narrow sieve fraction method may be reduced by decreasing the size of the sieve window, by using larger tests and by increasing the number of tests employed. In situations where numerous large tests are unavailable, however, substantial errors associated with this sieve method remain unavoidable. In such circumstances the individually measured test size method provides a better means for estimating SNW because, as our results show, this method isolates changes in weight driven by variations in test wall thickness and density from those driven by size.
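
    The difference between the two methods can be sketched numerically (synthetic tests; the cubic weight-size scaling and the 225 μm reference size are assumptions for illustration, not the paper's calibration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical tests: sizes (um) within a 200-250 um sieve fraction,
    # with weights driven both by size and by wall thickness/density
    # (the latter is the signal the SNW proxy is after).
    sizes = rng.uniform(200.0, 250.0, 100)
    thickness_factor = rng.normal(1.0, 0.05, 100)       # signal of interest
    weights = 1e-5 * sizes**3 * thickness_factor        # toy cubic scaling

    # (i) Sieve-fraction method: mean weight within the fraction.
    snw_sieve = weights.mean()

    # (ii) Individually measured test size method: scale each weight to a
    # common reference size before averaging, removing size-driven spread.
    ref = 225.0
    snw_measured = np.mean(weights * (ref / sizes) ** 3)
    ```

    Under these assumptions the size-scaled weights have a much smaller coefficient of variation than the raw sieve-fraction weights, which is the paper's point: sieving alone does not constrain size tightly enough to isolate the thickness/density signal.
    
    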

  16. No upward trend in normalised windstorm losses in Europe: 1970-2008

    Science.gov (United States)

    Barredo, J. I.

    2010-01-01

    On 18 January 2007, windstorm Kyrill battered Europe with hurricane-force winds, killing 47 people and causing US$10 billion in damage. Kyrill poses several questions: is Kyrill an isolated or exceptional case? Have there been events costing as much in the past? This paper attempts to put Kyrill into an historical context by examining large historical windstorm event losses in Europe for the period 1970-2008 across 29 European countries. It asks the question: what economic losses would these historical events cause if they were to recur under 2008 societal conditions? Loss data were sourced from reinsurance firms and augmented with historical reports, peer-reviewed articles and other ancillary sources. Following the same conceptual approach outlined in previous studies, the data were then adjusted for changes in population, wealth, and inflation at the country level and for inter-country price differences using purchasing power parity. The analyses reveal no trend in the normalised windstorm losses and confirm that increasing disaster losses are driven by societal factors and increasing exposure.

  17. Normalised Mutual Information of High-Density Surface Electromyography during Muscle Fatigue

    Directory of Open Access Journals (Sweden)

    Adrian Bingham

    2017-12-01

    Full Text Available This study has developed a technique for identifying the presence of muscle fatigue based on the spatial changes of the normalised mutual information (NMI) between multiple high density surface electromyography (HD-sEMG) channels. Muscle fatigue in the tibialis anterior (TA) during isometric contractions at 40% and 80% maximum voluntary contraction levels was investigated in ten healthy participants (age range: 21 to 35 years; mean age = 26 years; male = 4, female = 6). HD-sEMG was used to record 64 channels of sEMG using a 16 by 4 electrode array placed over the TA. The NMI of each electrode with every other electrode was calculated to form an NMI distribution for each electrode. The total NMI for each electrode (the summation of the electrode’s NMI distribution) highlighted regions of high dependence in the electrode array and was observed to increase as the muscle fatigued. To summarise this increase, a function, M(k), was defined and was found to be significantly affected by fatigue and not by contraction force. The technique discussed in this study has overcome issues regarding electrode placement and was used to investigate how the dependences between sEMG signals within the same muscle change spatially during fatigue.
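
    A minimal sketch of computing NMI between channel pairs follows (histogram-based estimate on synthetic signals; the bin count and the 2·I/(H(X)+H(Y)) normalisation are illustrative choices, not necessarily the study's configuration):

    ```python
    import numpy as np

    def nmi(x, y, bins=16):
        """Normalised mutual information between two signals estimated
        from a joint histogram: NMI = 2*I(X;Y) / (H(X) + H(Y))."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        nz = pxy > 0
        hxy = -np.sum(pxy[nz] * np.log(pxy[nz]))
        return 2.0 * (hx + hy - hxy) / (hx + hy)

    # Synthetic "channels": two driven by a shared source (dependent),
    # one independent; real HD-sEMG channels would replace these.
    rng = np.random.default_rng(1)
    common = rng.normal(size=5000)
    ch1 = common + 0.3 * rng.normal(size=5000)
    ch2 = common + 0.3 * rng.normal(size=5000)
    ch3 = rng.normal(size=5000)
    ```

    Summing `nmi` over all channel pairs for a given electrode gives the "total NMI" quantity described in the abstract, whose spatial pattern is tracked as the muscle fatigues.
    
    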

  18. A combination of low-dose bevacizumab and imatinib enhances vascular normalisation without inducing extracellular matrix deposition.

    Science.gov (United States)

    Schiffmann, L M; Brunold, M; Liwschitz, M; Goede, V; Loges, S; Wroblewski, M; Quaas, A; Alakus, H; Stippel, D; Bruns, C J; Hallek, M; Kashkar, H; Hacker, U T; Coutelle, O

    2017-02-28

    Vascular endothelial growth factor (VEGF)-targeting drugs normalise the tumour vasculature and improve access for chemotherapy. However, excessive VEGF inhibition fails to improve clinical outcome, and successive treatment cycles lead to incremental extracellular matrix (ECM) deposition, which limits perfusion and drug delivery. We show here that low-dose VEGF inhibition augmented with PDGF-R inhibition leads to superior vascular normalisation without incremental ECM deposition, thus maintaining access for therapy. Collagen IV expression was analysed in response to VEGF inhibition in liver metastasis of colorectal cancer (CRC) patients, in syngeneic (Panc02) and xenograft tumours of human colorectal cancer cells (LS174T). The xenograft tumours were treated with low (0.5 mg kg⁻¹ body weight) or high (5 mg kg⁻¹ body weight) doses of the anti-VEGF antibody bevacizumab with or without the tyrosine kinase inhibitor imatinib. Changes in tumour growth, and vascular parameters, including microvessel density, pericyte coverage, leakiness, hypoxia, perfusion, fraction of vessels with an open lumen, and type IV collagen deposition were compared. ECM deposition was increased after standard VEGF inhibition in patients and tumour models. In contrast, treatment with low-dose bevacizumab and imatinib produced similar growth inhibition without inducing detrimental collagen IV deposition, leading to superior vascular normalisation, reduced leakiness, improved oxygenation, more open vessels that permit perfusion and access for therapy. Low-dose bevacizumab augmented by imatinib selects a mature, highly normalised and well perfused tumour vasculature without inducing incremental ECM deposition that normally limits the effectiveness of VEGF targeting drugs.

  19. Identification of endogenous control genes for normalisation of real-time quantitative PCR data in colorectal cancer.

    LENUS (Irish Health Repository)

    Kheirelseid, Elrasheid A H

    2010-01-01

    BACKGROUND: Gene expression analysis has many applications in cancer diagnosis, prognosis and therapeutic care. Relative quantification is the most widely adopted approach whereby quantification of gene expression is normalised relative to an endogenously expressed control (EC) gene. Central to the reliable determination of gene expression is the choice of control gene. The purpose of this study was to evaluate a panel of candidate EC genes from which to identify the most stably expressed gene(s) to normalise RQ-PCR data derived from primary colorectal cancer tissue. RESULTS: The expression of thirteen candidate EC genes: B2M, HPRT, GAPDH, ACTB, PPIA, HCRT, SLC25A23, DTX3, APOC4, RTDR1, KRTAP12-3, CHRNB4 and MRPL19 were analysed in a cohort of 64 colorectal tumours and tumour associated normal specimens. CXCL12, FABP1, MUC2 and PDCD4 genes were chosen as target genes against which a comparison of the effect of each EC gene on gene expression could be determined. Data analysis using descriptive statistics, geNorm, NormFinder and qBasePlus indicated significant difference in variances between candidate EC genes. We determined that two genes were required for optimal normalisation and identified B2M and PPIA as the most stably expressed and reliable EC genes. CONCLUSION: This study identified that the combination of two EC genes (B2M and PPIA) more accurately normalised RQ-PCR data in colorectal tissue. Although these control genes might not be optimal for use in other cancer studies, the approach described herein could serve as a template for the identification of valid ECs in other cancer types.
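
    The stability-ranking step that tools such as geNorm perform can be illustrated with a simplified sketch (synthetic log-expression data; this mimics the published pairwise-variation idea, not the actual geNorm, NormFinder or qBasePlus implementations):

    ```python
    import numpy as np

    def genorm_stability(log_expr):
        """Simplified geNorm-style stability measure M: for each candidate
        gene, the mean standard deviation of its log-ratio with every
        other candidate across samples. Lower M = more stably expressed."""
        n_genes = log_expr.shape[1]
        M = np.zeros(n_genes)
        for j in range(n_genes):
            sds = [np.std(log_expr[:, j] - log_expr[:, k])
                   for k in range(n_genes) if k != j]
            M[j] = np.mean(sds)
        return M

    # Synthetic data: a sample-specific loading effect shared by all genes,
    # three stable candidates, and one noisy (unstable) candidate.
    rng = np.random.default_rng(2)
    samples = rng.normal(0.0, 1.0, 40)
    stable = samples[:, None] + rng.normal(0, 0.1, (40, 3))
    unstable = samples + rng.normal(0, 1.0, 40)
    log_expr = np.column_stack([stable, unstable])

    M = genorm_stability(log_expr)   # the noisy gene gets the largest M
    ```

    Ranking by `M` and keeping the lowest-scoring pair corresponds to the study's conclusion that two EC genes (there, B2M and PPIA) give the most reliable normalisation.
    
    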

  20. The one-dimensional normalised generalised equivalence theory (NGET) for generating equivalent diffusion theory group constants for PWR reflector regions

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-01-01

    An equivalent diffusion theory PWR reflector model is presented, which has as its basis Smith's generalisation of Koebke's Equivalence Theory. This method is an adaptation, in one-dimensional slab geometry, of the Generalised Equivalence Theory (GET). Since the method involves the renormalisation of the GET discontinuity factors at nodal interfaces, it is called the Normalised Generalised Equivalence Theory (NGET) method. The advantages of the NGET method for modelling the ex-core nodes of a PWR are summarized. 23 refs

  1. Good quality of oral anticoagulation treatment in general practice using international normalised ratio point of care testing

    DEFF Research Database (Denmark)

    Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent

    2015-01-01

    INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin...... practices using INR POCT in the management of patients in warfarin treatment provided good quality of care. Sampling interval and diagnostic coding were significantly correlated with treatment quality....

  2. Using Normalisation Process Theory to investigate the implementation of school-based oral health promotion.

    Science.gov (United States)

    Olajide, O J; Shucksmith, J; Maguire, A; Zohoori, F V

    2017-09-01

    Despite the considerable improvement in oral health of children in the UK over the last forty years, a significant burden of dental caries remains prevalent in some groups of children, indicating the need for more effective oral health promotion intervention (OHPI) strategies in this population. To explore the implementation process of a community-based OHPI in the North East of England, using Normalisation Process Theory (NPT) to provide insights into how effectiveness could be maximised. Utilising a generic qualitative research approach, 19 participants were recruited into the study. In-depth interviews were conducted with relevant National Health Service (NHS) staff and primary school teachers, while focus group discussions were conducted with reception teachers and teaching assistants. Analyses were conducted using thematic analysis with emergent themes mapped onto NPT constructs. Participants highlighted the benefits of OHPI and the need for evidence in practice. However, implementation of 'best evidence' was hampered by lack of adequate synthesis of evidence from available clinical studies on effectiveness of OHPI, as these generally have insufficient information on the dynamics of implementation and how effectiveness obtained in clinical studies could be achieved in 'real life'. This impacted on the decision-making process, levels of commitment, collaboration among OHP teams, resource allocation and evaluation of OHPI. A large gap exists between available research evidence and translation of evidence in OHPI in community settings. Effectiveness of OHPI requires not only an awareness of evidence of clinical effectiveness but also synthesised information about change mechanisms and implementation protocols. Copyright © 2017 Dennis Barber Ltd.

  3. Diabetic ketoacidosis in adult patients: an audit of factors influencing time to normalisation of metabolic parameters.

    Science.gov (United States)

    Lee, Melissa H; Calder, Genevieve L; Santamaria, John D; MacIsaac, Richard J

    2018-05-01

    Diabetic ketoacidosis (DKA) is an acute life-threatening metabolic complication of diabetes that imposes substantial burden on our healthcare system. There is a paucity of published data in Australia assessing factors influencing time to resolution of DKA and length of stay (LOS). To identify factors that predict a slower time to resolution of DKA in adults with diabetes. Retrospective audit of patients admitted to St Vincent's Hospital Melbourne between 2010 to 2014 coded with a diagnosis of 'Diabetic Ketoacidosis'. The primary outcome was time to resolution of DKA based on normalisation of biochemical markers. Episodes of DKA within the wider Victorian hospital network were also explored. Seventy-one patients met biochemical criteria for DKA; median age 31 years (26-45 years), 59% were male and 23% had newly diagnosed diabetes. Insulin omission was the most common precipitant (42%). Median time to resolution of DKA was 11 h (6.5-16.5 h). Individual factors associated with slower resolution of DKA were lower admission pH (P < 0.001) and higher admission serum potassium level (P = 0.03). Median LOS was 3 days (2-5 days), compared to a Victorian state-wide LOS of 2 days. Higher comorbidity scores were associated with longer LOS (P < 0.001). Lower admission pH levels and higher admission serum potassium levels are independent predictors of slower time to resolution of DKA. This may assist to stratify patients with DKA using markers of severity to determine who may benefit from closer monitoring and to predict LOS. © 2018 Royal Australasian College of Physicians.

  4. Quantification of tumour ¹⁸F-FDG uptake: Normalise to blood glucose or scale to liver uptake?

    Energy Technology Data Exchange (ETDEWEB)

    Keramida, Georgia [Brighton and Sussex Medical School, Clinical Imaging Sciences Centre, Brighton (United Kingdom); Brighton and Sussex University Hospitals NHS Trust, Department of Nuclear Medicine, Brighton (United Kingdom); University of Sussex, Clinical Imaging Sciences Centre, Brighton (United Kingdom); Dizdarevic, Sabina; Peters, A.M. [Brighton and Sussex Medical School, Clinical Imaging Sciences Centre, Brighton (United Kingdom); Brighton and Sussex University Hospitals NHS Trust, Department of Nuclear Medicine, Brighton (United Kingdom); Bush, Janice [Brighton and Sussex Medical School, Clinical Imaging Sciences Centre, Brighton (United Kingdom)

    2015-09-15

    To compare normalisation to blood glucose (BG) with scaling to hepatic uptake for quantification of tumour ¹⁸F-FDG uptake using the brain as a surrogate for tumours. Standardised uptake value (SUV) was measured over the liver, cerebellum, basal ganglia, and frontal cortex in 304 patients undergoing ¹⁸F-FDG PET/CT. The relationship between brain FDG clearance and SUV was theoretically defined. Brain SUV decreased exponentially with BG, with similar constants between cerebellum, basal ganglia, and frontal cortex (0.099–0.119 (mmol/l)⁻¹) and similar to values for tumours estimated from the literature. Liver SUV, however, correlated positively with BG. Brain-to-liver SUV ratio therefore showed an inverse correlation with BG, well fitted with a hyperbolic function (R = 0.83), as theoretically predicted. Brain SUV normalised to BG (nSUV) displayed a nonlinear correlation with BG (R = 0.55); however, as theoretically predicted, brain nSUV/liver SUV showed almost no correlation with BG. Correction of brain SUV using an exponential function of BG with constant 0.099 (mmol/l)⁻¹ also eliminated the correlation between brain SUV and BG. Brain SUV continues to correlate with BG after normalisation to BG. Likewise, liver SUV is unsuitable as a reference for tumour FDG uptake. Brain SUV divided by liver SUV, however, shows minimal dependence on BG. (orig.)
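
    The exponential BG correction described in this record can be sketched on synthetic data (the constant k = 0.099 (mmol/l)⁻¹ is taken from the abstract; all patient values below are simulated, not measured):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    k = 0.099                         # (mmol/l)^-1, constant from the abstract
    bg = rng.uniform(4.0, 12.0, 300)  # simulated blood glucose, mmol/l

    # Simulated brain SUV falling off exponentially with BG plus noise.
    suv_brain = 8.0 * np.exp(-k * bg) * rng.normal(1.0, 0.05, 300)

    # Exponential correction: multiply by exp(k*(BG - BG_ref)) so the
    # corrected SUV no longer depends on BG.
    bg_ref = 5.0
    suv_corr = suv_brain * np.exp(k * (bg - bg_ref))

    r_raw = np.corrcoef(bg, suv_brain)[0, 1]   # strongly negative
    r_corr = np.corrcoef(bg, suv_corr)[0, 1]   # near zero
    ```

    This is the behaviour the study reports: simple division by BG (nSUV) leaves a residual BG dependence, whereas the exponential correction removes it.
    
    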

  5. Analysis of a simulated microarray dataset: Comparison of methods for data normalisation and detection of differential expression (Open Access publication)

    Directory of Open Access Journals (Sweden)

    Mouzaki Daphné

    2007-11-01

    Full Text Available Microarrays allow researchers to measure the expression of thousands of genes in a single experiment. Before statistical comparisons can be made, the data must be assessed for quality and normalisation procedures must be applied, of which many have been proposed. Methods of comparing the normalised data are also abundant, and no clear consensus has yet been reached. The purpose of this paper was to compare those methods used by the EADGENE network on a very noisy simulated data set. With the a priori knowledge of which genes are differentially expressed, it is possible to compare the success of each approach quantitatively. Use of an intensity-dependent normalisation procedure was common, as was correction for multiple testing. Most variety in performance resulted from differing approaches to data quality and the use of different statistical tests. Very few of the methods used any kind of background correction. A number of approaches achieved a success rate of 95% or above, with relatively small numbers of false positives and negatives. Applying stringent spot selection criteria and elimination of data did not improve the false positive rate and greatly increased the false negative rate. However, most approaches performed well, and it is encouraging that widely available techniques can achieve such good results on a very noisy data set.

  6. OpenPrescribing: normalised data and software tool to research trends in English NHS primary care prescribing 1998-2016.

    Science.gov (United States)

    Curtis, Helen J; Goldacre, Ben

    2018-02-23

    We aimed to compile and normalise England's national prescribing data for 1998-2016 to facilitate research on long-term time trends and create an open-data exploration tool for wider use. We compiled data from each individual year's national statistical publications and normalised them by mapping each drug to its current classification within the national formulary where possible. We created a freely accessible, interactive web tool to allow anyone to interact with the processed data. We downloaded all available annual prescription cost analysis datasets, which include cost and quantity for all prescription items dispensed in the community in England. Medical devices and appliances were excluded. We measured the extent of normalisation of data and aimed to produce a functioning accessible analysis tool. All data were imported successfully. 87.5% of drugs were matched exactly on name to the current formulary and a further 6.5% to similar drug names. All drugs in core clinical chapters were reconciled to their current location in the data schema, with only 1.26% of drugs not assigned a current chemical code. We created an openly accessible interactive tool to facilitate wider use of these data. Publicly available data can be made accessible through interactive online tools to help researchers and policy-makers explore time trends in prescribing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
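The exact-then-similar name matching described above can be sketched as follows. `difflib` and the 0.8 similarity cutoff are illustrative stand-ins, not the authors' actual matching procedure, and the drug names are invented.

```python
import difflib

def map_to_formulary(historic_names, current_formulary):
    """Map each historic drug name to the current formulary:
    exact match first, then the closest similar name, else None."""
    mapping = {}
    for name in historic_names:
        if name in current_formulary:
            mapping[name] = name      # exact match on name
        else:
            close = difflib.get_close_matches(
                name, current_formulary, n=1, cutoff=0.8)
            mapping[name] = close[0] if close else None
    return mapping
```

Counting exact matches, similar-name matches and unmatched entries in such a mapping gives figures analogous to the 87.5% / 6.5% reported above.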

  7. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.

  8. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.
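A minimal numpy sketch of eigenvalue-eigenvector decomposition applied to a pairwise dissimilarity matrix, here double-centred as in classical multidimensional scaling and run on synthetic two-group data; the paper's actual TSFS processing is not reproduced.

```python
import numpy as np

def eed_scores(D, n_factors=2):
    """Eigen-decompose a pairwise dissimilarity matrix D after
    double-centring (classical MDS) and return sample scores on the
    leading eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# two synthetic groups, well separated in feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 3)), rng.normal(3, 0.1, (5, 3))])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
scores = eed_scores(D)
```

Because the decomposition acts on between-sample dissimilarities rather than overall variance, the leading factor directly reflects group structure, which is the classification advantage the abstract reports.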

  9. The implementation of medical revalidation: an assessment using normalisation process theory

    Directory of Open Access Journals (Sweden)

    Abigail Tazzyman

    2017-11-01

    Full Text Available Abstract Background Medical revalidation is the process by which all licensed doctors are legally required to demonstrate that they are up to date and fit to practise in order to maintain their licence. Revalidation was introduced in the United Kingdom (UK) in 2012, constituting significant change in the regulation of doctors. The governing body, the General Medical Council (GMC), envisages that revalidation will improve patient care and safety. This potential however is, in part, dependent upon how successfully revalidation is embedded into routine practice. The aim of this study was to use Normalisation Process Theory (NPT) to explore issues contributing to or impeding the implementation of revalidation in practice. Methods We conducted seventy-one interviews with sixty UK policymakers and senior leaders at different points during the development and implementation of revalidation: in 2011 (n = 31), 2013 (n = 26) and 2015 (n = 14). We selected interviewees using purposeful sampling. NPT was used as a framework to enable systematic analysis across the interview sets. Results Initial lack of consensus over revalidation’s purpose, and scepticism about its value, decreased over time as participants recognised the benefits it brought to their practice (coherence category of NPT). Though acceptance increased across time, revalidation was not seen as a legitimate part of their role by all doctors. Key individuals, notably the Responsible Officer (RO), were vital for the successful implementation of revalidation in organisations (cognitive participation category). The ease with which revalidation could be integrated into working practices varied greatly depending on the type of role a doctor held and the organisation they worked for, and the provision of resources was a significant variable in this (collective action category). Formal evaluation of revalidation in organisations was lacking but informal evaluation was taking place. Revalidation had

  10. Trends of air pollution in Denmark - Normalised by a simple weather index model

    International Nuclear Information System (INIS)

    Kiilsholm, S.; Rasmussen, A.

    2000-01-01

    This report is a part of the Traffic Pool projects on 'Traffic and Environments', 1995-99, financed by the Danish Ministry of Transport. The Traffic Pool projects included five different projects on 'Surveillance of the Air Quality', 'Atmospheric Modelling', 'Atmospheric Chemistry Modelling', 'Smog and ozone' and 'Greenhouse effects and Climate', [Rasmussen, 2000]. This work is a part of the project on 'Surveillance of the Air Quality', with the main objective to make trend analyses of levels of air pollution from traffic in Denmark. Other participants were from the Road Directorate, mainly focusing on measurement of traffic and trend analysis of the air quality utilising a Nordic model for air pollution in street canyons called BLB (Beregningsmodel for Luftkvalitet i Byluftgader) [Vejdirektoratet 2000]; the National Environmental Research Institute (NERI), mainly focusing on measurements of air pollution and trend analysis with the Operational Street Pollution Model (OSPM) [DMU 2000]; and the Copenhagen Environmental Protection Agency, mainly focusing on measurements. In this study a simpler statistical model has been developed for trend analysis of the air quality. The model filters out the influence of year-to-year variations in the meteorological conditions on air pollution levels. The weather factors found most important are wind speed, wind direction and mixing height. Measurements of CO, NO and NO{sub 2} from three streets in Copenhagen have been used; these streets are Jagtvej, Bredgade and H. C. Andersen's Boulevard (HCAB). The years 1994-1996 were used for evaluation of the method, and an annual air pollution index dependent only on meteorological parameters, called WEATHIX, was calculated for the years 1990-1997 and used for normalisation of the observed air pollution trends. Meteorological data were taken from either the background station at the H.C. Oersted building, situated close to one of the street stations, or the synoptic
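The normalisation idea can be sketched simply: divide observed annual means by a weather index so that the residual trend reflects emissions rather than meteorology. The index values below are invented, and the real WEATHIX model is of course more elaborate.

```python
def normalise_by_weather(annual_mean, weathix, base_year):
    """Normalise annual mean concentrations by a weather index so that
    remaining year-to-year variation reflects emissions, not weather.
    The index is rescaled to 1.0 in the chosen base year."""
    base = weathix[base_year]
    return {year: annual_mean[year] / (weathix[year] / base)
            for year in annual_mean}
```

In this toy example a 20% rise in concentration that coincides with a 20% more pollution-favourable weather index normalises away entirely, leaving a flat emission trend.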

  11. Preoperative mapping of cortical language areas in adult brain tumour patients using PET and individual non-normalised SPM analyses

    International Nuclear Information System (INIS)

    Meyer, Philipp T.; Sturz, Laszlo; Schreckenberger, Mathias; Setani, Keyvan S.; Buell, Udalrich; Spetzger, Uwe; Meyer, Georg F.; Sabri, Osama

    2003-01-01

    In patients scheduled for the resection of perisylvian brain tumours, knowledge of the cortical topography of language functions is crucial in order to avoid neurological deficits. We investigated the applicability of statistical parametric mapping (SPM) without stereotactic normalisation for individual preoperative language function brain mapping using positron emission tomography (PET). Seven right-handed adult patients with left-sided brain tumours (six frontal and one temporal) underwent 12 oxygen-15 labelled water PET scans during overt verb generation and rest. Individual activation maps were calculated for P<0.005 and P<0.001 without anatomical normalisation and overlaid onto the individuals' magnetic resonance images for preoperative planning. Activations corresponding to Broca's and Wernicke's areas were found in five and six cases, respectively, for P<0.005 and in three and six cases, respectively, for P<0.001. One patient with a glioma located in the classical Broca's area without aphasic symptoms presented an activation of the adjacent inferior frontal cortex and of a right-sided area homologous to Broca's area. Four additional patients with left frontal tumours also presented activations of the right-sided Broca's homologue; two of these showed aphasic symptoms and two only a weak or no activation of Broca's area. Other frequently observed activations included bilaterally the superior temporal gyri, prefrontal cortices, anterior insulae, motor areas and the cerebellum. The middle and inferior temporal gyri were activated predominantly on the left. An SPM group analysis (P<0.05, corrected) in patients with left frontal tumours confirmed the activation pattern shown by the individual analyses. We conclude that SPM analyses without stereotactic normalisation offer a promising alternative for analysing individual preoperative language function brain mapping studies. The observed right frontal activations agree with proposed reorganisation processes, but

  12. Learning Eigenvectors for Free

    NARCIS (Netherlands)

    W.M. Koolen-Wijkstra (Wouter); W.T. Kotlowski (Wojciech); M.K. Warmuth

    2011-01-01

    We extend the classical problem of predicting a sequence of outcomes from a finite alphabet to the matrix domain. In this extension, the alphabet of n outcomes is replaced by the set of all dyads, i.e. outer products uu^T where u is a vector in R^n of unit length. Whereas in the

  13. Good quality of oral anticoagulation treatment in general practice using international normalised ratio point of care testing

    DEFF Research Database (Denmark)

    Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent

    2015-01-01

    collected retrospectively for a period of six months. For each patient, time in therapeutic range (TTR) was calculated and correlated with practice and patient characteristics using multilevel linear regression models. RESULTS: We identified 447 patients in warfarin treatment in the 20 practices using POCT......INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin...
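Time in therapeutic range is typically computed by Rosendaal linear interpolation between consecutive INR measurements; the abstract does not state which method the authors used, so this is a generic sketch with an assumed therapeutic range of INR 2.0-3.0.

```python
def ttr_rosendaal(measurements, low=2.0, high=3.0):
    """Time in therapeutic range (%) by Rosendaal linear interpolation.
    measurements: list of (day, INR) tuples sorted by day."""
    in_range_days = 0.0
    total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = d1 - d0
        total_days += span
        if inr0 == inr1:
            in_range_days += span if low <= inr0 <= high else 0.0
            continue
        # fraction of the linear INR segment lying inside [low, high]
        lo, hi = sorted((inr0, inr1))
        overlap = max(0.0, min(hi, high) - max(lo, low))
        in_range_days += span * overlap / (hi - lo)
    return 100.0 * in_range_days / total_days
```

For example, an INR rising linearly from 1.5 to 2.5 over ten days spends half of that segment inside the 2.0-3.0 range, giving a TTR of 50%.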

  14. Good quality of oral anticoagulation treatment in general practice using international normalised ratio point of care testing

    DEFF Research Database (Denmark)

    Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent

    2015-01-01

    INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin...... in the management of patients in warfarin treatment provided good quality of care. Sampling interval and diagnostic coding were significantly correlated with treatment quality. FUNDING: The study received financial support from the Sarah Krabbe Foundation, the General Practitioners’ Education and Development Foundation...

  15. Calculation of normalised organ and effective doses to adult reference computational phantoms from contemporary computed tomography scanners

    International Nuclear Information System (INIS)

    Jansen, Jan T.M.; Shrimpton, Paul C.

    2010-01-01

    The general-purpose Monte Carlo radiation transport code MCNPX has been used to simulate photon transport and energy deposition in anthropomorphic phantoms due to the x-ray exposure from the Philips iCT 256 and Siemens Definition CT scanners, together with the previously studied General Electric 9800. The MCNPX code was compiled with the Intel FORTRAN compiler and run on a Linux PC cluster. A patch has been successfully applied to reduce computing times by about 4%. The International Commission on Radiological Protection (ICRP) has recently published the Adult Male (AM) and Adult Female (AF) reference computational voxel phantoms as successors to the Medical Internal Radiation Dose (MIRD) stylised hermaphrodite mathematical phantoms that form the basis for the widely-used ImPACT CT dosimetry tool. Comparisons of normalised organ and effective doses calculated for a range of scanner operating conditions have demonstrated significant differences in results (in excess of 30%) between the voxel and mathematical phantoms as a result of variations in anatomy. These analyses illustrate the significant influence of choice of phantom on normalised organ doses and the need for standardisation to facilitate comparisons of dose. Further such dose simulations are needed in order to update the ImPACT CT Patient Dosimetry spreadsheet for contemporary CT practice. (author)

  16. Normalisation in product life cycle assessment: an LCA of the global and European economic systems in the year 2000.

    Science.gov (United States)

    Sleeswijk, Anneke Wegener; van Oers, Lauran F C M; Guinée, Jeroen B; Struijs, Jaap; Huijbregts, Mark A J

    2008-02-01

    In the methodological context of the interpretation of environmental life cycle assessment (LCA) results, a normalisation study was performed. Fifteen impact categories were accounted for, including climate change, acidification, eutrophication, human toxicity, ecotoxicity, depletion of fossil energy resources, and land use. The year 2000 was chosen as a reference year, and information was gathered on two spatial levels: the global and the European level. From the 860 environmental interventions collected, 48 interventions turned out to account for at least 75% of the impact scores of all impact categories. All non-toxicity related, emission dependent impacts are fully dominated by the bulk emissions of only 10 substances or substance groups: CO(2), CH(4), SO(2), NO(x), NH(3), PM(10), NMVOC, and (H)CFCs emissions to air and emissions of N- and P-compounds to fresh water. For the toxicity-related emissions (pesticides, organics, metal compounds and some specific inorganics), the availability of information was still very limited, leading to large uncertainty in the corresponding normalisation factors. Apart from their usefulness as a reference for LCA studies, the results of this study stress the importance of efficient measures to combat bulk emissions and to promote the registration of potentially toxic emissions on a more comprehensive scale.
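The normalisation step itself is simple: each characterised impact score is divided by the corresponding reference-system score (e.g. the global year-2000 total for that category). A minimal sketch with invented numbers:

```python
def normalise_impacts(product_scores, reference_scores):
    """Express product impact scores as fractions of the reference
    system's scores, category by category."""
    return {cat: product_scores[cat] / reference_scores[cat]
            for cat in product_scores}
```

The resulting dimensionless scores let impact categories with very different units (kg CO2-eq, kg SO2-eq, m2 of land) be compared on a common scale.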

  17. 18S rRNA is a reliable normalisation gene for real time PCR based on influenza virus infected cells

    Directory of Open Access Journals (Sweden)

    Kuchipudi Suresh V

    2012-10-01

    Full Text Available Abstract Background One requisite of quantitative reverse transcription PCR (qRT-PCR) is to normalise the data with an internal reference gene that is invariant regardless of treatment, such as virus infection. Several studies have found variability in the expression of commonly used housekeeping genes, such as beta-actin (ACTB) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH), under different experimental settings. However, ACTB and GAPDH remain widely used in the studies of host gene response to virus infections, including influenza viruses. To date no detailed study has been described that compares the suitability of commonly used housekeeping genes in influenza virus infections. The present study evaluated several commonly used housekeeping genes [ACTB, GAPDH, 18S ribosomal RNA (18S rRNA), ATP synthase, H+ transporting, mitochondrial F1 complex, beta polypeptide (ATP5B) and ATP synthase, H+ transporting, mitochondrial Fo complex, subunit C1 (subunit 9) (ATP5G1)] to identify the most stably expressed gene in human, pig, chicken and duck cells infected with a range of influenza A virus subtypes. Results The relative expression stability of commonly used housekeeping genes was determined in primary human bronchial epithelial cells (HBECs), pig tracheal epithelial cells (PTECs), and chicken and duck primary lung-derived cells infected with five influenza A virus subtypes. Analysis of qRT-PCR data from virus and mock infected cells using NormFinder and BestKeeper software programmes found that 18S rRNA was the most stable gene in HBECs, PTECs and avian lung cells. Conclusions Based on the presented data from cell culture models (HBECs, PTECs, chicken and duck lung cells) infected with a range of influenza viruses, we found that 18S rRNA is the most stable reference gene for normalising qRT-PCR data. Expression levels of the other housekeeping genes evaluated in this study (including ACTB and GAPDH) were highly affected by influenza virus infection and
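A simplified stand-in for the BestKeeper-style stability ranking described above: treat the candidate gene with the smallest Ct standard deviation across mock- and virus-infected samples as the most stable. The Ct values here are invented for illustration.

```python
import statistics

def most_stable_gene(ct_values):
    """ct_values: {gene: [Ct across all samples and conditions]}.
    Rank candidate reference genes by Ct standard deviation
    (lower SD = more stable expression)."""
    sd = {gene: statistics.stdev(cts) for gene, cts in ct_values.items()}
    return min(sd, key=sd.get), sd

cts = {
    "18S rRNA": [9.1, 9.2, 9.0, 9.1],      # barely moves on infection
    "ACTB":     [18.0, 19.5, 21.0, 22.3],  # strongly affected
}
best, sd = most_stable_gene(cts)
```

NormFinder and BestKeeper use more sophisticated intra-/inter-group variance models, but the ranking principle is the same.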

  18. From theory to 'measurement' in complex interventions: methodological lessons from the development of an e-health normalisation instrument.

    Science.gov (United States)

    Finch, Tracy L; Mair, Frances S; O'Donnell, Catherine; Murray, Elizabeth; May, Carl R

    2012-05-17

    Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. The developed instrument was pre-tested in two professional samples (N=46; N=231). Ratings of items representing normalisation 'processes' were significantly related to staff members' perceptions of whether or not e-health had become 'routine'. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes measurement; (3) representation of multiple perspectives and collaborative nature of

  19. Aberrant brain responses to emotionally valent words is normalised after cognitive behavioural therapy in female depressed adolescents.

    Science.gov (United States)

    Chuang, Jie-Yu; J Whitaker, Kirstie; Murray, Graham K; Elliott, Rebecca; Hagan, Cindy C; Graham, Julia Me; Ooi, Cinly; Tait, Roger; Holt, Rosemary J; van Nieuwenhuizen, Adrienne O; Reynolds, Shirley; Wilkinson, Paul O; Bullmore, Edward T; Lennox, Belinda R; Sahakian, Barbara J; Goodyer, Ian; Suckling, John

    2016-01-01

    Depression in adolescence is debilitating with high recurrence in adulthood, yet its pathophysiological mechanism remains enigmatic. To examine the interaction between emotion, cognition and treatment, functional brain responses to sad and happy distractors in an affective go/no-go task were explored before and after Cognitive Behavioural Therapy (CBT) in depressed female adolescents, and healthy participants. Eighty-two depressed and 24 healthy female adolescents, aged 12-17 years, performed a functional magnetic resonance imaging (fMRI) affective go/no-go task at baseline. Participants were instructed to withhold their responses upon seeing happy or sad words. Among these participants, 13 patients had CBT over approximately 30 weeks. These participants and 20 matched controls then repeated the task. At baseline, increased activation in response to happy relative to neutral distractors was observed in the orbitofrontal cortex in depressed patients which was normalised after CBT. No significant group differences were found behaviourally or in brain activation in response to sad distractors. Improvements in symptoms (mean: 9.31, 95% CI: 5.35-13.27) were related at trend-level to activation changes in orbitofrontal cortex. In the follow-up section, a limited number of post-CBT patients were recruited. To our knowledge, this is the first fMRI study addressing the effect of CBT in adolescent depression. Although a bias toward negative information is widely accepted as a hallmark of depression, aberrant brain hyperactivity to positive distractors was found and normalised after CBT. Research, assessment and treatment focused on positive stimuli could be a future consideration. Moreover, a pathophysiological mechanism distinct from adult depression may be suggested and awaits further exploration. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  20. From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument

    Directory of Open Access Journals (Sweden)

    Finch Tracy L

    2012-05-01

    Full Text Available Abstract Background Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. Methods A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. Results The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation ‘processes’ were significantly related to staff members’ perceptions of whether or not e-health had become ‘routine’. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. Conclusions To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes

  1. Rational parametrisation of normalised Stiefel manifolds, and explicit non-'t Hooft solutions of the Atiyah-Drinfeld-Hitchin-Manin instanton matrix equations for Sp(n)

    International Nuclear Information System (INIS)

    McCarthy, P.J.

    1981-01-01

    It is proved that normalised Stiefel manifolds admit a rational parametrisation which generalises Cayley's parametrisation of the unitary groups. Applying (the quaternionic case of) this parametrisation to the Atiyah-Drinfeld-Hitchin-Manin (ADHM) instanton matrix equations, large families of new explicit rational solutions emerge. In particular, new explicit non-'t Hooft solutions are presented. (orig.)
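Cayley's parametrisation of the unitary groups, which the paper generalises, can be illustrated numerically: U = (I - A)(I + A)^{-1} is unitary whenever A is skew-Hermitian. This is a minimal numpy sketch of that classical fact only, not of the ADHM construction itself.

```python
import numpy as np

def cayley_unitary(A):
    """Map a skew-Hermitian matrix A to a unitary matrix via the
    Cayley transform U = (I - A)(I + A)^{-1}. I + A is always
    invertible because A has purely imaginary eigenvalues."""
    I = np.eye(A.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M - M.conj().T) / 2          # skew-Hermitian: A† = -A
U = cayley_unitary(A)
```

The rational (matrix-inverse) form of this map is what makes the parametrisation attractive: plugging rational parametrisations into the ADHM matrix equations yields rational solutions.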

  2. A normalised seawater strontium isotope curve. Possible implications for Neoproterozoic-Cambrian weathering rates and the further oxygenation of the Earth

    International Nuclear Information System (INIS)

    Shields, G.A.

    2007-01-01

    The strontium isotope composition of seawater is strongly influenced on geological time scales by changes in the rates of continental weathering relative to ocean crust alteration. However, the potential of the seawater {sup 87}Sr/{sup 86}Sr curve to trace globally integrated chemical weathering rates has not been fully realised because ocean {sup 87}Sr/{sup 86}Sr is also influenced by the isotopic evolution of Sr sources to the ocean. A preliminary attempt is made here to normalise the seawater {sup 87}Sr/{sup 86}Sr curve to plausible trends in the {sup 87}Sr/{sup 86}Sr ratios of the three major Sr sources: carbonate dissolution, silicate weathering and submarine hydrothermal exchange. The normalised curve highlights the Neoproterozoic-Phanerozoic transition as a period of exceptionally high continental influence, indicating that this interval was characterised by a transient increase in global weathering rates and/or by the weathering of unusually radiogenic crustal rocks. Close correlation between the normalised {sup 87}Sr/{sup 86}Sr curve, a published seawater δ{sup 34}S curve and atmospheric pCO{sub 2} models is used here to argue that elevated chemical weathering rates were a major contributing factor to the steep rise in seawater {sup 87}Sr/{sup 86}Sr from 650 Ma to 500 Ma. Elevated weathering rates during the Neoproterozoic-Cambrian interval led to increased nutrient availability, organic burial and to the further oxygenation of Earth's surface environment. Use of normalised seawater {sup 87}Sr/{sup 86}Sr curves will, it is hoped, help to improve future geochemical models of Earth System dynamics. (orig.)
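The logic of such a normalisation can be illustrated with a two-endmember mixing sketch: express the seawater ratio as the fraction of continental influence between the hydrothermal and continental source ratios. The function and the ratio values below are illustrative only; a real treatment would weight by Sr concentration and flux and include the carbonate source.

```python
def continental_fraction(r_seawater, r_continental, r_hydrothermal):
    """Fraction of the Sr isotope budget attributable to continental
    input for a simple two-endmember mixture (ignores concentration
    and flux weighting)."""
    return (r_seawater - r_hydrothermal) / (r_continental - r_hydrothermal)
```

Normalising to the evolving source ratios in this way removes the part of the seawater signal that merely tracks source composition, isolating the weathering-rate signal the abstract is after.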

  3. The Application of Principal Component Analysis Using Fixed Eigenvectors to the Infrared Thermographic Inspection of the Space Shuttle Thermal Protection System

    Science.gov (United States)

    Cramer, K. Elliott; Winfree, William P.

    2006-01-01

    The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. A typical implementation of PCA is one in which the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the good material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors is used to process the thermal data from the RCC materials. These eigenvectors can be generated either from an analytic model of the thermal response of the material under examination, or from a large cross section of experimental data. This paper will provide the
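The fixed-eigenvector variant can be sketched as follows: compute eigenvectors once from a reference data set (or an analytic model of the thermal response), then reuse them to project each new inspection data set, avoiding a per-inspection eigendecomposition. A minimal numpy illustration with synthetic data:

```python
import numpy as np

def fixed_eigenvectors(reference, n_components=2):
    """Leading eigenvectors of the covariance of a reference data set
    (rows = observations, e.g. pixel time histories)."""
    centred = reference - reference.mean(axis=0)
    cov = centred.T @ centred / (len(reference) - 1)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

def project(data, eigvecs):
    """Project new thermal data onto the fixed eigenvectors."""
    return (data - data.mean(axis=0)) @ eigvecs

rng = np.random.default_rng(2)
reference = rng.normal(size=(200, 8))   # reference thermal responses
V = fixed_eigenvectors(reference)       # computed once, then reused
scores = project(rng.normal(size=(50, 8)), V)
```

Because V comes from defect-free reference material, a large defect in the new data can no longer skew the basis onto which the data are projected, which is the robustness advantage the text describes.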

  4. Learning from doing: the case for combining normalisation process theory and participatory learning and action research methodology for primary healthcare implementation research.

    Science.gov (United States)

    de Brún, Tomas; O'Reilly-de Brún, Mary; O'Donnell, Catherine A; MacFarlane, Anne

    2016-08-03

    The implementation of research findings is not a straightforward matter. There are substantive and recognised gaps in the process of translating research findings into practice and policy. In order to overcome some of these translational difficulties, a number of strategies have been proposed for researchers. These include greater use of theoretical approaches in research focused on implementation, and use of a wider range of research methods appropriate to policy questions and the wider social context in which they are placed. However, questions remain about how to combine theory and method in implementation research. In this paper, we respond to these proposals. Focussing on a contemporary social theory, Normalisation Process Theory, and a participatory research methodology, Participatory Learning and Action, we discuss the potential of their combined use for implementation research. We note ways in which Normalisation Process Theory and Participatory Learning and Action are congruent and may therefore be used as heuristic devices to explore, better understand and support implementation. We also provide examples of their use in our own research programme about community involvement in primary healthcare. Normalisation Process Theory alone has, to date, offered useful explanations for the success or otherwise of implementation projects post-implementation. We argue that Normalisation Process Theory can also be used to prospectively support implementation journeys. Furthermore, Normalisation Process Theory and Participatory Learning and Action can be used together so that interventions to support implementation work are devised and enacted with the expertise of key stakeholders. We propose that the specific combination of this theory and methodology possesses the potential, because of their combined heuristic force, to offer a more effective means of supporting implementation projects than either one might do on its own, and of providing deeper understandings of

  5. Spin-orbit split excited states using explicitly-correlated equation-of-motion coupled-cluster singles and doubles eigenvectors

    Science.gov (United States)

    Bokhan, Denis; Trubnikov, Dmitrii N.; Perera, Ajith; Bartlett, Rodney J.

    2018-04-01

    An explicitly correlated method for the calculation of excited states with spin-orbit couplings has been formulated and implemented. The developed approach utilizes the left and right eigenvectors of the equation-of-motion coupled-cluster model, which is based on the linearly approximated explicitly correlated coupled-cluster singles and doubles [CCSD(F12)] method. The spin-orbit interactions are introduced using the spin-orbit mean field (SOMF) approximation of the Breit-Pauli Hamiltonian. Numerical tests for several atoms and molecules show good agreement between the explicitly correlated results and the corresponding values calculated in the complete basis set (CBS) limit; highly accurate excitation energies can be obtained already at the triple-ζ level.
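As a purely illustrative aside (this is generic linear algebra, not the CCSD(F12) implementation described above), the distinction between left and right eigenvectors that EOM-CC methods rely on arises because the similarity-transformed Hamiltonian is non-Hermitian. A small random matrix makes the structure visible:

```python
import numpy as np

# A small random non-symmetric matrix stands in for a non-Hermitian
# (similarity-transformed) Hamiltonian, which has distinct left and
# right eigenvectors.
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))

# Right eigenvectors: H R = R diag(evals).
evals, R = np.linalg.eig(H)

# Rows of R^{-1} are the left eigenvectors for the same eigenvalues,
# biorthonormal to the right eigenvectors by construction.
left = np.linalg.inv(R)

print(np.allclose(left @ H, np.diag(evals) @ left))  # left-eigenvector relation
print(np.allclose(left @ R, np.eye(4)))              # biorthonormality
```

In practice EOM-CC codes solve for left and right vectors iteratively rather than by matrix inversion; the biorthonormality shown here is what makes transition properties well defined.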

  6. Disrupted Brain Network in Progressive Mild Cognitive Impairment Measured by Eigenvector Centrality Mapping is Linked to Cognition and Cerebrospinal Fluid Biomarkers.

    Science.gov (United States)

    Qiu, Tiantian; Luo, Xiao; Shen, Zhujing; Huang, Peiyu; Xu, Xiaojun; Zhou, Jiong; Zhang, Minming

    2016-10-18

    Mild cognitive impairment (MCI) is a heterogeneous condition associated with a high risk of progressing to Alzheimer's disease (AD). Although functional brain network alterations have been observed in progressive MCI (pMCI), the underlying pathological mechanisms of these network alterations remain unclear. In the present study, we evaluated neuropsychological, imaging, and cerebrospinal fluid (CSF) data at baseline across a cohort of 21 pMCI patients, 33 stable MCI (sMCI) patients, and 29 normal controls. Fast eigenvector centrality mapping (fECM) based on resting-state functional MRI (rsfMRI) was used to investigate differences in brain network organization among these groups, and we further assessed its relation to cognition and AD-related pathology. Our results demonstrated that patients with pMCI had decreased eigenvector centrality (EC) in the left temporal pole and parahippocampal gyrus, and increased EC in the left middle frontal gyrus, compared to sMCI. In addition, compared to normal controls, patients with pMCI showed decreased EC in the right hippocampus and bilateral parahippocampal gyrus, and sMCI patients had decreased EC in the right middle frontal gyrus and superior parietal lobule. Correlation analysis showed that EC in the left temporal pole was related to the Wechsler Memory Scale-Revised Logical Memory (WMS-LM) delay score (r = 0.467, p = 0.044) and the total tau (t-tau) level in CSF (r = -0.509, p = 0.026) in pMCI. Our findings implicate EC changes of different brain network nodes in the prognosis of pMCI and sMCI. Importantly, the association between decreased EC of brain network nodes and pathological changes may provide a deeper understanding of the underlying pathophysiology of pMCI.
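Eigenvector centrality itself is easy to illustrate. The toy adjacency matrix below is hypothetical and stands in for the voxel-wise rsfMRI connectivity matrix that fECM operates on; a node's centrality is its entry in the dominant eigenvector, so a node scores highly when it connects to other highly connected nodes:

```python
import numpy as np

# Toy symmetric adjacency matrix for a 5-node network (hypothetical data,
# in place of a voxel-wise rsfMRI connectivity matrix).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

def eigenvector_centrality(adj, n_iter=1000, tol=1e-10):
    """Power iteration: for a non-negative adjacency matrix this converges
    to the dominant (Perron-Frobenius) eigenvector, whose entries are the
    eigenvector centralities."""
    x = np.ones(adj.shape[0]) / adj.shape[0]
    for _ in range(n_iter):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

ec = eigenvector_centrality(A)
print(ec)  # node 2, connected to all others, has the largest centrality
```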

  7. Analysis of structural correlations in a model binary 3D liquid through the eigenvalues and eigenvectors of the atomic stress tensors

    International Nuclear Information System (INIS)

    Levashov, V. A.

    2016-01-01

    It is possible to associate with every atom or molecule in a liquid its own atomic stress tensor. These atomic stress tensors can be used to describe liquids' structures and to investigate the connection between structural and dynamic properties. In particular, atomic stresses make it possible to address atomic-scale correlations relevant to the Green-Kubo expression for viscosity. Previously, correlations between the atomic stresses of different atoms were studied using the Cartesian representation of the stress tensors or a representation based on spherical harmonics. In this paper we address structural correlations in a 3D model binary liquid using the eigenvalues and eigenvectors of the atomic stress tensors. This approach allows the correlations relevant to the Green-Kubo expression for viscosity to be interpreted in a simple geometric way. As temperature decreases, the changes in the relevant stress correlation function between different atoms are significantly more pronounced than the changes in the pair density function. We demonstrate that this behaviour originates from the orientational correlations between the eigenvectors of the atomic stress tensors. We also found correlations between the eigenvalues of the same atomic stress tensor. For the studied system, with purely repulsive interactions between the particles, the eigenvalues of every atomic stress tensor are positive and can be ordered: λ1 ≥ λ2 ≥ λ3 ≥ 0. We found that, for particles of a given type, the probability distributions of the ratios (λ2/λ1) and (λ3/λ2) are essentially identical to each other in the liquid state. We also found that λ2 tends to be equal to the geometric average of λ1 and λ3. In our view, correlations between the eigenvalues may represent "the Poisson ratio effect" at the atomic scale.
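The eigenvalue ordering and the ratios studied in this record can be sketched for a single tensor. The stress tensor below is a hypothetical stand-in, built to be symmetric positive semi-definite (as the paper reports for purely repulsive interactions):

```python
import numpy as np

# Hypothetical symmetric 3x3 atomic stress tensor with non-negative
# eigenvalues, standing in for one computed from purely repulsive
# pair interactions.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
stress = M @ M.T  # symmetric positive semi-definite by construction

vals, vecs = np.linalg.eigh(stress)  # ascending eigenvalues, orthonormal eigenvectors
l3, l2, l1 = vals                    # relabel so that l1 >= l2 >= l3 >= 0
assert l1 >= l2 >= l3 >= 0

# Quantities examined in the paper: the ratios l2/l1 and l3/l2, and how
# close l2 is to the geometric average of l1 and l3.
print(l2 / l1, l3 / l2, l2 / np.sqrt(l1 * l3))
```

A ratio l2 / sqrt(l1 * l3) near 1 corresponds to the paper's observation that the middle eigenvalue tends toward the geometric average of the extreme ones.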

  8. Analysis of structural correlations in a model binary 3D liquid through the eigenvalues and eigenvectors of the atomic stress tensors

    Energy Technology Data Exchange (ETDEWEB)

    Levashov, V. A. [Technological Design Institute of Scientific Instrument Engineering, Novosibirsk 630058 (Russian Federation)

    2016-03-07

    It is possible to associate with every atom or molecule in a liquid its own atomic stress tensor. These atomic stress tensors can be used to describe liquids' structures and to investigate the connection between structural and dynamic properties. In particular, atomic stresses make it possible to address atomic-scale correlations relevant to the Green-Kubo expression for viscosity. Previously, correlations between the atomic stresses of different atoms were studied using the Cartesian representation of the stress tensors or a representation based on spherical harmonics. In this paper we address structural correlations in a 3D model binary liquid using the eigenvalues and eigenvectors of the atomic stress tensors. This approach allows the correlations relevant to the Green-Kubo expression for viscosity to be interpreted in a simple geometric way. As temperature decreases, the changes in the relevant stress correlation function between different atoms are significantly more pronounced than the changes in the pair density function. We demonstrate that this behaviour originates from the orientational correlations between the eigenvectors of the atomic stress tensors. We also found correlations between the eigenvalues of the same atomic stress tensor. For the studied system, with purely repulsive interactions between the particles, the eigenvalues of every atomic stress tensor are positive and can be ordered: λ1 ≥ λ2 ≥ λ3 ≥ 0. We found that, for particles of a given type, the probability distributions of the ratios (λ2/λ1) and (λ3/λ2) are essentially identical to each other in the liquid state. We also found that λ2 tends to be equal to the geometric average of λ1 and λ3. In our view, correlations between the eigenvalues may represent "the Poisson ratio effect" at the atomic scale.

  9. Normalisation of cerebrospinal fluid biomarkers parallels improvement of neurological symptoms following HAART in HIV dementia – case report

    Directory of Open Access Journals (Sweden)

    Blennow Kaj

    2006-09-01

    Full Text Available Abstract Background Since the introduction of HAART the incidence of HIV dementia has declined, and HAART seems to improve neurocognitive function in patients with HIV dementia. Currently, HIV dementia develops mainly in patients without effective treatment, though it has also been described in patients on HAART, and milder HIV-associated neuropsychological impairment is still frequent among HIV-1 infected patients regardless of HAART. Elevated cerebrospinal fluid (CSF) levels of markers of neural injury and immune activation have been found in HIV dementia, but neither of those, nor CSF HIV-1 RNA levels, have been proven useful as diagnostic or prognostic surrogate markers in HIV dementia. Case presentation We report a case of HIV dementia (MSK stage 3) in a 57-year-old antiretroviral-naïve man who was started on zidovudine, lamivudine and ritonavir-boosted indinavir, and followed with consecutive lumbar punctures before, and two and 15 months after, initiation of HAART. Improvement of neurocognitive function was paralleled by normalisation of CSF neural marker (NFL, Tau and GFAP) levels and a decline in CSF and serum neopterin and CSF and plasma HIV-1 RNA levels. Conclusion The value of these CSF markers as prognostic surrogate markers of the effect of HAART on neurocognitive impairment in HIV dementia ought to be evaluated in longitudinal studies.

  10. Night-time restricted feeding normalises clock genes and Pai-1 gene expression in the db/db mouse liver.

    Science.gov (United States)

    Kudo, T; Akiyama, M; Kuriyama, K; Sudo, M; Moriya, T; Shibata, S

    2004-08-01

    An increase in PAI-1 activity is thought to be a key factor underlying myocardial infarction. Mouse Pai-1 (mPai-1) activity shows a daily rhythm in vivo, and its transcription seems to be controlled not only by clock genes but also by humoral factors such as insulin and triglycerides. Thus, we investigated the daily expression of clock genes and mPai-1 mRNA in the liver of db/db mice exhibiting high levels of glucose, insulin and triglycerides. Locomotor activity was measured using an infrared detection system. RT-PCR or in situ hybridisation methods were applied to measure gene expression. Humoral factors were measured using measurement kits. The db/db mice showed attenuated locomotor activity rhythms. The rhythmic expression of mPer2 mRNA was severely diminished and the phase of mBmal1 oscillation was advanced in the db/db mouse liver, whereas mPai-1 mRNA was highly and constitutively expressed. Night-time restricted feeding led to a recovery not only from the diminished locomotor activity, but also from the diminished Per2 and advanced mBmal1 mRNA rhythms. Expression of mPai-1 mRNA in db/db mice was reduced to levels far below normal. Pioglitazone treatment slightly normalised glucose and insulin levels, with a slight reduction in mPai-1 gene expression. We demonstrated that Type 2 diabetes impairs the oscillation of the peripheral oscillator. Night-time restricted feeding, rather than pioglitazone injection, led to a recovery from the diminished locomotor activity and the altered oscillation of the peripheral clock and mPai-1 mRNA rhythm. Thus, we conclude that scheduled restricted food intake may be a useful form of treatment for diabetes.

  11. Analysis of a normalised expressed sequence tag (EST) library from a key pollinator, the bumblebee Bombus terrestris.

    Science.gov (United States)

    Sadd, Ben M; Kube, Michael; Klages, Sven; Reinhardt, Richard; Schmid-Hempel, Paul

    2010-02-15

    The bumblebee, Bombus terrestris (Order Hymenoptera), is of widespread importance. This species is extensively used for commercial pollination in Europe, and along with other Bombus spp. is a key member of natural pollinator assemblages. Furthermore, the species is studied in a wide variety of biological fields. The objective of this project was to create a B. terrestris EST resource that will prove to be valuable in obtaining a deeper understanding of this significant social insect. A normalised cDNA library was constructed from the thorax and abdomen of B. terrestris workers in order to enhance the discovery of rare genes. A total of 29,428 ESTs were sequenced. Subsequent clustering resulted in 13,333 unique sequences. Of these, 58.8 percent had significant similarities to known proteins, with 54.5 percent having a "best-hit" to existing Hymenoptera sequences. Comparisons with the honeybee and other insects allowed the identification of potential candidates for gene loss, pseudogene evolution, and possible incomplete annotation in the honeybee genome. Further, given the focus of much basic research and the perceived threat of disease to natural and commercial populations, the immune system of bumblebees is a particularly relevant component. Although the library is derived from unchallenged bees, we still uncover transcription of a number of immune genes spanning the principally described insect immune pathways. Additionally, the EST library provides a resource for the discovery of genetic markers that can be used in population level studies. Indeed, initial screens identified 589 simple sequence repeats and 854 potential single nucleotide polymorphisms. The resource that these B. terrestris ESTs represent is valuable for ongoing work. The ESTs provide direct evidence of transcriptionally active regions, but they will also facilitate further functional genomics, gene discovery and future genome annotation. These are important aspects in obtaining a greater understanding of this significant social insect.

  12. Facilitating professional liaison in collaborative care for depression in UK primary care; a qualitative study utilising normalisation process theory.

    Science.gov (United States)

    Coupe, Nia; Anderson, Emma; Gask, Linda; Sykes, Paul; Richards, David A; Chew-Graham, Carolyn

    2014-05-01

    Collaborative care (CC) is an organisational framework which facilitates the delivery of a mental health intervention to patients by case managers in collaboration with more senior health professionals (supervisors and GPs), and is effective for the management of depression in primary care. However, there remains limited evidence on how to successfully implement this collaborative approach in UK primary care. This study aimed to explore to what extent CC impacts on professional working relationships, and whether CC for depression could be implemented as routine in the primary care setting. This qualitative study explored the perspectives of the 6 case managers (CMs), 5 supervisors (trial research team members) and 15 general practitioners (GPs) from practices participating in a randomised controlled trial of CC for depression. Interviews were transcribed verbatim and data were analysed using a two-step approach: an initial thematic analysis, followed by a secondary analysis using the Normalisation Process Theory concepts of coherence, cognitive participation, collective action and reflexive monitoring with respect to the implementation of CC in primary care. Supervisors and CMs demonstrated coherence in their understanding of CC, and consequently reported good levels of cognitive participation and collective action regarding delivering and supervising the intervention. The GPs interviewed showed limited understanding of the CC framework, and reported limited collaboration with CMs; barriers to collaboration were identified. All participants identified the potential or experienced benefits of a collaborative approach to depression management and were able to discuss ways in which collaboration can be facilitated. Primary care professionals in this study valued the potential for collaboration, but GPs' understanding of CC and organisational barriers hindered opportunities for communication. Further work is needed to address these organisational barriers in order to facilitate collaboration between CMs and GPs.

  13. Assessing the facilitators and barriers of interdisciplinary team working in primary care using normalisation process theory: An integrative review.

    Science.gov (United States)

    O'Reilly, Pauline; Lee, Siew Hwa; O'Sullivan, Madeleine; Cullen, Walter; Kennedy, Catriona; MacFarlane, Anne

    2017-01-01

    Interdisciplinary team working is of paramount importance in the reform of primary care in order to provide cost-effective and comprehensive care. However, international research shows that it is not routine practice in many healthcare jurisdictions. It is imperative to understand levers and barriers to the implementation process. This review examines interdisciplinary team working in practice, in primary care, from the perspective of service providers and analyses (1) barriers and facilitators to implementation of interdisciplinary teams in primary care and (2) the main research gaps. An integrative review following the PRISMA guidelines was conducted. Following a search of 10 international databases, 8,827 titles were screened for relevance and 49 met the criteria. Quality of evidence was appraised using predetermined criteria. Data were analysed following the principles of framework analysis using Normalisation Process Theory (NPT), which has four constructs: sense making, enrolment, enactment, and appraisal. The literature is dominated by a focus on interdisciplinary working between physicians and nurses. There is a dearth of evidence about all NPT constructs apart from enactment. Physicians play a key role in encouraging the enrolment of others in primary care team working and in enabling effective divisions of labour in the team. The experience of interdisciplinary working emerged as a lever for its implementation, particularly where communication and respect were strong between professionals. A key lever for interdisciplinary team working in primary care is to get professionals working together and to learn from each other in practice. However, the evidence base is limited as it does not reflect the experiences of all primary care professionals and it is primarily about the enactment of team working. We need to know much more about the experiences of the full network of primary care professionals regarding all aspects of implementation work. International

  14. Assessing the facilitators and barriers of interdisciplinary team working in primary care using normalisation process theory: An integrative review

    Science.gov (United States)

    O’Reilly, Pauline; Lee, Siew Hwa; O’Sullivan, Madeleine; Cullen, Walter; Kennedy, Catriona; MacFarlane, Anne

    2017-01-01

    Background Interdisciplinary team working is of paramount importance in the reform of primary care in order to provide cost-effective and comprehensive care. However, international research shows that it is not routine practice in many healthcare jurisdictions. It is imperative to understand levers and barriers to the implementation process. This review examines interdisciplinary team working in practice, in primary care, from the perspective of service providers and analyses (1) barriers and facilitators to implementation of interdisciplinary teams in primary care and (2) the main research gaps. Methods and findings An integrative review following the PRISMA guidelines was conducted. Following a search of 10 international databases, 8,827 titles were screened for relevance and 49 met the criteria. Quality of evidence was appraised using predetermined criteria. Data were analysed following the principles of framework analysis using Normalisation Process Theory (NPT), which has four constructs: sense making, enrolment, enactment, and appraisal. The literature is dominated by a focus on interdisciplinary working between physicians and nurses. There is a dearth of evidence about all NPT constructs apart from enactment. Physicians play a key role in encouraging the enrolment of others in primary care team working and in enabling effective divisions of labour in the team. The experience of interdisciplinary working emerged as a lever for its implementation, particularly where communication and respect were strong between professionals. Conclusion A key lever for interdisciplinary team working in primary care is to get professionals working together and to learn from each other in practice. However, the evidence base is limited as it does not reflect the experiences of all primary care professionals and it is primarily about the enactment of team working. We need to know much more about the experiences of the full network of primary care professionals regarding all aspects of implementation work.

  15. Regulatory framework for products and processes: regulation and standardisation in international trade

    Directory of Open Access Journals (Sweden)

    Morin Odile

    2003-07-01

    Full Text Available Products and processes are governed both by regulations and, at another level, by international trade standards. This presentation deals with the regulatory texts at Community and national level. It should be recalled that the entry into force of a European regulation is followed by its transposition into the law of each member state, and that national regulation applies in the absence of Community provisions. With regard to international trade, the standardisation activities of the Conseil Oléicole International (COI) for olive oils and olive-pomace oils, and those of the Codex Alimentarius for edible oils and fats, are discussed. Taken together, these regulatory provisions form a framework covering production from upstream to downstream, both vertically (oilseeds, oils and fats, olive oils, margarines, refining processes) and transversally (volatile organic compounds, GMOs, extraction solvents, additives, contaminants, etc.). The case of olive oil is particular in that it is governed at the international level (COI and Codex Alimentarius trade standards) as well as at the European and national levels (regulation). The Codex Alimentarius, for its part, establishes standards of a vertical nature (vegetable oils, animal fats, olive oils, spreadable fats, etc.) and of a horizontal nature (additives, pesticide residues, etc.). The essentials of this framework are summarised in the tables that illustrate this contribution.

  16. ReadqPCR and NormqPCR: R packages for the reading, quality checking and normalisation of RT-qPCR quantification cycle (Cq data

    Directory of Open Access Journals (Sweden)

    Perkins James R

    2012-07-01

    Full Text Available Abstract Background Measuring gene transcription using real-time reverse transcription polymerase chain reaction (RT-qPCR) technology is a mainstay of molecular biology. Technologies now exist to measure the abundance of many transcripts in parallel. The selection of the optimal reference gene for the normalisation of these data is a recurring problem, and several algorithms have been developed in order to solve it. So far nothing exists in R to unite these methods, together with other functions to read in and normalise the data using the chosen reference gene(s). Results We have developed two R/Bioconductor packages, ReadqPCR and NormqPCR, intended for a user with some experience of high-throughput data analysis using R who wishes to use R to analyse RT-qPCR data. We illustrate their potential use in a workflow analysing a generic RT-qPCR experiment, and apply this to a real dataset. Packages are available from http://www.bioconductor.org/packages/release/bioc/html/ReadqPCR.html and http://www.bioconductor.org/packages/release/bioc/html/NormqPCR.html Conclusions These packages increase the repertoire of RT-qPCR analysis tools available to the R user and allow them to (amongst other things) read their data into R, hold it in an ExpressionSet-compatible R object, choose appropriate reference genes, normalise the data and look for differential expression between samples.
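The reference-gene normalisation these packages support is commonly done with the 2^-ΔΔCq method. The sketch below is in Python rather than R (for consistency with the other examples in this document), and the Cq values are hypothetical; it assumes 100% PCR efficiency, which NormqPCR-style workflows can relax:

```python
# Hypothetical Cq values for one target gene and one reference
# (housekeeping) gene in a control and a treated sample.
cq = {
    "control": {"target": 24.0, "reference": 20.0},
    "treated": {"target": 22.0, "reference": 20.5},
}

def ddcq_ratio(cq, ref="reference", gene="target"):
    """Relative expression by the 2^-ddCq method (assumes 100% PCR efficiency)."""
    dcq_control = cq["control"][gene] - cq["control"][ref]  # 24.0 - 20.0 = 4.0
    dcq_treated = cq["treated"][gene] - cq["treated"][ref]  # 22.0 - 20.5 = 1.5
    return 2.0 ** -(dcq_treated - dcq_control)

print(ddcq_ratio(cq))  # 2^-(1.5 - 4.0) = 2^2.5, roughly 5.66-fold up-regulation
```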

  17. The contribution of online content to the promotion and normalisation of female genital cosmetic surgery: a systematic review of the literature.

    Science.gov (United States)

    Mowat, Hayley; McDonald, Karalyn; Dobson, Amy Shields; Fisher, Jane; Kirkman, Maggie

    2015-11-25

    Women considering female genital cosmetic surgery (FGCS) are likely to use the internet as a key source of information during the decision-making process. The aim of this systematic review was to determine what is known about the role of the internet in the promotion and normalisation of female genital cosmetic surgery and to identify areas for future research. Eight social science, medical, and communication databases and Google Scholar were searched for peer-reviewed papers published in English. Results from all papers were analysed to identify recurring and unique themes. Five papers met inclusion criteria. Three of the papers reported investigations of website content of FGCS providers, a fourth compared motivations for labiaplasty publicised on provider websites with those disclosed by women in online communities, and the fifth analysed visual depictions of female genitalia in online pornography. Analysis yielded five significant and interrelated patterns of representation, each functioning to promote and normalise the practice of FGCS: pathologisation of genital diversity; female genital appearance as important to wellbeing; characteristics of women's genitals are important for sex life; female body as degenerative and improvable through surgery; and FGCS as safe, easy, and effective. A significant gap was identified in the literature: the ways in which user-generated content might function to perpetuate, challenge, or subvert the normative discourses prevalent in online pornography and surgical websites. Further research is needed to contribute to knowledge of the role played by the internet in the promotion and normalisation of female genital cosmetic surgery.

  18. Bariatric surgery in morbidly obese insulin resistant humans normalises insulin signalling but not insulin-stimulated glucose disposal.

    Directory of Open Access Journals (Sweden)

    Mimi Z Chen

    Full Text Available Weight loss after bariatric surgery improves insulin sensitivity, but the underlying molecular mechanism is not clear. To ascertain the effect of bariatric surgery on insulin signalling, we examined glucose disposal and Akt activation in morbidly obese volunteers before and after Roux-en-Y gastric bypass surgery (RYGB), and compared this to lean volunteers. The hyperinsulinaemic euglycaemic clamp, at five infusion rates, was used to determine glucose disposal rates (GDR) in eight morbidly obese patients (body mass index, BMI = 47.3 ± 2.2 kg/m2), before and after RYGB, and in eight lean volunteers (BMI = 20.7 ± 0.7 kg/m2). Biopsies of brachioradialis muscle, taken at fasting and at insulin concentrations that induced half-maximal (GDR50) and maximal (GDR100) GDR in each subject, were used to examine the phosphorylation of Akt-Thr308, Akt-Ser473 and PRAS40, in vivo biomarkers for Akt activity. Pre-operatively, insulin-stimulated GDR was lower in the obese than in the lean individuals (P<0.001). Weight loss of 29.9 ± 4 kg after surgery significantly improved GDR50 (P=0.004) but not GDR100 (P=0.3). These subjects still remained significantly more insulin resistant than the lean individuals (P<0.001). Weight loss increased insulin-stimulated skeletal muscle Akt-Thr308 and Akt-Ser473 phosphorylation (P=0.02 and P=0.03 respectively, MANCOVA) and Akt activity towards the substrate PRAS40 (P=0.003, MANCOVA), and these, in contrast to GDR, were fully normalised after the surgery (obese vs lean: P=0.6, P=0.35, P=0.46, respectively). Our data show that although Akt activity substantially improved after surgery, it did not lead to a full restoration of insulin-stimulated glucose disposal. This suggests that a major defect downstream of, or parallel to, Akt signalling remains after significant weight loss.

  19. Implementing online consultations in primary care: a mixed-method evaluation extending normalisation process theory through service co-production.

    Science.gov (United States)

    Farr, Michelle; Banks, Jonathan; Edwards, Hannah B; Northstone, Kate; Bernard, Elly; Salisbury, Chris; Horwood, Jeremy

    2018-03-19

    To examine patient and staff views, experiences and acceptability of a UK primary care online consultation system, and to ask how the system and its implementation may be improved. Mixed-method evaluation of a primary care e-consultation system. Primary care practices in South West England. Qualitative interviews with 23 practice staff in six practices. Patient survey data for 756 e-consultations from 36 practices, with free-text survey comments from 512 patients, were analysed thematically. Anonymised patient records were abstracted for 485 e-consultations from eight practices, including consultation types and outcomes. Descriptive statistics were used to analyse the quantitative data. Analysis of the implementation and usage of the e-consultation system was informed by: (1) normalisation process theory, (2) a framework that illustrates how e-consultations were co-produced and (3) patient and staff touchpoints. We found different expectations between patients and staff on how to use e-consultations 'appropriately'. While some patients used the system to try to save time for themselves and their general practitioners (GPs), some used e-consultations when they could not get a timely face-to-face appointment. Most e-consultations resulted in either follow-on phone (32%) or face-to-face appointments (38%), and GPs felt that this duplicated their workload. Patient satisfaction with the system was high, but a minority were dissatisfied with practice communication about their e-consultation. Where both patients and staff interact with technology, it is in effect 'co-implemented'. How patients used e-consultations impacted on practice staff's experiences and appraisal of the system. Overall, the e-consultation system studied could improve access for some patients, but in its current form it was not perceived by practices as creating sufficient efficiencies to warrant financial investment. We illustrate how this e-consultation system and its implementation can be improved.

  20. Morphology of the pancreas in type 2 diabetes: effect of weight loss with or without normalisation of insulin secretory capacity.

    Science.gov (United States)

    Al-Mrabeh, Ahmad; Hollingsworth, Kieren G; Steven, Sarah; Taylor, Roy

    2016-08-01

    This study was designed to establish whether the low volume and irregular border of the pancreas in type 2 diabetes would be normalised after reversal of diabetes. A total of 29 individuals with type 2 diabetes undertook a very low energy (very low calorie) diet for 8 weeks followed by weight maintenance for 6 months. Methods were established to quantify the pancreas volume and degree of irregularity of the pancreas border. Three-dimensional volume-rendering and fractal dimension (FD) analysis of the MRI-acquired images were employed, as was three-point Dixon imaging to quantify the fat content. There was no change in pancreas volume 6 months after reversal of diabetes compared with baseline (52.0 ± 4.9 cm3 and 51.4 ± 4.5 cm3, respectively; p = 0.69), nor was any volumetric change observed in the non-responders. There was an inverse relationship between the volume and fat content of the pancreas in the total study population (r = -0.50, p = 0.006). Reversal of diabetes was associated with an increase in irregularity of the pancreas borders between baseline and 8 weeks (FD 1.143 ± 0.013 and 1.169 ± 0.006, respectively; p = 0.05), followed by a decrease at 6 months (1.130 ± 0.012, p = 0.006). On the other hand, no changes in FD were seen in the non-reversed group. Restoration of normal insulin secretion did not increase the subnormal pancreas volume over 6 months in the study population. A significant change in irregularity of the pancreas borders occurred after acute weight loss only after reversal of diabetes. Pancreas morphology in type 2 diabetes may be prognostically important, and its relationship to change in beta cell function requires further study.
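Fractal dimension of a boundary is typically estimated by box counting. The sketch below is an illustrative stand-in for the study's FD analysis (the exact MRI segmentation pipeline is not reproduced); a perfectly smooth line should score close to FD = 1, with more irregular borders scoring higher:

```python
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary boundary image
    (square image assumed; sizes are box side lengths in pixels)."""
    counts = []
    n = img.shape[0]
    for s in sizes:
        # Count boxes of side s containing at least one boundary pixel.
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if img[i:i + s, j:j + s].any():
                    c += 1
        counts.append(c)
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check on synthetic data: a straight line has dimension 1.
img = np.zeros((64, 64), dtype=bool)
img[32, :] = True
print(round(box_count_dimension(img), 2))  # -> 1.0
```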

  1. Spectral Bisection with Two Eigenvectors

    Czech Academy of Sciences Publication Activity Database

    Rocha, Israel

    2017-01-01

    Roč. 61, August (2017), s. 1019-1025 ISSN 1571-0653 Institutional support: RVO:67985807 Keywords : graph partitioning * Laplacian matrix * Fiedler vector Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics
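The record above concerns bisection with two eigenvectors; the classical single-eigenvector method it builds on partitions a graph by the signs of the Fiedler vector (the Laplacian eigenvector for the second-smallest eigenvalue). A minimal sketch on a synthetic graph:

```python
import numpy as np

# Adjacency matrix of a small graph: two triangles joined by one edge,
# so the natural bisection is {0, 1, 2} vs {3, 4, 5}.
A = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A

# Fiedler vector: eigenvector of the second-smallest Laplacian eigenvalue.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

# Classical spectral bisection: split vertices by the sign of the entries.
part = fiedler >= 0
print(part)  # one triangle on each side of the cut
```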

  2. Using normalisation process theory to understand barriers and facilitators to implementing mindfulness-based stress reduction for people with multiple sclerosis.

    Science.gov (United States)

    Simpson, Robert; Simpson, Sharon; Wood, Karen; Mercer, Stewart W; Mair, Frances S

    2018-01-01

    Objectives To study barriers and facilitators to implementation of mindfulness-based stress reduction for people with multiple sclerosis. Methods Qualitative interviews were used to explore barriers and facilitators to implementation of mindfulness-based stress reduction, including 33 people with multiple sclerosis, 6 multiple sclerosis clinicians and 2 course instructors. Normalisation process theory provided the underpinning conceptual framework. Data were analysed deductively using normalisation process theory constructs (coherence, cognitive participation, collective action and reflexive monitoring). Results Key barriers included mismatched stakeholder expectations, lack of knowledge about mindfulness-based stress reduction, high levels of comorbidity and disability, and scepticism about embedding mindfulness-based stress reduction in routine multiple sclerosis care. Facilitators to implementation included introducing a pre-course orientation session and adapting mindfulness-based stress reduction to accommodate comorbidity and disability; participants suggested smaller, shorter classes, shortened practices, exclusion of mindful walking and more time with peers. Post-mindfulness-based stress reduction booster sessions may be required, and objective and subjective reports of benefit would increase clinician confidence in mindfulness-based stress reduction. Discussion Multiple sclerosis patients and clinicians know little about mindfulness-based stress reduction. Mismatched expectations are a barrier to participation, as is rigid application of mindfulness-based stress reduction in the context of disability. Course adaptations in response to patient needs would facilitate uptake and utilisation. Rendering access to mindfulness-based stress reduction rapid and flexible could facilitate implementation. Embedded outcome assessment is desirable.

  3. The moral experience of illness and its impact on normalisation: Examples from narratives with Punjabi women living with rheumatoid arthritis in the UK.

    Science.gov (United States)

    Sanderson, Tessa; Calnan, Michael; Kumar, Kanta

    2015-11-01

    The moral component of living with illness has been neglected in analyses of long-term illness experiences. This article attempts to fill this gap by exploring the role of the moral experience of illness in mediating the ability of those living with a long-term condition (LTC) to normalise. This is explored through an empirical study of women of Punjabi origin living with rheumatoid arthritis (RA) in the UK. Sixteen informants were recruited through three hospitals in UK cities and interviews conducted and analysed using a grounded theory approach. The intersection between moral experience and normalisation, within the broader context of ethnic, gender and socioeconomic influences, was evident in the following: disruption of a core lived value (the centrality of family duty), beliefs about illness causation affecting informants' 'moral career', and perceived discrimination in the workplace. The data illustrate the importance of considering an ethnic community's specific values and beliefs when understanding differences in adapting to LTCs and changing identities. © 2015 Foundation for the Sociology of Health & Illness.

  4. Does normalisation improve the diagnostic performance of apparent diffusion coefficient values for prostate cancer assessment? A blinded independent-observer evaluation

    International Nuclear Information System (INIS)

    Rosenkrantz, A.B.; Khalef, V.; Xu, W.; Babb, J.S.; Taneja, S.S.; Doshi, A.M.

    2015-01-01

    Aim: To evaluate the performance of normalised apparent diffusion coefficient (ADC) values for prostate cancer assessment when performed by independent observers blinded to histopathology findings. Materials and methods: Fifty-eight patients undergoing 3 T phased-array coil magnetic resonance imaging (MRI) including diffusion-weighted imaging (DWI; maximal b-value 1000 s/mm²) before prostatectomy were included. Two radiologists independently evaluated the images, unaware of the histopathology findings. Regions of interest (ROIs) were drawn within areas showing visually low ADC within the peripheral zone (PZ) and transition zone (TZ) bilaterally. ROIs were also placed within regions in both lobes not suspicious for tumour, allowing computation of normalised ADC (nADC) ratios between suspicious and non-suspicious regions. The diagnostic performance of ADC and nADC were compared. Results: For PZ tumour detection, ADC achieved significantly higher area under the receiver operating characteristic curve (AUC; p=0.026) and specificity (p=0.021) than nADC for reader 1, and significantly higher AUC (p=0.025) than nADC for reader 2. For TZ tumour detection, nADC achieved significantly higher specificity (p=0.003) and accuracy (p=0.004) than ADC for reader 2. For PZ Gleason score >3+3 tumour detection, ADC achieved significantly higher AUC (p=0.003) and specificity (p=0.005) than nADC for reader 1, and significantly higher AUC (p=0.023) than nADC for reader 2. For TZ Gleason score >3+3 tumour detection, ADC achieved significantly higher specificity (p=0.019) than nADC for reader 1. Conclusion: In contrast to prior studies performing unblinded evaluations, ADC was observed to outperform nADC overall for two independent observers blinded to the histopathology findings. Therefore, although strategies to improve the utility of ADC measurements in prostate cancer assessment merit continued investigation, caution is warranted when applying normalisation to improve diagnostic performance.
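    The nADC described above is a simple ratio between the ADC in a suspicious ROI and the ADC in a non-suspicious reference ROI from the same gland; a hypothetical sketch (values and names invented for illustration):

```python
def normalised_adc(suspicious_adc, reference_adc):
    """nADC: ADC in the suspicious ROI divided by ADC in a
    non-suspicious reference ROI."""
    if reference_adc <= 0:
        raise ValueError("reference ADC must be positive")
    return suspicious_adc / reference_adc

# Hypothetical ADC values in 10^-3 mm^2/s; tumour tissue restricts
# diffusion, so an nADC well below 1 is the suspicious direction.
print(normalised_adc(0.80, 1.60))  # → 0.5
```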

  5. Selection of reference genes for normalisation of real-time RT-PCR in brain-stem death injury in Ovis aries

    Directory of Open Access Journals (Sweden)

    Fraser John F

    2009-07-01

    Full Text Available Abstract Background Heart and lung transplantation is frequently the only therapeutic option for patients with end stage cardio respiratory disease. Organ donation post brain stem death (BSD) is a pre-requisite, yet BSD itself causes such severe damage that many organs offered for donation are unusable, with lung being the organ most affected by BSD. In Australia and New Zealand, less than 50% of lungs offered for donation post BSD are suitable for transplantation, as compared with over 90% of kidneys, resulting in patients dying for lack of suitable lungs. Our group has developed a novel 24 h sheep BSD model to mimic the physiological milieu of the typical human organ donor. Characterisation of the gene expression changes associated with BSD is critical and will assist in determining the aetiology of lung damage post BSD. Real-time PCR is a highly sensitive method involving multiple steps from extraction to processing of RNA, so the choice of housekeeping genes is important in obtaining reliable results. Little information, however, is available on the expression stability of reference genes in the sheep pulmonary artery and lung. We aimed to establish a set of stably expressed reference genes for use as a standard for analysis of gene expression changes in BSD. Results We evaluated the expression stability of 6 candidate normalisation genes (ACTB, GAPDH, HGPRT, PGK1, PPIA and RPLP0) using real time quantitative PCR. There was a wide range of Ct-values within each tissue for pulmonary artery (15–24) and lung (16–25), but the expression pattern for each gene was similar across the two tissues. After geNorm analysis, ACTB and PPIA were shown to be the most stably expressed in the pulmonary artery and ACTB and PGK1 in the lung tissue of BSD sheep. Conclusion Accurate normalisation is critical in obtaining reliable and reproducible results in gene expression studies. This study demonstrates tissue-associated variability in the selection of these
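    The geNorm analysis mentioned above ranks candidate reference genes by a stability measure M, the average standard deviation of pairwise log2 expression ratios against the other candidates; lower M means more stable expression. A simplified sketch with made-up expression values (not the study's data):

```python
import math
import statistics

def genorm_m(expr):
    """geNorm-style stability measure M: for each gene, the mean standard
    deviation of log2 expression ratios against every other candidate.
    Lower M indicates more stable expression."""
    m = {}
    for g, vals in expr.items():
        sds = []
        for h, other in expr.items():
            if h == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(vals, other)]
            sds.append(statistics.stdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m

# Hypothetical relative expression across four samples: ACTB and GAPDH
# vary in perfect proportion, while PPIA fluctuates independently.
expr = {
    "ACTB":  [1.0, 2.0, 4.0, 8.0],
    "GAPDH": [2.0, 4.0, 8.0, 16.0],
    "PPIA":  [1.0, 4.0, 2.0, 16.0],
}
m = genorm_m(expr)
print(min(m, key=m.get))  # the proportional pair scores as most stable
```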

  6. Long-term performance of grid-connected photovoltaic plant - Appendix 2: normalised monthly statistics; Langzeitverhalten von netzgekoppelten Photovoltaikanlagen 2 (LZPV2). Anhang 2: Normierte Monatsstatistiken

    Energy Technology Data Exchange (ETDEWEB)

    Renken, C.; Haeberlin, H.

    2003-07-01

    This is the third part of a four-part final report for the Swiss Federal Office of Energy (SFOE) made by the University of Applied Sciences in Burgdorf, Switzerland. This report presents the findings of a project begun in 1992 that monitored the performance of around 40 photovoltaic (PV) installations in Switzerland. This extensive second appendix to the report describes the eight installations that were monitored in detail, including - amongst others - the demonstration installations on Mont Soleil in the Jura mountains and on the Jungfraujoch in the Alps as well as three test installations using modern thin-film technologies in Burgdorf. The normalised monthly specific performance of these installations was monitored. The report presents the various performance figures in graphical form.

  7. Attention training normalises combat-related post-traumatic stress disorder effects on emotional Stroop performance using lexically matched word lists.

    Science.gov (United States)

    Khanna, Maya M; Badura-Brack, Amy S; McDermott, Timothy J; Shepherd, Alex; Heinrichs-Graham, Elizabeth; Pine, Daniel S; Bar-Haim, Yair; Wilson, Tony W

    2015-08-26

    We examined two groups of combat veterans, one with post-traumatic stress disorder (PTSD) (n = 27) and another without PTSD (n = 16), using an emotional Stroop task (EST) with word lists matched across a series of lexical variables (e.g. length, frequency and neighbourhood size). Participants with PTSD exhibited a strong EST effect (longer colour-naming latencies for combat-relevant words as compared to neutral words). Veterans without PTSD produced no such effect. Participants with PTSD then completed eight sessions of attention training (Attention Control Training or Attention Bias Modification Training) with a dot-probe task utilising threatening and neutral faces. After training, participants, especially those undergoing Attention Control Training, no longer produced longer colour-naming latencies for combat-related words as compared to other words, indicating normalised attention allocation processes after treatment.

  8. The economic costs of natural disasters globally from 1900-2015: historical and normalised floods, storms, earthquakes, volcanoes, bushfires, drought and other disasters

    Science.gov (United States)

    Daniell, James; Wenzel, Friedemann; Schaefer, Andreas

    2016-04-01

    For the first time, a breakdown of natural disaster losses from 1900-2015 based on over 30,000 event economic losses globally is given, based on increased analysis within the CATDAT Damaging Natural Disaster databases. Using country-CPI and GDP deflator adjustments, over 7 trillion USD (2015-adjusted) in losses have occurred; over 40% due to flood/rainfall, 26% due to earthquake, 19% due to storm effects, 12% due to drought, 2% due to wildfire and under 1% due to volcano. Using construction cost indices, higher percentages of flood losses are seen. Depending on how the adjustment of dollars is made to 2015 terms (CPI vs. construction cost indices), between 6.5 and 14.0 trillion USD (2015-adjusted) of natural disaster losses have been seen from 1900-2015 globally. Significant reductions in economic losses have been seen in China and Japan from 1950 onwards. An AAL of around 200 billion USD in the last 16 years has been seen, equating to around 0.25% of Global GDP or around 0.1% of Net Capital Stock per year. Normalised losses have also been calculated to examine the trends in vulnerability through time for economic losses. The global normalisation methodology, using the exposure databases within CATDAT previously applied in papers on the earthquake and volcano databases, is used for this study. When the original event-year losses are adjusted directly by capital stock change, very high losses are observed with respect to floods over time (although flood control structures have improved). This shows clear trends in the improvement of building stock towards natural disasters and a decreasing trend in most perils for most countries.
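    The country-CPI adjustment described above restates a nominal historical loss in 2015 price terms via a ratio of price indices; a hypothetical sketch (the index values are illustrative, not CATDAT figures):

```python
def cpi_adjust(nominal_loss, cpi_event_year, cpi_base_year):
    """Scale a nominal historical loss to base-year (e.g. 2015) price
    terms using the ratio of consumer price indices."""
    return nominal_loss * cpi_base_year / cpi_event_year

# Hypothetical: a 10 million USD nominal loss in a year with CPI 29.6,
# restated at a 2015 CPI of 237.0
print(cpi_adjust(10e6, 29.6, 237.0))  # roughly 80 million USD
```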

  9. The applicability of normalisation process theory to speech and language therapy: a review of qualitative research on a speech and language intervention.

    Science.gov (United States)

    James, Deborah M

    2011-08-12

    The Bercow review found a high level of public dissatisfaction with speech and language services for children. Children with speech, language, and communication needs (SLCN) often have chronic complex conditions that require provision from health, education, and community services. Speech and language therapists are a small group of Allied Health Professionals with a specialist skill-set that equips them to work with children with SLCN. They work within and across the diverse range of public service providers. The aim of this review was to explore the applicability of Normalisation Process Theory (NPT) to the case of speech and language therapy. A review of qualitative research on a successfully embedded speech and language therapy intervention was undertaken to test the applicability of NPT. The review focused on two of the collective action elements of NPT (relational integration and interaction workability) using all previously published qualitative data from both parents and practitioners' perspectives on the intervention. The synthesis of the data based on the Normalisation Process Model (NPM) uncovered strengths in the interpersonal processes between the practitioners and parents, and weaknesses in how the accountability of the intervention is distributed in the health system. The analysis based on the NPM uncovered interpersonal processes between the practitioners and parents that were likely to have given rise to successful implementation of the intervention. In previous qualitative research on this intervention where the Medical Research Council's guidance on developing a design for a complex intervention had been used as a framework, the interpersonal work within the intervention had emerged as a barrier to implementation of the intervention. It is suggested that the design of services for children and families needs to extend beyond the consideration of benefits and barriers to embrace the social processes that appear to afford success in embedding

  10. Identifying Stable Reference Genes for qRT-PCR Normalisation in Gene Expression Studies of Narrow-Leafed Lupin (Lupinus angustifolius L.).

    Directory of Open Access Journals (Sweden)

    Candy M Taylor

    Full Text Available Quantitative Reverse Transcription PCR (qRT-PCR) is currently one of the most popular, high-throughput and sensitive technologies available for quantifying gene expression. Its accurate application depends heavily upon normalisation of gene-of-interest data with reference genes that are uniformly expressed under experimental conditions. The aim of this study was to provide the first validation of reference genes for Lupinus angustifolius (narrow-leafed lupin), a significant grain legume crop, using a selection of seven genes previously trialled as reference genes for the model legume, Medicago truncatula. In a preliminary evaluation, the seven candidate reference genes were assessed on the basis of primer specificity for their respective targeted region, PCR amplification efficiency, and ability to discriminate between cDNA and gDNA. Following this assessment, expression of the three most promising candidates [Ubiquitin C (UBC), Helicase (HEL), and Polypyrimidine tract-binding protein (PTB)] was evaluated using the NormFinder and RefFinder statistical algorithms in two narrow-leafed lupin lines, both with and without vernalisation treatment, and across seven organ types (cotyledons, stem, leaves, shoot apical meristem, flowers, pods and roots) encompassing three developmental stages. UBC was consistently identified as the most stable candidate and has sufficiently uniform expression that it may be used as a sole reference gene under the experimental conditions tested here. However, as organ type and developmental stage were associated with greater variability in relative expression, it is recommended that UBC and HEL be used as a pair to achieve optimal normalisation. These results highlight the importance of rigorously assessing candidate reference genes for each species across a diverse range of organs and developmental stages. With emerging technologies, such as RNAseq, and the completion of valuable transcriptome data sets, it is possible that other

  11. An application of Extended Normalisation Process Theory in a randomised controlled trial of a complex social intervention: Process evaluation of the Strengthening Families Programme (10–14) in Wales, UK

    Directory of Open Access Journals (Sweden)

    Jeremy Segrott

    2017-12-01

    Conclusions: Extended Normalisation Process Theory provided a useful framework for assessing implementation and explaining variation by examining intervention-context interactions. Findings highlight the need for process evaluations to consider both the structural and process components of implementation to explain whether programme activities are delivered as intended and why.

  12. Global Earthquake and Volcanic Eruption Economic losses and costs from 1900-2014: 115 years of the CATDAT database - Trends, Normalisation and Visualisation

    Science.gov (United States)

    Daniell, James; Skapski, Jens-Udo; Vervaeck, Armand; Wenzel, Friedemann; Schaefer, Andreas

    2015-04-01

    Over the past 12 years, an in-depth database has been constructed for socio-economic losses from earthquakes and volcanoes. The effects of earthquakes and volcanic eruptions have been documented in many databases; however, many errors and incorrect details are often encountered. To combat this, the database was formed with socioeconomic checks of GDP, capital stock, population and other elements, as well as providing upper and lower bounds to each available event loss. The definition of economic losses within the CATDAT Damaging Earthquakes Database (Daniell et al., 2011a) as of v6.1 has now been redefined to provide three options of natural disaster loss pricing, including reconstruction cost, replacement cost and actual loss, in order to better define the impact of historical disasters. Similarly for volcanoes as for earthquakes, a reassessment has been undertaken looking at the historical net and gross capital stock and GDP at the time of the event, including the depreciated stock, in order to calculate the actual loss. A normalisation has then been undertaken using updated population, GDP and capital stock. The difference between depreciated and gross capital can be removed from the historical loss estimates, which have all been calculated without taking depreciation of the building stock into account. The culmination of time series from 1900-2014 of net and gross capital stock, GDP and direct economic loss data, together with detailed studies of infrastructure age and existing damage surveys, has allowed the first estimate of this nature. The death tolls in earthquakes from 1900-2014 are presented in various forms, showing around 2.32 million deaths due to earthquakes (with a range of 2.18 to 2.63 million), around 59% of which were due to masonry buildings and 28% to secondary effects. For the volcanic eruption database, around 98,000 deaths, with a range from around 83,000 to 107,000, are seen from 1900-2014. The application of VSL life costing from death and injury

  13. Symmorphosis through dietary regulation: a combinatorial role for proteolysis, autophagy and protein synthesis in normalising muscle metabolism and function of hypertrophic mice after acute starvation.

    Directory of Open Access Journals (Sweden)

    Henry Collins-Hooper

    Full Text Available Animals are imbued with adaptive mechanisms spanning from the tissue/organ to the cellular scale which ensure that processes of homeostasis are preserved in the landscape of size change. However we and others have postulated that the degree of adaptation is limited and that once outside the normal levels of size fluctuations, cells and tissues function in an aberrant manner. In this study we examine the function of muscle in the myostatin null mouse, which is an excellent model for hypertrophy beyond levels of normal growth, and the consequences of acute starvation to restore mass. We show that muscle growth is sustained through protein synthesis driven by Serum/Glucocorticoid Kinase 1 (SGK1) rather than Akt1. Furthermore our metabonomic profiling of hypertrophic muscle shows that carbon from nutrient sources is being channelled for the production of biomass rather than ATP production. However the muscle displays elevated levels of autophagy and decreased levels of muscle tension. We demonstrate the myostatin null muscle is acutely sensitive to changes in diet and activates both the proteolytic and autophagy programmes, shutting down protein synthesis more extensively than is the case for wild-types. Poignantly we show that acute starvation, which is detrimental to wild-type animals, is beneficial in terms of metabolism and muscle function in the myostatin null mice by normalising tension production.

  14. Understanding clinician attitudes towards implementation of guided self-help cognitive behaviour therapy for those who hear distressing voices: using factor analysis to test normalisation process theory.

    Science.gov (United States)

    Hazell, Cassie M; Strauss, Clara; Hayward, Mark; Cavanagh, Kate

    2017-07-24

    The Normalisation Process Theory (NPT) has been used to understand the implementation of physical health care interventions. The current study aims to apply the NPT model to a secondary mental health context, and test the model using exploratory factor analysis. This study will consider the implementation of a brief cognitive behaviour therapy for psychosis (CBTp) intervention. Mental health clinicians were asked to complete a NPT-based questionnaire on the implementation of a brief CBTp intervention. All clinicians had experience of either working with the target client group or were able to deliver psychological therapies. In total, 201 clinicians completed the questionnaire. The results of the exploratory factor analysis found partial support for the NPT model, as three of the NPT factors were extracted: (1) coherence, (2) cognitive participation, and (3) reflexive monitoring. We did not find support for the fourth NPT factor (collective action). All scales showed strong internal consistency. Secondary analysis of these factors showed clinicians to generally support the implementation of the brief CBTp intervention. This study provides strong evidence for the validity of the three NPT factors extracted. Further research is needed to determine whether participants' level of seniority moderates factor extraction, whether this factor structure can be generalised to other healthcare settings, and whether pre-implementation attitudes predict actual implementation outcomes.

  15. Assessment of the efficacy of a novel tailored vitamin K dosing regimen in lowering the International Normalised Ratio in over-anticoagulated patients: a randomised clinical trial.

    Science.gov (United States)

    Kampouraki, Emmanouela; Avery, Peter J; Wynne, Hilary; Biss, Tina; Hanley, John; Talks, Kate; Kamali, Farhad

    2017-09-01

    Current guidelines advocate using fixed doses of oral vitamin K to reverse excessive anticoagulation in warfarinised patients who are either asymptomatic or have minor bleeds. Over-anticoagulated patients present with a wide range of International Normalised Ratio (INR) values and response to fixed doses of vitamin K varies. Consequently a significant proportion of patients remain outside their target INR after vitamin K administration, making them prone to either haemorrhage or thromboembolism. We compared the performance of a novel tailored vitamin K dosing regimen to that of a fixed-dose regimen, with the primary measure being the proportion of over-anticoagulated patients returning to their target INR within 24 h. One hundred and eighty-one patients with an index INR > 6·0 (asymptomatic or with minor bleeding) were randomly allocated to receive oral administration of either a tailored dose (based upon index INR and body surface area) or a fixed dose (1 or 2 mg) of vitamin K. A greater proportion of patients treated with the tailored dose returned to within target INR range compared to the fixed-dose regimen (68·9% vs. 52·8%; P = 0·026), whilst a smaller proportion of patients remained above target INR range (12·2% vs. 34·0%). Tailored vitamin K dosing is more accurate than a fixed-dose regimen in lowering INR to within target range in excessively anticoagulated patients. © 2017 John Wiley & Sons Ltd.

  16. Clinical, immunological and treatment-related factors associated with normalised CD4+/CD8+ T-cell ratio: effect of naïve and memory T-cell subsets.

    LENUS (Irish Health Repository)

    Tinago, Willard

    2014-01-01

    Although effective antiretroviral therapy (ART) increases CD4+ T-cell count, responses to ART vary considerably and only a minority of patients normalise their CD4+/CD8+ ratio. Although retention of naïve CD4+ T-cells is thought to predict better immune responses, relationships between CD4+ and CD8+ T-cell subsets and the CD4+/CD8+ ratio have not been well described.

  17. A formative evaluation of the implementation of a medication safety data collection tool in English healthcare settings: A qualitative interview study using normalisation process theory.

    Science.gov (United States)

    Rostami, Paryaneh; Ashcroft, Darren M; Tully, Mary P

    2018-01-01

    Reducing medication-related harm is a global priority; however, impetus for improvement is impeded as routine medication safety data are seldom available. Therefore, the Medication Safety Thermometer was developed within England's National Health Service. This study aimed to explore the implementation of the tool into routine practice from users' perspectives. Fifteen semi-structured interviews were conducted with purposely sampled National Health Service staff from primary and secondary care settings. Interview data were analysed using an initial thematic analysis, and subsequent analysis using Normalisation Process Theory. Secondary care staff understood that the Medication Safety Thermometer's purpose was to measure medication safety and improvement. However, other uses were reported, such as pinpointing poor practice. Confusion about its purpose existed in primary care, despite further training, suggesting unsuitability of the tool. Decreased engagement was displayed by staff less involved with medication use, who displayed less ownership. Nonetheless, these advocates often lacked support from management and frontline levels, leading to an overall lack of engagement. Many participants reported efforts to drive scale-up of the use of the tool, for example, by securing funding, despite uncertainty around how to use data. Successful improvement was often at ward-level and went unrecognised within the wider organisation. There was mixed feedback regarding the value of the tool, often due to a perceived lack of "capacity". However, participants demonstrated interest in learning how to use their data and unexpected applications of data were reported. Routine medication safety data collection is complex, but achievable and facilitates improvements. However, collected data must be analysed, understood and used for further work to achieve improvement, which often does not happen. The national roll-out of the tool has accelerated shared learning; however, a number of

  18. Repeated lysergic acid diethylamide in an animal model of depression: Normalisation of learning behaviour and hippocampal serotonin 5-HT2 signalling.

    Science.gov (United States)

    Buchborn, Tobias; Schröder, Helmut; Höllt, Volker; Grecksch, Gisela

    2014-06-01

    A re-balance of postsynaptic serotonin (5-HT) receptor signalling, with an increase in 5-HT1A and a decrease in 5-HT2A signalling, is a final common pathway multiple antidepressants share. Given that the 5-HT1A/2A agonist lysergic acid diethylamide (LSD), when repeatedly applied, selectively downregulates 5-HT2A, but not 5-HT1A receptors, one might expect LSD to similarly re-balance the postsynaptic 5-HT signalling. Challenging this idea, we use an animal model of depression specifically responding to repeated antidepressant treatment (olfactory bulbectomy), and test the antidepressant-like properties of repeated LSD treatment (0.13 mg/kg/d, 11 d). In line with former findings, we observe that bulbectomised rats show marked deficits in active avoidance learning. These deficits, similarly as we earlier noted with imipramine, are largely reversed by repeated LSD administration. Additionally, bulbectomised rats exhibit distinct anomalies of monoamine receptor signalling in hippocampus and/or frontal cortex; from these, only the hippocampal decrease in 5-HT2 related [35S]-GTP-γ-S binding is normalised by LSD. Importantly, the sham-operated rats do not profit from LSD, and exhibit reduced hippocampal 5-HT2 signalling. As behavioural deficits after bulbectomy respond to agents classified as antidepressants only, we conclude that the effect of LSD in this model can be considered antidepressant-like, and discuss it in terms of a re-balance of hippocampal 5-HT2/5-HT1A signalling. © The Author(s) 2014.

  19. A formative evaluation of the implementation of a medication safety data collection tool in English healthcare settings: A qualitative interview study using normalisation process theory.

    Directory of Open Access Journals (Sweden)

    Paryaneh Rostami

    Full Text Available Reducing medication-related harm is a global priority; however, impetus for improvement is impeded as routine medication safety data are seldom available. Therefore, the Medication Safety Thermometer was developed within England's National Health Service. This study aimed to explore the implementation of the tool into routine practice from users' perspectives. Fifteen semi-structured interviews were conducted with purposely sampled National Health Service staff from primary and secondary care settings. Interview data were analysed using an initial thematic analysis, and subsequent analysis using Normalisation Process Theory. Secondary care staff understood that the Medication Safety Thermometer's purpose was to measure medication safety and improvement. However, other uses were reported, such as pinpointing poor practice. Confusion about its purpose existed in primary care, despite further training, suggesting unsuitability of the tool. Decreased engagement was displayed by staff less involved with medication use, who displayed less ownership. Nonetheless, these advocates often lacked support from management and frontline levels, leading to an overall lack of engagement. Many participants reported efforts to drive scale-up of the use of the tool, for example, by securing funding, despite uncertainty around how to use data. Successful improvement was often at ward-level and went unrecognised within the wider organisation. There was mixed feedback regarding the value of the tool, often due to a perceived lack of "capacity". However, participants demonstrated interest in learning how to use their data and unexpected applications of data were reported. Routine medication safety data collection is complex, but achievable and facilitates improvements. However, collected data must be analysed, understood and used for further work to achieve improvement, which often does not happen. The national roll-out of the tool has accelerated shared learning; however

  20. Four weeks of near-normalisation of blood glucose improves the insulin response to glucagon-like peptide-1 and glucose-dependent insulinotropic polypeptide in patients with type 2 diabetes

    DEFF Research Database (Denmark)

    Højberg, P V; Vilsbøll, T; Rabøl, R

    2008-01-01

    … of near-normalisation of the blood glucose level could improve insulin responses to GIP and GLP-1 in patients with type 2 diabetes. METHODS: Eight obese patients with type 2 diabetes with poor glycaemic control (HbA(1c) 8.6 +/- 1.3%) were investigated before and after 4 weeks of near-normalisation of blood glucose (mean blood glucose 7.4 +/- 1.2 mmol/l) using insulin treatment. Before and after insulin treatment the participants underwent three hyperglycaemic clamps (15 mmol/l) with infusion of GLP-1, GIP or saline. Insulin responses were evaluated as the incremental area under the plasma C-peptide curve. RESULTS: Before and after near-normalisation of blood glucose, the C-peptide responses did not differ during the early phase of insulin secretion (0-10 min). The late-phase C-peptide response (10-120 min) increased during GIP infusion from 33.0 +/- 8.5 to 103.9 +/- 24.2 (nmol/l) x (110 min)(-1) …

  1. Symmetric normalisation for intuitionistic logic

    DEFF Research Database (Denmark)

    Guenot, Nicolas; Straßburger, Lutz

    2014-01-01

    We present two proof systems for implication-only intuitionistic logic in the calculus of structures. The first is a direct adaptation of the standard sequent calculus to the deep inference setting, and we describe a procedure for cut elimination, similar to the one from the sequent calculus, but using a non-local rewriting. The second system is the symmetric completion of the first, as normally given in deep inference for logics with a De Morgan duality: all inference rules have duals, as cut is dual to the identity axiom. We prove a generalisation of cut elimination, that we call symmetric …

  2. Forced normalisation precipitated by lamotrigine.

    Science.gov (United States)

    Clemens, Béla

    2005-10-01

    To report two patients with lamotrigine-induced forced normalization (FN). Evaluation of the patient files, EEG, and video-EEG records, with special reference to the parallel clinical and EEG changes before, during, and after FN. This is the first documented report of lamotrigine-induced FN. The two epileptic patients (one of them a 10-year-old girl) were successfully treated with lamotrigine (LTG). Their seizures ceased and interictal epileptiform events disappeared from the EEG record. Simultaneously, the patients displayed de novo occurrence of psychopathological manifestations and disturbed behaviour. Reduction of the daily dose of LTG led to disappearance of the psychopathological symptoms and reappearance of the spikes, but not the seizures. Lamotrigine may precipitate FN in adults and children. Analysis of the cases showed that lamotrigine-induced FN is a dose-dependent phenomenon and can be treated by reduction of the daily dose of the drug.

  3. Supporting the use of theory in cross-country health services research: a participatory qualitative approach using Normalisation Process Theory as an example.

    Science.gov (United States)

    O'Donnell, Catherine A; Mair, Frances S; Dowrick, Christopher; Brún, Mary O'Reilly-de; Brún, Tomas de; Burns, Nicola; Lionis, Christos; Saridaki, Aristoula; Papadakaki, Maria; Muijsenbergh, Maria van den; Weel-Baumgarten, Evelyn van; Gravenhorst, Katja; Cooper, Lucy; Princz, Christine; Teunissen, Erik; Mareeuw, Francine van den Driessen; Vlahadi, Maria; Spiegel, Wolfgang; MacFarlane, Anne

    2017-08-21

    To describe and reflect on the process of designing and delivering a training programme supporting the use of theory, in this case Normalisation Process Theory (NPT), in a multisite cross-country health services research study. Participatory research approach using qualitative methods. Six European primary care settings involving research teams from Austria, England, Greece, Ireland, The Netherlands and Scotland. RESTORE research team consisting of 8 project applicants, all senior primary care academics, and 10 researchers. Professional backgrounds included general practitioners/family doctors, social/cultural anthropologists, sociologists and health services/primary care researchers. Views of all research team members (n=18) were assessed using qualitative evaluation methods, analysed qualitatively by the trainers after each session. Most of the team had no experience of using NPT and many had not applied theory to prospective, qualitative research projects. Early training proved didactic and overloaded participants with information. Drawing on RESTORE's methodological approach of Participatory Learning and Action, workshops using role play, experiential interactive exercises and light-hearted examples not directly related to the study subject matter were developed. Evaluation showed the study team quickly grew in knowledge and confidence in applying theory to fieldwork. Recommendations applicable to other studies include: accepting that theory application is not a linear process, that time is needed to address researcher concerns with the process, and that experiential, interactive learning is a key device in building conceptual and practical knowledge. An unanticipated benefit was the smooth transition to cross-country qualitative coding of study data. A structured programme of training enhanced and supported the prospective application of a theory, NPT, to our work but raised challenges. These were not unique to NPT but could arise with the application of any …

  4. Drawing Space: Mathematicians' Kinetic Conceptions of Eigenvectors

    Science.gov (United States)

    Sinclair, Nathalie; Gol Tabaghi, Shiva

    2010-01-01

    This paper explores how mathematicians build meaning through communicative activity involving talk, gesture and diagram. In the course of describing mathematical concepts, mathematicians use these semiotic resources in ways that blur the distinction between the mathematical and physical world. We shall argue that mathematical meaning of…

  5. An application of Extended Normalisation Process Theory in a randomised controlled trial of a complex social intervention: Process evaluation of the Strengthening Families Programme (10-14) in Wales, UK.

    Science.gov (United States)

    Segrott, Jeremy; Murphy, Simon; Rothwell, Heather; Scourfield, Jonathan; Foxcroft, David; Gillespie, David; Holliday, Jo; Hood, Kerenza; Hurlow, Claire; Morgan-Trimmer, Sarah; Phillips, Ceri; Reed, Hayley; Roberts, Zoe; Moore, Laurence

    2017-12-01

    Process evaluations generate important data on the extent to which interventions are delivered as intended. However, the tendency to focus only on assessment of pre-specified structural aspects of fidelity has been criticised for paying insufficient attention to implementation processes and how intervention-context interactions influence programme delivery. This paper reports findings from a process evaluation nested within a randomised controlled trial of the Strengthening Families Programme 10-14 (SFP 10-14) in Wales, UK. It uses Extended Normalisation Process Theory to theorise how interaction between SFP 10-14 and local delivery systems - particularly practitioner commitment/capability and organisational capacity - influenced delivery of intended programme activities: fidelity (adherence to SFP 10-14 content and implementation requirements); dose delivered; dose received (participant engagement); participant recruitment and reach (intervention attendance). A mixed methods design was utilised. Fidelity assessment sheets (completed by practitioners), structured observation by researchers, and routine data were used to assess: adherence to programme content; staffing numbers and consistency; recruitment/retention; and group size and composition. Interviews with practitioners explored implementation processes and context. Adherence to programme content was high - with some variation, linked to practitioner commitment to, and understanding of, the intervention's content and mechanisms. Variation in adherence rates was associated with the extent to which multi-agency delivery team planning meetings were held. Recruitment challenges meant that targets for group size/composition were not always met, but did not affect adherence levels or family engagement. Targets for staffing numbers and consistency were achieved, though capacity within multi-agency networks reduced over time. Extended Normalisation Process Theory provided a useful framework for assessing

  6. The dynamics of the oesophageal squamous epithelium 'normalisation' process in patients with gastro-oesophageal reflux disease treated with long-term acid suppression or anti-reflux surgery.

    Science.gov (United States)

    Mastracci, L; Fiocca, R; Engström, C; Attwood, S; Ell, C; Galmiche, J P; Hatlebakk, J G; Långström, G; Eklund, S; Lind, T; Lundell, L

    2017-05-01

    Proton pump inhibitors and laparoscopic anti-reflux surgery (LARS) offer long-term symptom control to patients with gastro-oesophageal reflux disease (GERD). To evaluate the process of 'normalisation' of the squamous epithelium morphology of the distal oesophagus on these therapies. In the LOTUS trial, 554 patients with chronic GERD were randomised to receive either esomeprazole (20-40 mg daily) or LARS. After 5 years, 372 patients remained in the study (esomeprazole, 192; LARS, 180). Biopsies were taken at the Z-line and 2 cm above, at baseline, 1, 3 and 5 years. A severity score was calculated based on: papillae elongation, basal cell hyperplasia, intercellular space dilatations and eosinophilic infiltration. The epithelial proliferative activity was assessed by Ki-67 immunohistochemistry. A gradual improvement in all variables over 5 years was noted in both groups, at both the Z-line and 2 cm above. The severity score decreased from baseline at each subsequent time point in both groups (P …) … refluxate seems to play the predominant role in restoring tissue morphology. © 2017 John Wiley & Sons Ltd.

  7. Effect of food matrix and thermal processing on the performance of a normalised quantitative real-time PCR approach for lupine (Lupinus albus) detection as a potential allergenic food.

    Science.gov (United States)

    Villa, Caterina; Costa, Joana; Gondar, Cristina; Oliveira, M Beatriz P P; Mafra, Isabel

    2018-10-01

    Lupine is widely used as an ingredient in diverse food products, but it is also a source of allergens. This work aimed to propose a method to detect/quantify lupine as an allergen in processed foods based on a normalised real-time PCR assay targeting the Lup a 4 allergen-encoding gene of Lupinus albus. Sensitivities down to 0.0005%, 0.01% and 0.05% (w/w) of lupine in rice flour, wheat flour and bread, respectively, and 1 pg of L. albus DNA were obtained, with adequate real-time PCR performance parameters using the ΔCt method. Both the food matrix and thermal processing negatively affected the quantitative performance of the assay. The method was successfully validated with blind samples and applied to processed foods. Lupine content was estimated at between 4.12% and 22.9% in foods, with some results suggesting the common practice of precautionary labelling. In this work, useful and effective tools were proposed for the detection/quantification of lupine in food products. Copyright © 2018 Elsevier Ltd. All rights reserved.
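
    The ΔCt approach named in this record normalises the allergen-specific signal to a reference signal before converting it to an amount. The sketch below is illustrative only, not the paper's validated assay: the Ct values, the reference signal and the calibration levels are hypothetical, and ~100% amplification efficiency is assumed.

```python
import math

def delta_ct_quantity(ct_target: float, ct_reference: float) -> float:
    """Relative template amount from a normalised real-time PCR run:
    dCt = Ct(target) - Ct(reference); assuming ~100% amplification
    efficiency, the relative quantity is 2**(-dCt)."""
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical standard curve: % lupine (w/w) in wheat flour versus the
# relative quantity measured for each calibration sample.
standards = [(10.0, delta_ct_quantity(24.0, 22.0)),
             (1.0, delta_ct_quantity(27.3, 22.0)),
             (0.1, delta_ct_quantity(30.6, 22.0))]

# Ordinary least-squares fit of log10(% lupine) on log10(quantity).
xs = [math.log10(q) for _, q in standards]
ys = [math.log10(p) for p, _ in standards]
n = len(xs)
sx, sy = sum(xs), sum(ys)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
        (n * sum(x * x for x in xs) - sx ** 2)
intercept = (sy - slope * sx) / n

def estimate_percent(ct_target: float, ct_reference: float) -> float:
    """Interpolate an unknown sample on the hypothetical standard curve."""
    q = delta_ct_quantity(ct_target, ct_reference)
    return 10.0 ** (intercept + slope * math.log10(q))
```

    In a real assay the amplification efficiency is estimated from the slope of the standard curve rather than assumed to be 100%.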

  8. Comparison of the CoaguChek XS handheld coagulation analyzer and conventional laboratory methods measuring international normalised ratio (INR) values during the time to therapeutic range after mechanical valve surgery.

    Science.gov (United States)

    Bardakci, Hasmet; Altıntaş, Garip; Çiçek, Omer Faruk; Kervan, Umit; Yilmaz, Sevinc; Kaplan, Sadi; Birincioglu, Cemal Levent

    2013-05-01

    To compare the international normalised ratio (INR) value of patients evaluated using the CoaguChek XS versus conventional laboratory methods, in the period after open-heart surgery for mechanical valve replacement until a therapeutic range is achieved using vitamin K antagonists (VKA) together with low molecular weight heparin (LMWH). One hundred and five patients undergoing open-heart surgery for mechanical valve replacement were enrolled. Blood samples were collected from patients before surgery, and on the second and fifth postoperative days, simultaneously for both the point of care device and conventional laboratory techniques. Patients were administered VKA together with LMWH at therapeutic doses (enoxaparin 100 IU/kg twice daily) subcutaneously, until an effective range was achieved on approximately the fifth day after surgery. The mean INR values using the CoaguChek XS preoperatively and on the second and fifth days postoperatively were 1.20 (SD ± 0.09), 1.82 (SD ± 0.45), and 2.55 (SD ± 0.55), respectively. Corresponding results obtained using conventional laboratory techniques were 1.18 (SD ± 0.1), 1.81 (SD ± 0.43), and 2.51 (SD ± 0.58). The correlation coefficient was r = 0.77 preoperatively, r = 0.981 on postoperative day 2, and r = 0.983 on postoperative day 5. Results using the CoaguChek XS Handheld Coagulation Analyzer correlated strongly with conventional laboratory methods, in the bridging period between open-heart surgery for mechanical valve replacement and the achievement of a therapeutic range on warfarin and LMWH. © 2013 Wiley Periodicals, Inc.

  9. Factors associated with failure to correct the international normalised ratio following fresh frozen plasma administration among patients treated for warfarin-related major bleeding. An analysis of electronic health records.

    Science.gov (United States)

    Menzin, J; White, L A; Friedman, M; Nichols, C; Menzin, J; Hoesche, J; Bergman, G E; Jones, C

    2012-04-01

    This study assessed the frequency of, and factors associated with, failure to correct the international normalised ratio (INR) in patients administered fresh frozen plasma (FFP) for warfarin-related major bleeding. This retrospective database analysis used electronic health records from an integrated health system. Patients who received FFP between 01/01/2004 and 01/31/2010, and who met the following criteria were selected: major haemorrhage diagnosis from the day before to the day after initial FFP administration; INR ≥2 on the day before or the day of FFP and another INR result available; warfarin prescription within 90 days. INR correction (defined as INR ≤1.3) was evaluated at the last available test up to one day following FFP. A total of 414 patients met the selection criteria (mean age 75 years, 53% male, mean Charlson score 2.5). Patients presented with gastrointestinal bleeding (58%), intracranial haemorrhage (38%) and other bleed types (4%). The INR of 67% of patients remained uncorrected at the last available test up to one day following receipt of FFP. In logistic regression analysis, the INRs of patients who were older, of those with a Charlson score of 4 or greater, and of those with non-ICH bleeds (odds ratio vs. intracranial bleeding 0.48; 95% confidence interval 0.31-0.76) were more likely to remain uncorrected within one day following FFP administration. Under an alternative definition of correction (INR ≤1.5), 39% of patients' INRs remained uncorrected. For a substantial proportion of patients, the INR remains inadequately corrected or uncorrected following FFP administration, with estimates varying depending on the INR threshold used.

  10. Maternal supplementation with conjugated linoleic acid in the setting of diet-induced obesity normalises the inflammatory phenotype in mothers and reverses metabolic dysfunction and impaired insulin sensitivity in offspring.

    Science.gov (United States)

    Segovia, Stephanie A; Vickers, Mark H; Zhang, Xiaoyuan D; Gray, Clint; Reynolds, Clare M

    2015-12-01

    Maternal consumption of a high-fat diet significantly impacts the fetal environment and predisposes offspring to obesity and metabolic dysfunction during adulthood. We examined the effects of a high-fat diet during pregnancy and lactation on metabolic and inflammatory profiles and whether maternal supplementation with the anti-inflammatory lipid conjugated linoleic acid (CLA) could have beneficial effects on mothers and offspring. Sprague-Dawley rats were fed a control (CD; 10% kcal from fat), CLA (CLA; 10% kcal from fat, 1% total fat as CLA), high-fat (HF; 45% kcal from fat) or high-fat with CLA (HFCLA; 45% kcal from fat, 1% total fat as CLA) diet ad libitum 10 days prior to and throughout gestation and lactation. Dams and offspring were culled at either late gestation (fetal day 20, F20) or early post-weaning (postnatal day 24, P24). CLA, HF and HFCLA dams were heavier than CD dams throughout gestation. Plasma concentrations of the proinflammatory cytokines interleukin-1β and tumour necrosis factor-α were elevated in HF dams, with restoration in HFCLA dams. Male and female fetuses from HF dams were smaller at F20 but displayed catch-up growth and impaired insulin sensitivity at P24, which was reversed in HFCLA offspring. HFCLA dams at P24 were protected from impaired insulin sensitivity as compared to HF dams. Maternal CLA supplementation normalised inflammation associated with consumption of a high-fat diet and reversed the associated programming of metabolic dysfunction in offspring. This demonstrates that there are critical windows of developmental plasticity in which the effects of an adverse early-life environment can be reversed by maternal dietary interventions. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. An additional bolus of rapid-acting insulin to normalise postprandial cardiovascular risk factors following a high-carbohydrate high-fat meal in patients with type 1 diabetes: A randomised controlled trial.

    Science.gov (United States)

    Campbell, Matthew D; Walker, Mark; Ajjan, Ramzi A; Birch, Karen M; Gonzalez, Javier T; West, Daniel J

    2017-07-01

    To evaluate an additional rapid-acting insulin bolus on postprandial lipaemia, inflammation and pro-coagulation following high-carbohydrate high-fat feeding in people with type 1 diabetes. A total of 10 males with type 1 diabetes [HbA1c 52.5 ± 5.9 mmol/mol (7.0 ± 0.5%)] underwent three conditions: (1) a low-fat (LF) meal with normal bolus insulin, (2) a high-fat (HF) meal with normal bolus insulin, and (3) a high-fat meal with normal bolus insulin plus an additional 30% insulin bolus administered 3 h post-meal (HFA). Meals had identical carbohydrate and protein content, with the bolus insulin dose determined by carbohydrate counting. Blood was sampled periodically for 6 h post-meal and analysed for triglyceride, non-esterified fatty acids, apolipoprotein B48, glucagon, tumour necrosis factor alpha, fibrinogen, human tissue factor activity and plasminogen activator inhibitor-1. Continuous glucose monitoring captured interstitial glucose responses. Triglyceride concentrations following LF remained similar to baseline, whereas triglyceride levels following HF were significantly greater throughout the 6-h observation period. The additional insulin bolus (HFA) normalised triglyceride to levels similar to the low-fat meal 3-6 h following the meal. HF was associated with late postprandial elevations in tumour necrosis factor alpha, whereas LF and HFA were not. Fibrinogen, plasminogen activator inhibitor-1 and tissue factor pathway levels were similar between conditions. An additional bolus of insulin 3 h following a high-carbohydrate high-fat meal prevents late rises in postprandial triglycerides and tumour necrosis factor alpha, thus improving the cardiovascular risk profile.

  12. Alternative psychosis (forced normalisation) in epilepsy

    African Journals Online (AJOL)


  13. Alternative psychosis (forced normalisation in epilepsy

    Directory of Open Access Journals (Sweden)

    Vongani Titi Raymond Ntsanwisi

    2011-06-01

    Full Text Available Forced normalization is a paradoxical relationship between seizure activity and behavioural problems. A 20-year-old male with recurrent refractory tonic-clonic epilepsy experienced forced normalization while on medication with multiple anti-epileptic drugs (AEDs): valproate sodium, carbamazepine and topiramate. A reduction in the seizure burden correlated with sudden behavioural changes manifesting as aggressive outbursts and violence. The present case may help clarify the mechanism of forced normalization while providing some helpful hints regarding the diagnosis and treatment of symptoms observed in recurrent refractory seizures.

  14. Efficacy and safety of dabigatran compared with warfarin at different levels of international normalised ratio control for stroke prevention in atrial fibrillation: an analysis of the RE-LY trial.

    Science.gov (United States)

    Wallentin, Lars; Yusuf, Salim; Ezekowitz, Michael D; Alings, Marco; Flather, Marcus; Franzosi, Maria Grazia; Pais, Prem; Dans, Antonio; Eikelboom, John; Oldgren, Jonas; Pogue, Janice; Reilly, Paul A; Yang, Sean; Connolly, Stuart J

    2010-09-18

    Effectiveness and safety of warfarin is associated with the time in therapeutic range (TTR) with an international normalised ratio (INR) of 2·0-3·0. In the Randomised Evaluation of Long-term Anticoagulation Therapy (RE-LY) trial, dabigatran versus warfarin reduced both stroke and haemorrhage. We aimed to investigate the primary and secondary outcomes of the RE-LY trial in relation to each centre's mean TTR (cTTR) in the warfarin population. In the RE-LY trial, 18 113 patients at 951 sites were randomly assigned to 110 mg or 150 mg dabigatran twice daily versus warfarin dose adjusted to INR 2·0-3·0. Median follow-up was 2·0 years. For 18 024 patients at 906 sites, the cTTR was estimated by averaging TTR for individual warfarin-treated patients calculated by the Rosendaal method. We compared the outcomes of RE-LY across the three treatment groups within four groups defined by the quartiles of cTTR. RE-LY is registered with ClinicalTrials.gov, number NCT00262600. The quartiles of cTTR for patients in the warfarin group were: less than 57·1%, 57·1-65·5%, 65·5-72·6%, and greater than 72·6%. There were no significant interactions between cTTR and prevention of stroke and systemic embolism with either 110 mg dabigatran (interaction p=0·89) or 150 mg dabigatran (interaction p=0·20) versus warfarin. Neither were any significant interactions recorded with cTTR with regards to intracranial bleeding with 110 mg dabigatran (interaction p=0·71) or 150 mg dabigatran (interaction p=0·89) versus warfarin. There was a significant interaction between cTTR and major bleeding when comparing 150 mg dabigatran with warfarin (interaction p=0·03), with less bleeding events at lower cTTR but similar events at higher cTTR, whereas rates of major bleeding were lower with 110 mg dabigatran than with warfarin irrespective of cTTR. There were significant interactions between cTTR and effects of both 110 mg and 150 mg dabigatran versus warfarin on the composite of all
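
    The centre-level TTR values above are computed by the Rosendaal method, which linearly interpolates the INR between consecutive measurements and counts the fraction of person-time spent inside the 2.0-3.0 window. A minimal sketch of that calculation (the INR series below is hypothetical, not trial data):

```python
def _frac_in_range(i0: float, i1: float, low: float, high: float) -> float:
    """Fraction of a unit interval on which the linear interpolation
    from i0 to i1 lies within [low, high]."""
    if i0 == i1:
        return 1.0 if low <= i0 <= high else 0.0
    # Solve i0 + (i1 - i0) * t = bound for each bound, clip to [0, 1].
    t_lo = (low - i0) / (i1 - i0)
    t_hi = (high - i0) / (i1 - i0)
    a, b = sorted((t_lo, t_hi))
    return max(0.0, min(1.0, b) - max(0.0, a))

def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Time in therapeutic range (%) by the Rosendaal method:
    linearly interpolate the INR between consecutive measurements and
    accumulate the person-time spent inside [low, high]."""
    total = in_range = 0.0
    for d0, d1, i0, i1 in zip(days, days[1:], inrs, inrs[1:]):
        span = d1 - d0
        total += span
        in_range += span * _frac_in_range(i0, i1, low, high)
    return 100.0 * in_range / total

# Hypothetical series: days since warfarin start and measured INR.
print(rosendaal_ttr([0, 7, 14, 28], [1.5, 2.5, 3.5, 2.5]))  # prints 50.0
```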

  15. Use of eigenvectors in the solution of the flutter equation

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    1993-07-01


  16. Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts

    Science.gov (United States)

    Wang, M.; Kamarianakis, Y.; Georgescu, M.

    2017-12-01

    A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts of large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
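
    The record does not spell out the algorithm, but the standard building block of eigenvector spatial filtering is the eigendecomposition of the doubly-centred spatial weights matrix: its eigenvectors serve as synthetic regressors that absorb spatial autocorrelation. A small NumPy sketch under that assumption (the 2x2-grid adjacency matrix is a toy example, not the study's data):

```python
import numpy as np

def moran_eigenvectors(W: np.ndarray):
    """Eigenvectors of the doubly-centred spatial weights matrix
    (I - 11'/n) W (I - 11'/n), the basis used in Moran eigenvector
    spatial filtering. Eigenvalues are proportional to each vector's
    Moran's I, so large positive eigenvalues correspond to strong
    positive spatial autocorrelation patterns."""
    n = W.shape[0]
    Ws = (W + W.T) / 2.0                 # symmetrise the weights
    M = np.eye(n) - np.ones((n, n)) / n  # centring projector
    C = M @ Ws @ M
    vals, vecs = np.linalg.eigh(C)       # eigh returns ascending order
    order = np.argsort(vals)[::-1]       # strongest autocorrelation first
    return vals[order], vecs[:, order]

# Toy rook-adjacency matrix for a 2x2 grid of cells.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
vals, vecs = moran_eigenvectors(W)
# The leading eigenvectors can then enter a regression as synthetic
# covariates to soak up spatial autocorrelation in the residuals.
```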

  17. Semi-supervised Eigenvectors for Locally-biased Learning

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mahoney, Michael W.

    2012-01-01

    In many applications, one has side information, e.g., labels that are provided in a semi-supervised manner, about a specific target region of a large data set, and one wants to perform machine learning and data analysis tasks "nearby" that pre-specified target region. Locally-biased problems of t...

  18. A comparison of Normalised Difference Snow Index (NDSI) and Normalised Difference Principal Component Snow Index

    African Journals Online (AJOL)

    Phila Sibandze


  19. Normalised radionuclide measures of left ventricular diastolic function

    International Nuclear Information System (INIS)

    Lee, K.J.; Southee, A.E.; Bautovich, G.J.; Freedman, B.; McLaughlin, A.F.; Rossleigh, M.A.; Hutton, B.F.; Morris, J.G.; Royal Prince Alfred Hospital, Sydney

    1989-01-01

    Abnormal left ventricular diastolic function is being increasingly recognised in patients with clinical heart failure and normal systolic function. A simple routine radionuclide measure of diastolic function would therefore be useful. To establish normal ranges, the relationship of the peak diastolic filling rate (normalised for either end-diastolic volume, stroke volume, or peak systolic emptying rate) to heart rate, age, and left ventricular ejection fraction was studied in 64 subjects with normal cardiovascular systems using routine gated heart pool studies. The peak filling rate when normalised to end-diastolic volume correlated significantly with heart rate, age and left ventricular ejection fraction, whereas normalisation to stroke volume correlated significantly with heart rate and age but not with left ventricular ejection fraction. Peak filling rate normalised for peak systolic emptying rate correlated with age only. Multiple regression equations were determined for each of the normalised peak filling rates in order to establish normal ranges for each parameter. When using peak filling rate normalised for end-diastolic volume or stroke volume, appropriate allowance must be made for heart rate, age and ejection fraction. Peak filling rate normalised to peak ejection rate is a heart-rate-independent parameter which allows the performance of the patient's ventricle in diastole to be compared with its systolic function. It may be used in patients with normal systolic function to serially follow diastolic function, or, if age-corrected, to screen for diastolic dysfunction. (orig.)

  20. Elevated international normalised ratios correlate with severity of ...

    African Journals Online (AJOL)


  1. A comparison of Normalised Difference Snow Index (NDSI) and ...

    African Journals Online (AJOL)

    As an alternative, thematic cover types based on remotely sensed datasets are becoming popular. In this study we hypothesise that reduced dimensionality using Principal Components Analysis (PCA) in concert with the Normalised Difference Snow Index (NDSI) is valuable for improving the accuracy of snow cover maps.

  2. Using Normalised Sections for the Design of all optical Networks

    DEFF Research Database (Denmark)

    Caspar, C.; Freund, Ronald; Hanik, Norbert

    2000-01-01

    A novel concept for transparent link design is presented, and evaluated numerically and experimentally. 10 Gbit/s single channel transmission over more than 4000 km of Standard Single Mode Fibre is demonstrated. At reduced transmission distances, the systems show a high robustness against variati...

  3. From Being Non-Judgemental to Deconstructing Normalising Judgement

    Science.gov (United States)

    Winslade, John M.

    2013-01-01

    Beginning with Carl Rogers' exhortation for counsellors to be non-judgemental of their clients, this article explores the rationale for withholding judgement in therapy, including diagnostic judgement. It traces Rogers' incipient sociopolitical analysis as a foundation for this ethic and argues that Michel Foucault provides a stronger…

  4. Spatially varying coefficient models in real estate: Eigenvector spatial filtering and alternative approaches

    NARCIS (Netherlands)

    Helbich, M; Griffith, D

    2016-01-01

    Real estate policies in urban areas require the recognition of spatial heterogeneity in housing prices to account for local settings. In response to the growing number of spatially varying coefficient models in housing applications, this study evaluated four models in terms of their spatial patterns

  5. Using CUDA Technology for Defining the Stiffness Matrix in the Subspace of Eigenvectors

    Directory of Open Access Journals (Sweden)

    Yu. V. Berchun

    2015-01-01

    Full Text Available The aim is to improve the performance of solving a problem of deformable solid mechanics through the use of GPGPU. The paper describes technologies for computing systems using both a central and a graphics processor, and provides motivation for using CUDA technology as the efficient one. The paper also analyses methods to solve the problem of defining natural frequencies and waveforms, i.e. the iteration method in the subspace. The method includes several stages. The paper considers the most resource-hungry stage, which defines the stiffness matrix in the subspace of eigenforms, and gives the mathematical interpretation of this stage. The choice of the GPU as a computing device is justified. The paper presents an algorithm for calculating the stiffness matrix in the subspace of eigenforms, taking into consideration the features of the input data. The global stiffness matrix is very sparse, and its size can reach tens of millions. Therefore, it is represented as a set of the stiffness matrices of the single elements of a model. The paper analyses methods of data representation in the software and selects the best practices for GPU computing. It describes the software implementation using CUDA technology to calculate the stiffness matrix in the subspace of eigenforms. Due to the nature of the input data, it is impossible to use the universal libraries of matrix computations (cuSPARSE and cuBLAS) for loading the GPU. For efficient use of GPU resources in the software implementation, the stiffness matrices of the elements are built into block matrices of a special form. The advantages of using shared memory in GPU calculations are described. The transfer to GPU computation allowed a twentyfold increase in performance (as compared to the multithreaded CPU implementation) on a model of middle dimensions (about 2 million degrees of freedom).
Such an acceleration of one stage speeds up the computation of the natural frequencies and waveforms by the iteration method in a subspace accordingly.
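The resource-hungry stage described in this abstract is, at its core, the congruence projection K_r = VᵀKV of the sparse global stiffness matrix K onto the current subspace basis V. A minimal CPU-side sketch with SciPy (illustrative names and a toy spring-chain matrix; the paper's actual implementation is a CUDA kernel operating on per-element block matrices):

```python
import numpy as np
from scipy import sparse

def project_stiffness(K, V):
    """Project a sparse global stiffness matrix K onto the subspace
    spanned by the columns of V (approximate eigenvectors): K_r = V^T K V."""
    return V.T @ (K @ V)

# Small illustrative example: a 1-D chain of unit springs.
n = 6
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

# Subspace basis: the first two eigenvectors (dense solve, for the sketch only).
w, vecs = np.linalg.eigh(K.toarray())
V = vecs[:, :2]

K_r = project_stiffness(K, V)
print(np.round(K_r, 6))
```

For an exact eigenvector basis the projected matrix is diagonal, which is what subspace iteration converges towards; in the iterative setting V holds the current approximate eigenvectors and K_r feeds the reduced eigenproblem of the next iteration.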

  6. Twisting gravitational waves and eigenvector fields for SL(2,C) on an infinite jet

    Directory of Open Access Journals (Sweden)

    J. D. Finley III

    2000-07-01

    Full Text Available A system of coupled vector-field-valued partial differential equations is presented, the solutions to which would determine two coupled, infinite-dimensional vector-field realizations of the group SL(2,C). While the general solution is (partially) presented, the complicated nature of that solution is deplored, and the hope expressed that someone can replace it by something much more natural. The physical origins of the problem are briefly described. The problem arises out of searches for Bäcklund transforms of a system of PDEs that describe twisting, Petrov type N solutions of Einstein's vacuum field equations.

  7. Optimal Inconsistency Repairing of Pairwise Comparison Matrices Using Integrated Linear Programming and Eigenvector Methods

    Directory of Open Access Journals (Sweden)

    Haiqing Zhang

    2014-01-01

    Full Text Available Satisfying the consistency requirements of a pairwise comparison matrix (PCM) is a critical step in decision-making methodologies. An algorithm is proposed to find a new modified consistent PCM that can replace the original inconsistent PCM in the analytic hierarchy process (AHP) or in fuzzy AHP. This paper defines the modified consistent PCM as a combination of the original inconsistent PCM and an adjustable consistent PCM. The algorithm adopts a segment tree to gradually approach the greatest lower bound of the distance from the original PCM, in order to obtain the middle value of the adjustable PCM. It also proposes a theorem to obtain the lower and upper values of the adjustable PCM based on two constraints. The experiments for crisp elements show that the proposed approach preserves more of the original information than previous works at the same consistency value. The convergence rate of the algorithm is significantly faster than that of previous works with respect to different parameters. The experiments for fuzzy elements show that the method can obtain suitable modified fuzzy PCMs.
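For context, the eigenvector method that such repair algorithms build on derives the priority weights from the principal (Perron) eigenvector of the PCM and quantifies inconsistency with Saaty's consistency index CI = (λ_max − n)/(n − 1). A small sketch of that standard machinery (not the authors' repair algorithm; names are illustrative):

```python
import numpy as np

def ahp_weights_and_ci(A):
    """Principal-eigenvector weights and consistency index of a PCM.
    A is a positive reciprocal matrix with A[i, j] ~ w_i / w_j."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)           # principal (Perron) eigenvalue
    lam_max = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                       # normalised priority vector
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)          # Saaty's consistency index
    return w, ci

# A perfectly consistent 3x3 PCM built from the weights (0.6, 0.3, 0.1).
w_true = np.array([0.6, 0.3, 0.1])
A = w_true[:, None] / w_true[None, :]
w, ci = ahp_weights_and_ci(A)
print(np.round(w, 3), round(ci, 6))
```

For a perfectly consistent PCM, λ_max equals n, so CI is zero; repair algorithms like the one above aim to modify an inconsistent PCM until CI (or the derived consistency ratio) falls below an acceptance threshold.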

  8. Finite Element-Galerkin Approximation of the Eigenvalues and Eigenvectors of Selfadjoint Problems

    Science.gov (United States)

    1988-07-01


  9. Application of covariance clouds for estimating the anisotropy ellipsoid eigenvectors, with case study in uranium deposit

    International Nuclear Information System (INIS)

    Jamali Esfahlan, D.; Madani, H.; Tahmaseb Nazemi, M. T.; Mahdavi, F.; Ghaderi, M. R.; Najafi, M.

    2010-01-01

    Various methods of kriging and nonlinear geostatistics, considered acceptable methods for resource and reserve estimation, have characteristics such as minimum estimation variance in their nature, and accurate results within an acceptable confidence-level range can be achieved if the parameters required for the estimation are determined accurately. If the determined parameters are not sufficiently accurate, 3-D geostatistical estimations will no longer be reliable, and consequently all the quantitative parameters of the mineral deposit (e.g. grade-tonnage variations) will be misinterpreted. One of the most significant parameters for 3-D geostatistical estimation is the anisotropy ellipsoid. The anisotropy ellipsoid is important for geostatistical estimations because it determines the samples in different directions required for accomplishing the estimation. The aim of this paper is to illustrate a simpler and time-saving analytical method that can use geophysical or geochemical analysis data from the core length of boreholes for modelling the anisotropy ellipsoid. By this method, which is based on the distribution of covariance clouds in the 3-D sampling space of a deposit, the magnitudes, ratios, azimuth and plunge of the major axis, semi-major axis and minor axis determine the ore-grade continuity within the deposit, and finally the anisotropy ellipsoid of the deposit is constructed. A case study of a uranium deposit is also discussed analytically to illustrate the application of this method.

  10. Eigenvector decomposition of full-spectrum x-ray computed tomography.

    Science.gov (United States)

    Gonzales, Brian J; Lalush, David S

    2012-03-07

    Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral responses of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of the FBP reconstructions onto the basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade-off along the energy axis was significantly improved using the eigenvalue-decomposition basis functions.
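The basis-function idea can be sketched as follows: eigen-decompose a Gram matrix of material spectral responses, keep the leading eigenvectors as basis functions, and project noisy spectra onto them so that off-subspace noise is discarded. Everything below (materials, energy grid, noise level) is invented for illustration and is not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectral responses of 3 materials over 32 energy bins.
energies = np.linspace(20.0, 80.0, 32)
materials = np.stack([
    np.exp(-energies / 25.0),                  # soft-tissue-like attenuation
    np.exp(-energies / 40.0) + 0.3,            # bone-like
    np.exp(-((energies - 50.0) / 8.0) ** 2),   # contrast agent with a spectral peak
])

# Eigen-decompose the material Gram matrix to get basis functions.
C = materials.T @ materials
eigvals, eigvecs = np.linalg.eigh(C)
basis = eigvecs[:, -3:]          # three leading eigenvectors

# Noisy measurement: a mixture of the materials plus detector noise.
clean = 0.5 * materials[0] + 0.2 * materials[2]
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)

# Denoise by projecting onto the basis and reconstructing.
denoised = basis @ (basis.T @ noisy)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

Because the clean spectrum lies in the span of the leading eigenvectors, the projection removes the noise components orthogonal to that subspace, which is the mechanism behind the reported SNR improvement along the energy axis.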

  11. Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix

    Science.gov (United States)

    Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia

    2011-03-01

    During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials with a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.

  12. Conjugacy classes in the Weyl group admitting a regular eigenvector and integrable hierarchies

    International Nuclear Information System (INIS)

    Delduc, F.; Feher, L.

    1994-10-01

    The classification of the integrable hierarchies in the Drinfeld-Sokolov (DS) approach is studied. The DS construction, originally based on the principal Heisenberg subalgebra of an affine Lie algebra, has been recently generalized to arbitrary graded Heisenberg subalgebras. The graded Heisenberg subalgebras of an untwisted loop algebra l(G) are classified by the conjugacy classes in the Weyl group of G, but a complete classification of the hierarchies obtained from generalized DS reductions is still missing. The main result presented here is the complete list of the graded regular elements of l(G) for G a classical Lie algebra or G2, extending previous results on the gl(n) case. (author). 9 refs., 4 tabs

  13. An EEG-Based Biometric System Using Eigenvector Centrality in Resting State Brain Networks

    NARCIS (Netherlands)

    Fraschini, M.; Hillebrand, A.; Demuru, M.; Didaci, L.; Marcialis, G.L.

    2015-01-01

    Recently, there has been a growing interest in the use of brain activity for biometric systems. However, so far these studies have focused mainly on basic features of the Electroencephalography. In this study we propose an approach based on phase synchronization, to investigate personal distinctive
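Eigenvector centrality, the network measure named in the title, assigns each node the corresponding component of the principal eigenvector of the adjacency (or connectivity) matrix, so that a node is central when it is linked to other central nodes. A minimal power-iteration sketch (illustrative only; not the authors' EEG pipeline, and the toy graph is made up):

```python
import numpy as np

def eigenvector_centrality(A, iters=1000, tol=1e-10):
    """Eigenvector centrality: the Perron eigenvector of a symmetric,
    non-negative adjacency (or connectivity) matrix, by power iteration."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        x_new = A @ x
        x_new = x_new / np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy graph: node 0 is connected to everyone, nodes 1-2 also to each other.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
c = eigenvector_centrality(A)
print(c)
```

In the EEG setting the adjacency matrix would be replaced by a phase-synchronization (functional connectivity) matrix between sensors, and the resulting centrality profile serves as the subject's feature vector.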

  14. Eigenvalues and eigenvectors of the translation matrices of spherical waves of multiple-scattering theory

    International Nuclear Information System (INIS)

    Torrini, M.

    1983-01-01

    The exponential nature of the translation matrix G of spherical free waves has been set forth in a previous paper. The explicit expression of the exponential form of the translation matrix is given here, once the eigenvectors and the eigenvalues of G have been found. In addition, the eigenproblem relative to the matrix which transforms outgoing waves scattered by a centre into a set of spherical free waves centred at a different point is solved.

  15. Eigenvector localization as a tool to study small communities in online social networks

    Czech Academy of Sciences Publication Activity Database

    Slanina, František; Konopásek, Z.

    2010-01-01

    Roč. 13, č. 6 (2010), s. 699-723 ISSN 0219-5259 R&D Projects: GA MŠk OC09078 Institutional research plan: CEZ:AV0Z10100520 Keywords : networks * localization Subject RIV: BE - Theoretical Physics Impact factor: 1.213, year: 2010 http://www.worldscinet.com/acs/13/1306/S0219525910002840.html

  16. Resolution function normalisation and secondary extinction in neutron triple-axis spectrometry

    International Nuclear Information System (INIS)

    Tindle, G.L.

    1987-01-01

    The theory of resolution correction in triple-axis spectrometry is developed from first principles. It is demonstrated that for ideally imperfect thin crystals the formulation coincides with that introduced initially by Cooper and Nathans and subsequently considered by Dorner. The predicted energy variation of the peak Bragg reflectivities of the monochromator and analyser crystals in Bragg-case scattering agrees with experimental data. In the Laue case, to obtain results compatible with experiment one has to invoke theories of secondary extinction. In an attempt to accommodate these observations, a new finite-threshold model of secondary extinction is proposed which interpolates between thin-crystal formulas and the conventional secondary-extinction formulas obtained in the zero-threshold limit. (orig.)

  17. Narrative of certitude for uncertainty normalisation regarding biotechnology in international organisations

    OpenAIRE

    Heath , Robert; Proutheau , Stéphanie

    2012-01-01

    Narrative theory has gained prominence, especially as a companion to the social construction of reality. In matters of regulation and normalization, narratives socially and culturally construct relevant contingencies, uncertainties, values and decisions. Here, decision dynamics pit risk generators, bearers, bearers' advocates, arbiters, researchers and informers as advocates and counter-advocates (Palmlund, 2009). The decision-relevant narrative components (actors, themes, sc...

  18. Kupffer cells are activated in cirrhotic portal hypertension and not normalised by TIPS.

    Science.gov (United States)

    Holland-Fischer, Peter; Grønbæk, Henning; Sandahl, Thomas Damgaard; Moestrup, Søren K; Riggio, Oliviero; Ridola, Lorenzo; Aagaard, Niels Kristian; Møller, Holger Jon; Vilstrup, Hendrik

    2011-10-01

    Hepatic macrophages (Kupffer cells) undergo inflammatory activation during the development of portal hypertension in experimental cirrhosis; this activation may play a pathogenic role or be an epiphenomenon. Our objective was to study serum soluble CD163 (sCD163), a sensitive marker of macrophage activation, before and after reduction of the portal venous pressure gradient by insertion of a transjugular intrahepatic portosystemic shunt (TIPS) in patients with cirrhosis. sCD163 was measured in 11 controls and 36 patients before and 1, 4 and 26 weeks after TIPS. We used lipopolysaccharide binding protein (LBP) levels as a marker of endotoxinaemia. Liver function and clinical status of the patients were assessed by galactose elimination capacity and Model for End Stage Liver Disease score. The sCD163 concentration was more than threefold higher in the patients than in the controls (median 5.22 mg/l vs 1.45 mg/l, p < […]), correlated with the portal venous pressure gradient (r(2)=0.24, p < […]) and was higher in the portal vein (p < […]) […] portal hypertension. The activation was not alleviated by the mechanical reduction of portal hypertension and the decreasing signs of endotoxinaemia. The findings suggest that Kupffer cell activation is a constitutive event that may play a pathogenic role for portal hypertension.

  19. Validation of the CoaguChek XS international normalised ratio point ...

    African Journals Online (AJOL)

    laboratory automated coagulation analysers up to INR values of 3.0. ... To evaluate the clinical utility of the CoaguChek XS for monitoring of patients on standard warfarin therapy (INR 2 - 3) as well .... quality control system with satisfactory.

  20. Semi-supervised probabilistics approach for normalising informal short text messages

    CSIR Research Space (South Africa)

    Modupe, A

    2017-03-01

    Full Text Available The growing use of informal social text messages on Twitter is one of the known sources of big data. These type of messages are noisy and frequently rife with acronyms, slangs, grammatical errors and non-standard words causing grief for natural...

  1. Kupffer cells are activated in cirrhotic portal hypertension and not normalised by TIPS

    DEFF Research Database (Denmark)

    Holland-Fischer, Peter; Grønbæk, Henning; Sandahl, Thomas Damgaard

    2011-01-01

    INTRODUCTION: Hepatic macrophages (Kupffer cells) undergo inflammatory activation during the development of portal hypertension in experimental cirrhosis; this activation may play a pathogenic role or be an epiphenomenon. Our objective was to study serum soluble CD163 (sCD163), a sensitive marker...... in the patients (52.2 vs 30.4 μg/l, p < […]) […] portal hypertension. The activation was not alleviated by the mechanical...... reduction of portal hypertension and the decreasing signs of endotoxinaemia. The findings suggest that Kupffer cell activation is a constitutive event that may play a pathogenic role for portal hypertension....

  2. The normalisation of cannabis use among young people: Symbolic boundary work in focus groups

    DEFF Research Database (Denmark)

    Jarvinen, Margaretha; Demant, Jakob Johan

    2011-01-01

    This paper analyses ‘techniques of neutralisation’ among young people discussing cannabis in focus group interviews. The paper is based on data from focus group interviews with young Danes followed from when they were 14–15 years old in 2004 until they were 18–19 years old in 2008. In this period......, the participants’ attitudes towards cannabis undergo a radical change from being negative and sceptical into being predominantly positive and accepting; a change we describe as a ‘normalisation’ of cannabis use. Four techniques of neutralisation are identified in this process. First, the participants redefine...... the setting of cannabis use, simultaneously creating a new type of togetherness: relaxed social intoxication. Second, the effects of cannabis use are transformed from being ‘strange’ and ‘unpredictable’ to being ‘controllable’ by the individual user. Third, participants change their classification of cannabis...

  3. An Outdoor and Environmental Education Community of Practice: Self Stylisation or Normalisation?

    Science.gov (United States)

    Preston, Lou

    2012-01-01

    In this article, I draw on a qualitative longitudinal study to explore the influence of a tertiary Outdoor and Environmental Education (OEE) course on the formation of environmental ethics among students. In this task, I bring together Lave & Wenger (1991) and Wenger's (1998) concept of "communities of practice" and Michel Foucault's later work on…

  4. Consistent haul road condition monitoring by means of vehicle response normalisation with Gaussian processes

    CSIR Research Space (South Africa)

    Heyns, T

    2012-12-01

    Full Text Available Suboptimal haul road management policies such as routine, periodic and urgent maintenance may result in unnecessary cost, both to roads and vehicles. A recent idea is to continually assess haul road condition based on measured vehicle response...

  5. Accounting standard-setting by the IASB: the case of income; Le processus de normalisation comptable par l'IASB : le cas du résultat

    OpenAIRE

    Le Manh-Béna, Anne

    2009-01-01

    This research aims to contribute to the understanding of the IASB's standard-setting process through a single topic, the definition of income and its presentation in financial statements. Two research questions are addressed: what is the position expressed by the participants in the due process concerning the IASB's project on the definition and the presentation of income? How can the pugnacity of the IASB in imposing a new definition of income be explained? The theoretical framework of this re...

  6. The Normalising Power of Marriage Law: An Irish Genealogy, 1945 – 2010

    OpenAIRE

    McGowan, Deirdre

    2015-01-01

    Marriage law is often conceptualised as an instrument of power that illegitimately imposes the will of the State on its citizens. Paradoxically, marriage law is also offered as a route to liberation. In this thesis, I question the efficacy of this type of analysis by investigating the actual power effects of marriage law. Using Michel Foucault’s concepts of bio-power and government, and his genealogical approach to history, I identify the role played by marriage law in governing the social do...

  7. Public contracts in the Dutch energy sector. A strategic investigation with regard to normalisation

    International Nuclear Information System (INIS)

    Van der Feen, E.J.; Maas, P.J.J.

    1995-01-01

    A number of strategic investigations are carried out to determine whether and to what extent normalization of public contracts can support the position of Dutch business and industry in the European market. The strategic investigation in this report is limited to clusters within the Dutch energy utilities sector concerning the production, transportation and distribution of electricity and heat, and the distribution of natural gas in the Netherlands. The results of this report can support those companies that will acquire orders via public contracts in the future; companies that wish to continue existing relations with tendering services, if these change to public contracts; and tendering services that will have to place their orders via public contracts. Relevant European guidelines and accompanying procedures are outlined. The economic interest of the total Dutch energy sector and of the different energy clusters in the Netherlands is discussed. Attention is also paid to the process of normalization, and to the role of standards and other technical documents regarding the guidelines on Public Contracts. An inventory of available standards and draft standards is given for each energy cluster. Finally, an indication is given of the actual compliance with the guidelines. 5 figs., 4 tabs., 16 appendices

  8. Standardization in dust emission measurement; Mesure des emissions de poussieres normalisation

    Energy Technology Data Exchange (ETDEWEB)

    Perret, R. [INERIS, 60 - Verneuil-en-Halatte, (France)

    1996-12-31

    The European Standardization Committee (CEN TC 264 WG5) is developing a new reference method for measuring particulate emissions, suitable for concentrations below 20 mg/m{sup 3} and especially for concentrations around 5 mg/m{sup 3}; the measuring method should be applicable to waste incinerator effluents and more generally to industrial effluents. Testing protocols and data analysis have been examined, and repeatability and reproducibility issues are discussed.

  9. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. In particular, we consider semi-parametric Bayesian inference in connection with both inhomogeneous Markov point process models...... and pairwise interaction point processes....

  10. Regimes of Performance: Practices of the Normalised Self in the Neoliberal University

    Science.gov (United States)

    Morrissey, John

    2015-01-01

    Universities today inescapably find themselves part of nationally and globally competitive networks that appear firmly inflected by neoliberal concerns of rankings, benchmarking and productivity. This, of course, has in turn led to progressively anticipated and regulated forms of academic subjectivity that many fear are overly econo-centric in…

  11. Am I a Woman? The Normalisation of Woman in US History

    Science.gov (United States)

    Schmidt, Sandra J.

    2012-01-01

    The curriculum of US History has improved substantially in its presentation of women over the 40 years since Trecker's 1971 study of US History textbooks. While studies show increased inclusions, they also suggest that women have not yet claimed their own place in the school curriculum. This paper seeks to better understand the woman who is…

  12. Social Cognitions that Normalise Sexual Harassment of Women at Work: The Role of Moral Disengagement

    OpenAIRE

    Page, Thomas Edward

    2015-01-01

    Sexual harassment against women represents aggressive behaviour that is often enacted instrumentally, in response to a threatened sense of masculinity and male identity (cf. Maass & Cadinu, 2006). To date, however, empirical and theoretical attention to the social-cognitive processes that regulate workplace harassment is scant. Drawing on Social Cognitive Theory (Bandura, 1986), the current thesis utilises the theoretical concept of moral disengagement in order to address this important gap i...

  13. HIV scale-up in Mozambique: Exceptionalism, normalisation and global health

    Science.gov (United States)

    Høg, Erling

    2014-01-01

    The large-scale introduction of HIV and AIDS services in Mozambique from 2000 onwards occurred in the context of deep political commitment to sovereign nation-building and an important transition in the nation's health system. Simultaneously, the international community encountered a willing state partner that recognised the need to take action against the HIV epidemic. This article examines two critical policy shifts: sustained international funding and public health system integration (the move from parallel to integrated HIV services). The Mozambican government struggles to support its national health system against privatisation, NGO competition and internal brain drain. This is a sovereignty issue. However, the dominant discourse on self-determination shows a contradictory twist: it is part of the political rhetoric to keep the sovereignty discourse alive, while the real challenge is coordination, not partnerships. Nevertheless, we need more anthropological studies to understand the political implications of global health funding and governance. Other studies need to examine the consequences of public health system integration for the quality of access to health care. PMID:24499102

  14. CCTV Surveillance in Primary Schools: Normalisation, Resistance, and Children's Privacy Consciousness

    Science.gov (United States)

    Birnhack, Michael; Perry-Hazan, Lotem; German Ben-Hayun, Shiran

    2018-01-01

    This study explored how primary school children perceive school surveillance by Closed Circuit TV systems (CCTVs) and how their perceptions relate to their privacy consciousness. It drew on interviews with 57 children, aged 9-12, who were enrolled in three Israeli public schools that had installed CCTVs, and on information gathered from members of…

  15. THE INSTITUTION OF ACCOUNTING NORMALISATION IN ROMANIA – HISTORY AND PRESENT

    Directory of Open Access Journals (Sweden)

    Aristita Rotila

    2014-07-01

    Full Text Available The institution of accounting normalisation at a national level can essentially be public, private or mixed. On its nature depend the way of accepting/imposing the accounting norms and also the character of these norms, a character which can be more or less restrictive. The present article is a study of the institution of accounting normalisation in Romania from its beginning (when the process of normalising Romanian accounting started) to the present, following its changes through two stages which have marked the evolution of our country in the second half of the 20th century and the beginning of the 21st century: the stage of socialism, with a centralised economy, and the stage of transition to a market economy, which started right after the 1989 Revolution. Within the post-revolutionary stage, a mixed body was created under the Ministry of Finance as the institution of accounting normalisation in Romania; it brings together a large series of “actors” interested in accounting information and has the role of allowing those actors to become involved in the process of normalisation, which would let Romanian accounting normalisation pass from an exclusively public approach to a mixed one.

  16. The normalisation of substance abuse among young travellers in Ireland: implications for practice.

    Science.gov (United States)

    Van Hout, Marie Claire; Connor, Sean

    2008-01-01

    This report presents the findings of an exploratory study aimed at assessing the nature and extent of drug use amongst a group of young Travellers (aged twelve to eighteen years) in the South Eastern Region of Ireland. The results are intended to inform the Irish policy debate by providing data on patterns of youth drug use, drug-related risk behaviours, the impact of drug use on the Traveller community and issues regarding access to services. The young Travellers exhibited similar trends to "settled" adolescents with regard to drug use trends and attitudes but reported poor levels of health awareness and knowledge of drug services. The social exclusion of young Travellers puts them at risk of problematic drug use due to issues of poor literacy levels, family crisis, discrimination, poor knowledge of service provision relating to drug education and treatment, and the location of halting sites in areas of high drug usage.

  17. Standardisation and regulation of recycled materials; Normalisation et réglementation des matériaux recyclés

    OpenAIRE

    Turtschy, J.-C.; Kellenberger, M.

    2004-01-01

    The political will toward sustainable development and environmental protection has been given concrete form in legislation since 1983. It took almost 20 years for these principles to be applied comprehensively down to the cantonal level; two decades to pass from a theoretical idea to real applications in the field of construction. However, in view of the numerous laws, ordinances, prescriptions, directives, regulations, standards, ...

  18. The Effects of Normalisation of the Satisfaction of Novice End-User Querying Databases

    Directory of Open Access Journals (Sweden)

    Conrad Benedict

    1997-05-01

    Full Text Available This paper reports the results of an experiment that investigated the effects different structural characteristics of relational databases have on the information satisfaction of end-users querying databases. The results show that unnormalised tables adversely affect end-user satisfaction. The adverse effect on end-user satisfaction is attributable primarily to the use of non-atomic data. In this study, the effect of repeating fields on end-user satisfaction was not significant. The study contributes to the further development of theories of individual adjustment to information technology in the workplace by alerting organisations and, in particular, database designers to the ways in which the structural characteristics of relational databases may affect end-user satisfaction. More importantly, the results suggest that database designers need to clearly identify the domains for each item appearing in their databases. These issues are of increasing importance because of the growth in the amount of data available to end-users in relational databases.

  19. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    Science.gov (United States)

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. The PageRank and HITS methods are two highly successful applications of modern linear algebra in computer science and engineering. They constitute the essential technologies that account for the immense growth and…
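PageRank itself is the principal eigenvector of the Google matrix, usually computed by power iteration; a minimal sketch under assumed conventions (column-stochastic transition matrix, uniform teleportation, a made-up four-page graph; not the thesis's accelerated variants):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100, tol=1e-12):
    """PageRank by power iteration on the column-stochastic Google matrix."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=0)
    # Column-stochastic transition matrix; dangling nodes spread uniformly.
    P = np.where(out_deg > 0, adj / np.where(out_deg > 0, out_deg, 1), 1.0 / n)
    r = np.ones(n) / n
    for _ in range(iters):
        r_new = damping * (P @ r) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Tiny web graph: adj[i, j] = 1 means page j links to page i.
adj = np.array([[0, 0, 1, 0],
                [1, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
r = pagerank(adj)
print(np.round(r, 4))
```

The iteration converges geometrically at a rate governed by the damping factor, which is why acceleration techniques (extrapolation, adaptive or aggregation methods) target the slowly converging components of this power sequence.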

  20. An adjoint-based scheme for eigenvalue error improvement

    International Nuclear Information System (INIS)

    Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.

    2011-01-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first-order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and the adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed, in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalisations are applied to the eigenvector. Two such normalisations are considered; the first of these is a fission-source type of normalisation and the second is an eigenvector normalisation. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretisations. It is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves the fidelity of the calculation, it allows the reliability of numerical schemes to be assessed and could be used to guide mesh adaption algorithms or to automate mesh generation schemes. (author)
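The eigenvalue defect correction described above can be illustrated on a small generalised eigenproblem Ax = λBx. With an approximate pair (λ̃, x̃) and the adjoint (left) eigenvector y, the first-order estimate of the eigenvalue error is δλ ≈ yᵀ(A − λ̃B)x̃ / (yᵀBx̃); this is the textbook perturbation form, not the authors' transport-specific scheme, and the matrices below are random stand-ins:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Small symmetric generalised eigenproblem A x = lambda B x.
n = 8
M = rng.normal(size=(n, n))
A = M + M.T
B = np.eye(n) + 0.1 * np.diag(rng.random(n))   # SPD "mass"-like matrix

lams, vecs = eigh(A, B)
lam_exact, x_exact = lams[0], vecs[:, 0]

# Mimic discretisation error in both the eigenvalue and the eigenvector.
lam_approx = lam_exact + 0.05
x_approx = x_exact + 1e-3 * rng.normal(size=n)

# First-order defect correction from the residual and the adjoint solution.
# A and B are symmetric here, so the adjoint (left) eigenvector coincides
# with the right one; for non-selfadjoint operators y is computed separately.
y = x_approx
residual = (A - lam_approx * B) @ x_approx
delta = (y @ residual) / (y @ B @ x_approx)
lam_corrected = lam_approx + delta

print(abs(lam_approx - lam_exact), abs(lam_corrected - lam_exact))
```

The correction removes the first-order part of the eigenvalue error, leaving a residual error that is quadratic in the eigenvector perturbation; the choice of eigenvector normalisation only changes the denominator, which is what simplifies the eigenvalue equation in the abstract's two normalisation variants.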

  1. Energy engineering. Standardization in the field of refrigeration; Genie energetique. Normalisation dans le domaine du froid

    Energy Technology Data Exchange (ETDEWEB)

    Legent, N. [Association Francaise de Normalisation (AFNOR), 92 - Paris-la-Defense (France)

    2000-10-01

    This paper deals with the recent advances in the standardization and regulation of refrigeration and cold-production systems used in industry and in the commercial sector: 1 - international standardization; 2 - European standardization; 3 - international standardization in the domain of refrigeration (ISO/TC 86 'refrigeration and air-conditioning', CEI 61C 'domestic refrigerating appliances', CEI 61D 'air-conditioning appliances for domestic and similar use', ISO/TC 104 'freight containers'); 4 - European standardization in the domain of refrigeration (CEN/TC 182 'refrigerating systems - safety and environment requirements', CEN/TC 44 'household refrigerating appliances and refrigerating furniture for stores', CEN/TC 113 'heat pumps and air-conditioners', CEN/TC 141 'manometers and thermometers, measuring and recording means', CENELEC 61 'safety of domestic electrical appliances'); 5 - French standardization (international file, European file, French file, French standardization system); 6 - national standardization in other countries; 7 - regulations in the domain of refrigeration. (J.S.)

  2. Validation of reference genes for real-time quantitative PCR normalisation in non-heading Chinese cabbage

    NARCIS (Netherlands)

    Xiao, D.; Zhang, N.; Jianjun Zhao, Jianjun; Bonnema, A.B.; Hou, X.L.

    2012-01-01

    Non-heading Chinese cabbage is an important vegetable crop that includes pak choi, caixin and several Japanese vegetables like mizuna, mibuna and komatsuna. Gene expression studies are frequently used to unravel the genetics of complex traits and in such studies the proper selection of reference

  3. Standards related to harmonic disturbances and its evolution; La normalisation relative aux perturbations harmoniques et son evolution

    Energy Technology Data Exchange (ETDEWEB)

    Deflandre, T. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches

    1996-09-01

    The objective of this document is to provide an overview of the various normative documents and regulations that may be useful in the field of harmonics as references to a network user, fitter, manufacturer or operator. Several documents currently under development at Electricite de France are also presented, together with the connection regulation included in the Emeraude contract. The following points are discussed: hierarchy of normative documents; harmonic emission standards, in particular the NF EN 61000-3-2 standard 'Limits for the emission of harmonic currents for equipment with an input current less than or equal to 16 A per phase'; document status; immunity standards; compatibility and environment standards; documents under study; and the Emeraude contract.

  4. From playfulness and self-centredness via grand expectations to normalisation: a psychoanalytical rereading of the history of molecular genetics.

    Science.gov (United States)

    Zwart, H A E

    2013-11-01

    In this paper, I will reread the history of molecular genetics from a psychoanalytical angle, analysing it as a case history. Building on the developmental theories of Freud and his followers, I will distinguish four stages, namely: (1) oedipal childhood, notably the epoch of model building (1943-1953); (2) the latency period, with a focus on the development of basic skills (1953-1989); (3) adolescence, exemplified by the Human Genome Project, with its fierce conflicts, great expectations and grandiose claims (1989-2003) and (4) adulthood (2003-present), during which revolutionary research areas such as molecular biology and genomics have achieved a certain level of normalcy and have evolved into a normal science. I will indicate how a psychoanalytical assessment conducted in this manner may help us to interpret and address some of the key normative issues that have been raised with regard to molecular genetics over the years, such as 'relevance', 'responsible innovation' and 'promise management'.

  5. The Positive Influence of Active Learning in a Lecture Hall: An Analysis of Normalised Gain Scores in Introductory Environmental Engineering

    Science.gov (United States)

    Kinoshita, Timothy J.; Knight, David B.; Gibbes, Badin

    2017-01-01

    Burgeoning college enrolments and insufficient funding to higher education have expanded the use of large lecture courses. As this trend continues, it is important to ensure that students can still learn in those challenging learning environments. Within education broadly and undergraduate engineering specifically, active learning pedagogies have…
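    The normalised gain in the title is conventionally computed as Hake's gain: the fraction of the achievable improvement a student actually realises between pre-test and post-test. A minimal sketch (the paper's exact scoring procedure is not reproduced here; the function name is illustrative):

```python
def normalised_gain(pre_pct, post_pct):
    """Hake's normalised gain g = (post - pre) / (100 - pre): the share
    of the available improvement actually achieved between a pre-test
    and a post-test score, both expressed in percent.  A standard
    definition in engineering-education research.
    """
    return (post_pct - pre_pct) / (100.0 - pre_pct)
```

    For example, moving from 40% to 70% realises half of the possible 60-point improvement, so g = 0.5 regardless of the starting score — which is what makes the measure comparable across classes of different incoming ability.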

  6. Normalising corporate counterinsurgency: Engineering consent, managing resistance and greening destruction around the Hambach coal mine and beyond

    NARCIS (Netherlands)

    Dunlap, A.A.

    2018-01-01

    The German Rhineland is home to the world's largest opencast lignite coal mine and human-made hole – the Hambach mine. Over the last seven years, RWE, the mine operator, has faced an increase in militant resistance, culminating in the occupation of the Hambacher Forest and acts of civil disobedience

  7. Seaweed supplements normalise metabolic, cardiovascular and liver responses in high-carbohydrate, high-fat fed rats.

    Science.gov (United States)

    Kumar, Senthil Arun; Magnusson, Marie; Ward, Leigh C; Paul, Nicholas A; Brown, Lindsay

    2015-02-02

    Increased seaweed consumption may be linked to the lower incidence of metabolic syndrome in eastern Asia. This study investigated the responses to two tropical green seaweeds, Ulva ohnoi (UO) and Derbesia tenuissima (DT), in a rat model of human metabolic syndrome. Male Wistar rats (330-340 g) were fed either a corn starch-rich diet or a high-carbohydrate, high-fat diet with 25% fructose in drinking water, for 16 weeks. High-carbohydrate, high-fat diet-fed rats showed the signs of metabolic syndrome leading to abdominal obesity, cardiovascular remodelling and non-alcoholic fatty liver disease. Food was supplemented with 5% dried UO or DT for the final 8 weeks only. UO lowered total final body fat mass by 24%, systolic blood pressure by 29 mmHg, and improved glucose utilisation and insulin sensitivity. In contrast, DT did not change total body fat mass but decreased plasma triglycerides by 38% and total cholesterol by 17%. UO contained 18.1% soluble fibre as part of 40.9% total fibre, and increased magnesium, while DT contained 23.4% total fibre, essentially as insoluble fibre. UO was more effective in reducing metabolic syndrome than DT, possibly due to the increased intake of soluble fibre and magnesium.

  8. Airborne particulates. European directives and standardization; Matieres particulaires dans l'air ambiant directives europeennes et normalisation

    Energy Technology Data Exchange (ETDEWEB)

    Houdret, J.L. [Ecole Nationale Superieure des Mines, 59 - Douai (France)

    1996-12-31

    The development of future European directives concerning atmospheric dusts and particulates, the organization of the committee in charge, measurement requirements and limit value determination processes are presented. The various measuring methods and instruments used for particulate and aerosol measurements are also reviewed.

  9. The EDF catalogue of technical specifications (reference HN), standardization center; Catalogue des specifications techniques EDF (reference HN) centre de normalisation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-31

    A list of EDF technical specifications, valid as of 01/01/1996, is presented. Specification domains such as electrical installations, equipment and materials, uninsulated and insulated conductors, measurement, control and command, electric power generating or transforming equipment, electrical appliances, telecommunications, and electronic and computer systems are covered.

  10. Standardised Radon Index (SRI): a normalisation of radon data-sets in terms of standard normal variables

    Directory of Open Access Journals (Sweden)

    R. G. M. Crockett

    2011-07-01

    During the second half of 2002, from late June to mid December, the University of Northampton Radon Research Group operated two continuous hourly-sampling radon detectors 2.25 km apart in the English East Midlands. This period included the Dudley earthquake (ML = 5, 22 September 2002) and also a smaller earthquake in the English Channel (ML = 3, 26 August 2002). Rolling/sliding windowed cross-correlation of the paired radon time-series revealed periods of simultaneous similar radon anomalies which occurred at the time of these earthquakes but at no other times during the overall radon monitoring period. Standardising the radon data in terms of probability of magnitude, analogous to the Standardised Precipitation Indices (SPIs) used in drought modelling, which effectively equalises different non-linear responses, reveals that the dissimilar relative magnitudes of the anomalies are in fact closely equiprobabilistic. Such methods could help in identifying anomalous signals in radon – and other – time-series and in evaluating their statistical significance in terms of earthquake precursory behaviour.
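    The standardisation described — converting each observation to a standard normal variable via its empirical probability, by analogy with the SPI — can be sketched as follows. This is a minimal illustration; the Weibull plotting-position formula and the function name are assumptions, not taken from the paper:

```python
from statistics import NormalDist

def standardised_index(series):
    """Map each observation of a time-series onto a standard normal
    variable via its empirical probability.  Each value is ranked,
    the rank is converted to a non-exceedance probability with the
    plotting position i / (n + 1), and the probability is passed
    through the inverse normal CDF.
    """
    n = len(series)
    order = sorted(range(n), key=lambda i: series[i])
    ranks = [0] * n
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return [NormalDist().inv_cdf(r / (n + 1)) for r in ranks]
```

    Because each detector's series is reduced to probabilities before the normal transform, anomalies of very different raw magnitude become directly comparable — which is how the "dissimilar relative magnitudes" of the two radon anomalies can be recognised as closely equiprobabilistic.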

  11. The thick left ventricular wall of the giraffe heart normalises wall tension, but limits stroke volume and cardiac output

    DEFF Research Database (Denmark)

    Smerup, Morten Holdgaard; Damkjær, Mads; Brøndum, Emil

    2016-01-01

    Giraffes - the tallest extant animals on Earth - are renowned for their high central arterial blood pressure, which is necessary to secure brain perfusion. The pressure, which may exceed 300 mmHg, has historically been attributed to an exceptionally large heart. Recently, this has been refuted by several studies demonstrating that the mass of the giraffe heart is similar to that of other mammals when expressed relative to body mass. It remains enigmatic, however, how the normal-sized giraffe heart generates such massive arterial pressures. We hypothesized that giraffe hearts have a small intraventricular cavity and a relatively thick ventricular wall, allowing for generation of high arterial pressures at normal left ventricular wall tension. In nine anaesthetized giraffes (495±38 kg), we determined in vivo ventricular dimensions using echocardiography along with intraventricular and aortic...
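    The hypothesis rests on the law of Laplace for a roughly spherical chamber: wall stress scales with pressure and cavity radius and inversely with wall thickness, so a small cavity and a thick wall together normalise wall tension despite very high pressure. A minimal sketch with illustrative numbers (not the paper's measurements):

```python
def laplace_wall_stress(pressure, radius, thickness):
    """Mean wall stress of a spherical chamber by the law of Laplace,
    sigma = P * r / (2 * h).  Returns stress in the same units as
    pressure when radius and thickness share a unit.  Illustrative
    sketch of the hypothesis only.
    """
    return pressure * radius / (2.0 * thickness)
```

    At a fixed pressure, halving the cavity radius or doubling the wall thickness each halves the wall stress — but a small cavity also limits stroke volume, hence the trade-off in the title.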

  12. Seaweed Supplements Normalise Metabolic, Cardiovascular and Liver Responses in High-Carbohydrate, High-Fat Fed Rats

    Directory of Open Access Journals (Sweden)

    Senthil Arun Kumar

    2015-02-01

    Increased seaweed consumption may be linked to the lower incidence of metabolic syndrome in eastern Asia. This study investigated the responses to two tropical green seaweeds, Ulva ohnoi (UO) and Derbesia tenuissima (DT), in a rat model of human metabolic syndrome. Male Wistar rats (330–340 g) were fed either a corn starch-rich diet or a high-carbohydrate, high-fat diet with 25% fructose in drinking water, for 16 weeks. High-carbohydrate, high-fat diet-fed rats showed the signs of metabolic syndrome leading to abdominal obesity, cardiovascular remodelling and non-alcoholic fatty liver disease. Food was supplemented with 5% dried UO or DT for the final 8 weeks only. UO lowered total final body fat mass by 24%, systolic blood pressure by 29 mmHg, and improved glucose utilisation and insulin sensitivity. In contrast, DT did not change total body fat mass but decreased plasma triglycerides by 38% and total cholesterol by 17%. UO contained 18.1% soluble fibre as part of 40.9% total fibre, and increased magnesium, while DT contained 23.4% total fibre, essentially as insoluble fibre. UO was more effective in reducing metabolic syndrome than DT, possibly due to the increased intake of soluble fibre and magnesium.

  13. Seaweed Supplements Normalise Metabolic, Cardiovascular and Liver Responses in High-Carbohydrate, High-Fat Fed Rats

    Science.gov (United States)

    Kumar, Senthil Arun; Magnusson, Marie; Ward, Leigh C.; Paul, Nicholas A.; Brown, Lindsay

    2015-01-01

    Increased seaweed consumption may be linked to the lower incidence of metabolic syndrome in eastern Asia. This study investigated the responses to two tropical green seaweeds, Ulva ohnoi (UO) and Derbesia tenuissima (DT), in a rat model of human metabolic syndrome. Male Wistar rats (330–340 g) were fed either a corn starch-rich diet or a high-carbohydrate, high-fat diet with 25% fructose in drinking water, for 16 weeks. High-carbohydrate, high-fat diet-fed rats showed the signs of metabolic syndrome leading to abdominal obesity, cardiovascular remodelling and non-alcoholic fatty liver disease. Food was supplemented with 5% dried UO or DT for the final 8 weeks only. UO lowered total final body fat mass by 24%, systolic blood pressure by 29 mmHg, and improved glucose utilisation and insulin sensitivity. In contrast, DT did not change total body fat mass but decreased plasma triglycerides by 38% and total cholesterol by 17%. UO contained 18.1% soluble fibre as part of 40.9% total fibre, and increased magnesium, while DT contained 23.4% total fibre, essentially as insoluble fibre. UO was more effective in reducing metabolic syndrome than DT, possibly due to the increased intake of soluble fibre and magnesium. PMID:25648511

  14. Mechanisms of Normalisation of Bone Metabolism during Recovery from Hyperthyroidism: Potential Role for Sclerostin and Parathyroid Hormone

    Directory of Open Access Journals (Sweden)

    Elżbieta Skowrońska-Jóźwiak

    2015-01-01

    Sclerostin, a protein expressed by osteocytes, is a negative regulator of bone formation. The aim of the study was to investigate the relationship between parathyroid hormone (PTH) and markers of bone metabolism and changes of sclerostin concentrations before and after treatment of hyperthyroidism. Patients and Methods. The study involved 33 patients (26 women; age (mean ± SD) 48 ± 15 years) with hyperthyroidism. Serum sclerostin, PTH, calcium, and bone markers [osteocalcin (OC) and collagen type I cross-linked C-telopeptide I (CTX)] were measured at diagnosis of hyperthyroidism and after treatment with thiamazole. Results. After treatment of hyperthyroidism a significant decrease in free T3 (FT3) and free T4 (FT4) concentrations was accompanied by a marked decrease of serum sclerostin (from 43.7 ± 29.3 to 28.1 ± 18.4 pmol/L; p<0.001), OC (from 35.6 ± 22.0 to 27.0 ± 14.3 ng/mL; p<0.001), and CTX (from 0.49 ± 0.35 to 0.35 ± 0.23 ng/dL; p<0.005), accompanied by an increase of PTH (from 29.3 ± 14.9 to 39.8 ± 19.8; p<0.001). During hyperthyroidism there was a positive correlation between sclerostin and CTX (rs=0.41, p<0.05) and between OC and thyroid hormones (with FT3 rs=0.42, with FT4 rs=0.45; p<0.05). Conclusions. Successful treatment of hyperthyroidism results in a significant decrease in serum sclerostin and bone marker concentrations, accompanied by an increase of PTH.

  15. Normalisation of surfactant protein -A and -B expression in the lungs of low birth weight lambs by 21 days old.

    Directory of Open Access Journals (Sweden)

    Jia Yin Soo

    Intrauterine growth restriction (IUGR) induced by placental restriction (PR) in the sheep negatively impacts lung and pulmonary surfactant development during fetal life. Using a sheep model of low birth weight (LBW), we found that there was an increase in mRNA expression of surfactant protein (SP)-A, -B and -C in the lung of LBW lambs but no difference in the protein expression of SP-A or -B. LBW also resulted in increased lysosome-associated membrane glycoprotein (LAMP-3) mRNA expression, which may indicate an increase in either the density of type II alveolar epithelial cells (AECs) or the maturity of type II AECs. Although there was an increase in glucocorticoid receptor (GR) and 11β-hydroxysteroid dehydrogenase (11βHSD-1) mRNA expression in the lung of LBW lambs, we found no change in the protein expression of these factors, suggesting that the increase in SP mRNA expression is not mediated by increased GC signalling in the lung. The increase in SP mRNA expression may, in part, be mediated by persistent alterations in hypoxia signalling as there was an increase in lung HIF-2α mRNA expression in the LBW lamb. The changes in the hypoxia signalling pathway that persist within the lung after birth may be involved in maintaining SP production in the LBW lamb.

  16. Nicorandil prevents endothelial dysfunction due to antioxidative effects via normalisation of NADPH oxidase and nitric oxide synthase in streptozotocin diabetic rats

    Directory of Open Access Journals (Sweden)

    Serizawa Ken-ichi

    2011-11-01

    Background: Nicorandil, an anti-angina agent, reportedly improves outcomes even in angina patients with diabetes. However, the precise mechanism underlying the beneficial effect of nicorandil on diabetic patients has not been examined. We investigated the protective effect of nicorandil on endothelial function in diabetic rats because endothelial dysfunction is a major risk factor for cardiovascular disease in diabetes. Methods: Male Sprague-Dawley rats (6 weeks old) were intraperitoneally injected with streptozotocin (STZ, 40 mg/kg) once a day for 3 days to induce diabetes. Nicorandil (15 mg/kg/day) and tempol (20 mg/kg/day, a superoxide dismutase mimetic) were administered in drinking water for one week, starting 3 weeks after STZ injection. Endothelial function was evaluated by measuring flow-mediated dilation (FMD) in the femoral arteries of anaesthetised rats. Cultured human coronary artery endothelial cells (HCAECs) were treated with high glucose (35.6 mM, 24 h) and reactive oxygen species (ROS) production with or without L-NAME (300 μM), apocynin (100 μM) or nicorandil (100 μM) was measured using fluorescent probes. Results: Endothelial function as evaluated by FMD was significantly reduced in diabetic as compared with normal rats (diabetes, 9.7 ± 1.4%; normal, 19.5 ± 1.7%; n = 6-7). There was a 2.4-fold increase in p47phox expression, a subunit of NADPH oxidase, and a 1.8-fold increase in total eNOS expression in diabetic rat femoral arteries. Nicorandil and tempol significantly improved FMD in diabetic rats (nicorandil, 17.7 ± 2.6%; tempol, 13.3 ± 1.4%; n = 6). Nicorandil significantly inhibited the increased expressions of p47phox and total eNOS in diabetic rat femoral arteries. Furthermore, nicorandil significantly inhibited the decreased expression of GTP cyclohydrolase I and the decreased dimer/monomer ratio of eNOS. ROS production in HCAECs was increased by high-glucose treatment, which was prevented by L-NAME and nicorandil, suggesting that eNOS itself might serve as a superoxide source under high-glucose conditions and that nicorandil might prevent ROS production from eNOS. Conclusions: These results suggest that nicorandil improved diabetes-induced endothelial dysfunction through antioxidative effects by inhibiting NADPH oxidase and eNOS uncoupling.

  17. Long-term performance of grid-connected photovoltaic plant - Appendix 1: normalised annual statistics; Langzeitverhalten von netzgekoppelten Photovoltaikanlagen 2 (LZPV2). Anhang 1: Normierte Jahresstatistiken

    Energy Technology Data Exchange (ETDEWEB)

    Renken, C.; Haeberlin, H.

    2003-07-01

    This is the second part of a four-part final report for the Swiss Federal Office of Energy (SFOE) made by the University of Applied Sciences in Burgdorf, Switzerland. This report presents the findings of a project begun in 1992 that monitored the performance of around 40 photovoltaic (PV) installations in Switzerland, including the demonstration installation on Mont Soleil and three test installations using modern thin-film technologies. The specific performance of the plant and the reductions in yield, caused mostly by increasing soiling of the modules over the years, were monitored. This extensive first appendix to the report describes the monitored plant in detail, presents the results of various performance measurements made and discusses the two monitoring concepts used. The specific yields over the years are presented in graphical form. Also, the meteorological equipment installed at the University of Applied Sciences in Burgdorf that was used to provide reference values is described.

  18. The combination of nutraceutical and simvastatin enhances the effect of simvastatin alone in normalising lipid profile without side effects in patients with ischemic heart disease

    Directory of Open Access Journals (Sweden)

    Giuseppe Campolongo

    2016-06-01

    Conclusions: The association of a nutraceutical and simvastatin 20 mg may be a valid therapeutic option for the treatment of hyperlipidemia in patients with ischemic heart disease intolerant to statin at high doses, in the absence of side effects. Further studies are needed to clarify the mechanisms of action of nutraceuticals.

  19. Are scabies and impetigo "normalised"? A cross-sectional comparative study of hospitalised children in northern Australia assessing clinical recognition and treatment of skin infections.

    Directory of Open Access Journals (Sweden)

    Daniel K Yeoh

    2017-07-01

    Complications of scabies and impetigo such as glomerulonephritis and invasive bacterial infection in Australian Aboriginal children remain significant problems and the overall global burden of disease attributable to these skin infections remains high despite the availability of effective treatment. We hypothesised that one factor contributing to this high burden is that skin infection is under-recognised and hence under-treated, in settings where prevalence is high. We conducted a prospective, cross-sectional study to assess the burden of scabies, impetigo, tinea and pediculosis in children admitted to two regional Australian hospitals from October 2015 to January 2016. A retrospective chart review of patients admitted in November 2014 (mid-point of the prospective data collection in the preceding year) was performed. Prevalence of documented skin infection was compared in the prospective and retrospective populations to assess clinician recognition and treatment of skin infections. 158 patients with median age 3.6 years, 74% Aboriginal, were prospectively recruited. 77 patient records were retrospectively reviewed. Scabies (8.2% vs 0.0%, OR N/A, p = 0.006) and impetigo (49.4% vs 19.5%, OR 4.0, 95% confidence interval [CI] 2.1-7.7) were more prevalent in the prospective analysis. Skin examination was only documented in 45.5% of cases in the retrospective review. Patients in the prospective analysis were more likely to be prescribed specific treatment for skin infection compared with those in the retrospective review (31.6% vs 5.2%, OR 8.5, 95% CI 2.9-24.4). Scabies and impetigo infections are under-recognised and hence under-treated by clinicians. Improving the recognition and treatment of skin infections by clinicians is a priority to reduce the high burden of skin infection and subsequent sequelae in paediatric populations where scabies and impetigo are endemic.

  20. Are scabies and impetigo "normalised"? A cross-sectional comparative study of hospitalised children in northern Australia assessing clinical recognition and treatment of skin infections.

    Science.gov (United States)

    Yeoh, Daniel K; Anderson, Aleisha; Cleland, Gavin; Bowen, Asha C

    2017-07-01

    Complications of scabies and impetigo such as glomerulonephritis and invasive bacterial infection in Australian Aboriginal children remain significant problems and the overall global burden of disease attributable to these skin infections remains high despite the availability of effective treatment. We hypothesised that one factor contributing to this high burden is that skin infection is under-recognised and hence under-treated, in settings where prevalence is high. We conducted a prospective, cross-sectional study to assess the burden of scabies, impetigo, tinea and pediculosis in children admitted to two regional Australian hospitals from October 2015 to January 2016. A retrospective chart review of patients admitted in November 2014 (mid-point of the prospective data collection in the preceding year) was performed. Prevalence of documented skin infection was compared in the prospective and retrospective population to assess clinician recognition and treatment of skin infections. 158 patients with median age 3.6 years, 74% Aboriginal, were prospectively recruited. 77 patient records were retrospectively reviewed. Scabies (8.2% vs 0.0%, OR N/A, p = 0.006) and impetigo (49.4% vs 19.5%, OR 4.0, 95% confidence interval [CI] 2.1-7.7) were more prevalent in the prospective analysis. Skin examination was only documented in 45.5% of cases in the retrospective review. Patients in the prospective analysis were more likely to be prescribed specific treatment for skin infection compared with those in the retrospective review (31.6% vs 5.2%, OR 8.5, 95% CI 2.9-24.4). Scabies and impetigo infections are under-recognised and hence under-treated by clinicians. Improving the recognition and treatment of skin infections by clinicians is a priority to reduce the high burden of skin infection and subsequent sequelae in paediatric populations where scabies and impetigo are endemic.

  1. Catalogue of EDF's technical specifications (HN reference). Centre of standardization; Catalogue des specifications techniques EDF (reference HN). Centre de normalisation

    Energy Technology Data Exchange (ETDEWEB)

    1998-12-31

    This document, edited by Electricite de France (EdF), is a catalogue of the French standard documents relative to any type of electrical material and equipment, containing the technical specifications of these materials and equipment. A brief description of these specifications is given for each type of material or equipment listed. (J.S.)

  2. Netazepide, a gastrin receptor antagonist, normalises tumour biomarkers and causes regression of type 1 gastric neuroendocrine tumours in a nonrandomised trial of patients with chronic atrophic gastritis.

    Directory of Open Access Journals (Sweden)

    Andrew R Moore

    Autoimmune chronic atrophic gastritis (CAG) causes hypochlorhydria and hypergastrinaemia, which can lead to enterochromaffin-like (ECL) cell hyperplasia and gastric neuroendocrine tumours (type 1 gastric NETs). Most behave indolently, but some larger tumours metastasise. Antrectomy, which removes the source of the hypergastrinaemia, usually causes tumour regression. Non-clinical and healthy-subject studies have shown that netazepide (YF476) is a potent, highly selective and orally-active gastrin/CCK-2 receptor antagonist. Also, it is effective in animal models of ECL-cell tumours induced by hypergastrinaemia. To assess the effect of netazepide on tumour biomarkers, number and size in patients with type 1 gastric NETs, we studied 8 patients with multiple tumours and raised circulating gastrin and chromogranin A (CgA) concentrations in an open trial of oral netazepide for 12 weeks, with follow-up 12 weeks later. At 0, 6, 12 and 24 weeks, we carried out gastroscopy, counted and measured tumours, and took biopsies to assess abundances of several ECL-cell constituents. At 0, 3, 6, 9, 12 and 24 weeks, we measured circulating gastrin and CgA and assessed safety and tolerability. Netazepide was safe and well tolerated. Abundances of CgA (p<0.05), histidine decarboxylase (p<0.05) and matrix metalloproteinase-7 (p<0.10) were reduced at 6 and 12 weeks, but were raised again at follow-up. Likewise, plasma CgA was reduced at 3 weeks (p<0.01), remained so until 12 weeks, but was raised again at follow-up. Tumours were fewer and the size of the largest one was smaller (p<0.05) at 12 weeks, and remained so at follow-up. Serum gastrin was unaffected. The reduction in abundances, plasma CgA, and tumour number and size by netazepide shows that type 1 NETs are gastrin-dependent tumours. Failure of netazepide to increase serum gastrin further is consistent with achlorhydria. Netazepide is a potential new treatment for type 1 NETs. Longer, controlled trials are justified. European Union EudraCT database 2007-002916-24 https://www.clinicaltrialsregister.eu/ctr-search/search?query=2007-002916-24; ClinicalTrials.gov NCT01339169 http://clinicaltrials.gov/ct2/show/NCT01339169?term=yf476&rank=5.

  3. Characterisation and normalisation factors for life cycle impact assessment mined abiotic resources categories in South Africa - The manufacturing of catalytic converter exhaust systems as a case study

    CSIR Research Space (South Africa)

    Strauss, K

    2006-05-01

    ... of the most commonly produced minerals in South Africa is used as a basis to determine characterisation factors for a non-renewable mineral resources category. The average production of these minerals from 1991 to 2000 is compared to economically Demonstrated...

  4. The prothrombin time/international normalised ratio (PT/INR) line: derivation of local INR with commercial thromboplastins and coagulometers – two independent studies

    DEFF Research Database (Denmark)

    Poller, Leon; Ibrahim, S.; Keown, M.

    2011-01-01

    Background: The WHO scheme for prothrombin time (PT) standardization has been limited in application, because of its difficulties in implementation, particularly the need for mandatory manual PT testing and for local provision of thromboplastin international reference preparations (IRP). Methods: ... thromboplastins and coagulometers. INRs were compared with manual certified values with thromboplastin IRP from expert centres, and in the second study also with INRs from local ISI calibrations. Results: In the first study with the PT/INR Line, 8.7% deviation from certified INRs was reduced to 1.1% with human reagents, and from 7.0% to 2.6% with rabbit reagents. In the second study, deviation was reduced from 11.2% to 0.4% with human reagents by both local ISI calibration and the PT/INR Line. With rabbit reagents, 10.4% deviation was reduced to 1.1% with both procedures; 4.9% deviation was reduced to 0.5...
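    For reference, the INR itself is derived from a measured prothrombin time by the standard WHO formula, using the mean normal prothrombin time (MNPT) and the thromboplastin's International Sensitivity Index (ISI). A minimal sketch with illustrative values, not trial data:

```python
def inr(patient_pt, mnpt, isi):
    """International Normalised Ratio from a measured prothrombin time:
    INR = (PT / MNPT) ** ISI, where MNPT is the mean normal prothrombin
    time and ISI is the International Sensitivity Index of the local
    thromboplastin reagent (the standard WHO formula).
    """
    return (patient_pt / mnpt) ** isi
```

    The dependence on a locally calibrated ISI is precisely what the PT/INR Line approach seeks to simplify: deviations between locally derived and certified INRs measure how well that calibration step has been replaced.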

  5. Iron concentration in breast milk normalised within one week of a single high-dose infusion of iron isomaltoside in randomised controlled trial

    DEFF Research Database (Denmark)

    Holm, Charlotte; Thomsen, Lars Lykke; Nørgaard, Astrid

    2017-01-01

    AIM: We compared the iron concentration in breast milk after a single high dose of intravenous iron isomaltoside or daily oral iron for postpartum haemorrhage. METHODS: In this randomised controlled trial, the women were allocated a single dose of intravenous 1,200 mg iron isomaltoside or oral iron... The mean (standard deviation) iron concentrations in breast milk in the intravenous and oral groups were 0.72 ± 0.27 mg/L and 0.40 ± 0.18 mg/L at three days (p...) ... birth. CONCLUSION: A single high...

  6. THE CONTROL OF INTERNATIONAL NORMALISED RATIO IN PATIENTS WITH ATRIAL FIBRILLATION TREATED WITH WARFARIN IN OUTPATIENT AND HOSPITAL SETTINGS: DATA FROM RECVASA REGISTRIES

    Directory of Open Access Journals (Sweden)

    M. M. Loukianov

    2018-01-01

    Aim: To study, in the RECVASA registries, the availability of data on the international normalized ratio (INR) and the achievement of its target values in outpatient and hospital practice in patients with atrial fibrillation (AF) receiving anticoagulant therapy with warfarin. Material and methods: Data on INR control and the frequency of achievement of its target values at the outpatient and hospital stages were analyzed in the RECVASA (Ryazan) and RECVASA FP (Yaroslavl) outpatient registries, as well as in the RECVASA FP hospital registries (Moscow, Kursk, Tula), in 817 patients (46.9% men, age 68.5±9.6 years) with AF and prescribed anticoagulant therapy with warfarin. Results: INR was determined in 689 (84.3%) of 817 patients. INR values were monitored during therapy with warfarin in the RECVASA (Ryazan) and RECVASA FP (Yaroslavl) outpatient registries in 73.7% and 77.7% of patients, respectively, and in the RECVASA FP hospital registries in 95.8% (Moscow), 81.3% (Tula) and 93.5% (Kursk). The target level of INR (2.0-3.0) was achieved in a minority of patients with AF during treatment with warfarin: in Ryazan in 26.3% of cases; Yaroslavl, 38.3%; Kursk, 34.8%; Moscow, 39.5%; Tula, 26.3%. INR control during warfarin therapy in patients in the hospital registries was performed significantly more often (p<0.05) at the hospital stage than at the prehospital stage (in Kursk 2.3 times more often, in Moscow 2.6 times, in Tula 1.8 times). The target level of INR in the hospital was achieved significantly more often (p<0.05) than before hospitalization (Moscow and Kursk), but no significant differences were found in the RECVASA FP (Tula) register (p=0.08). INR was monitored in 94.9% of the patients; however, the target values of this indicator were achieved in only 33% of cases in the sample studied in the RECVASA FP (Moscow) registry, according to a survey of 39 patients with AF who continued to receive warfarin 2.6±0.8 years after discharge from the hospital. Conclusion: INR was monitored in 74-96% of patients with AF treated with warfarin and included in the RECVASA and RECVASA FP registries. Target levels of INR were achieved in only 26-39% of patients. INR was monitored, with achievement of its target levels, more often at the hospital stage of treatment than before hospitalization, and more often than in the outpatient registries. In practical public health, in patients with AF treated with warfarin it is fundamentally important to monitor INR and to increase the frequency of achieving its target values, at which the risk of cardioembolic stroke and other thromboembolic complications is proven to be reduced.

  7. Compressed air injection technique to standardize block injection pressures : [La technique d'injection d'air comprimé pour normaliser les pressions d'injection d'un blocage nerveux].

    Science.gov (United States)

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below the 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression were estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will remain substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
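The pressure figures in this abstract follow directly from Boyle's law (P1·V1 = P2·V2 at constant temperature). A minimal sketch, assuming an initial absolute pressure of one atmosphere (760 mmHg); the function name is illustrative:

```python
# Boyle's law sketch: net (gauge) pressure generated when a trapped air
# volume in a saline-filled syringe is compressed to a fraction of its
# initial volume. Assumes isothermal ideal-gas behaviour and an initial
# absolute pressure of one atmosphere (760 mmHg).

P_ATM_MMHG = 760.0  # atmospheric (absolute) pressure

def gauge_pressure(compression: float) -> float:
    """Gauge pressure (mmHg) after compressing the trapped air to
    (1 - compression) of its initial volume."""
    remaining = 1.0 - compression          # V2 / V1
    p_abs = P_ATM_MMHG / remaining         # P2 = P1 * V1 / V2
    return p_abs - P_ATM_MMHG              # net pressure above atmospheric

print(gauge_pressure(0.50))                # 760.0 mmHg at 50% compression
print(gauge_pressure(0.50) < 1293.0)       # below the nerve-injury threshold
```

At 50% compression the ideal-gas prediction is 760 mmHg net, consistent with the measured estimate of 744.8 mmHg and well below the 1293 mmHg threshold cited above.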

  8. Specificity of the Acute Tryptophan and Tyrosine Plus Phenylalanine Depletion and Loading Tests Part II: Normalisation of the Tryptophan and the Tyrosine Plus Phenylalanine to Competing Amino Acid Ratios in a New Control Formulation

    Directory of Open Access Journals (Sweden)

    Abdulla A.-B. Badawy

    2010-06-01

    Full Text Available Current formulations for acute tryptophan (Trp) or tyrosine (Tyr) plus phenylalanine (Phe) depletion and loading cause undesirable decreases in ratios of Trp or Tyr + Phe to competing amino acids (CAA), thus undermining the specificities of these tests. Branched-chain amino acids (BCAA) cause these unintended decreases, and lowering their content in a new balanced control formulation in the present study led to normalisation of all ratios. Four groups (n = 12 each) of adults each received one of four 50 g control formulations, with 0% (traditional), 20%, 30%, or 40% less of the BCAA. The free and total [Trp]/[CAA] and [Phe + Tyr]/[BCAA + Trp] ratios all decreased significantly during the first 5 h following the traditional formulation, but were fully normalised by the formulation containing 40% less of the BCAA. We recommend the latter as a balanced control formulation and propose adjustments in the depletion and loading formulations to enhance their specificities for 5-HT and the catecholamines.

  9. Electric power industrial applications comply with electromagnetic field thresholds set by the CENELEC standards; Les applications industrielles de l'électricité respectent les seuils de champ électromagnétique imposés par la normalisation Cenelec

    Energy Technology Data Exchange (ETDEWEB)

    Hutzler, B.; Deschamps, F. [Laboratoires de Genie Electrique (France)

    1996-12-31

    Electricite de France (EDF) has carried out theoretical modeling and in-situ measurements in its own plants and in industrial plants, in order to verify the compliance of electromagnetic field exposure with the future European CENELEC standard on electromagnetic exposure of employees. Except in the immediate proximity of certain equipment, in locations which are generally not workplaces, the exposure values are lower than the specified limits. Field reduction techniques are proposed for critical equipment.

  10. Elevated Plasma Soluble CD14 and Skewed CD16+ Monocyte Distribution Persist despite Normalisation of Soluble CD163 and CXCL10 by Effective HIV Therapy: A Changing Paradigm for Routine HIV Laboratory Monitoring?

    Science.gov (United States)

    Castley, Alison; Berry, Cassandra; French, Martyn; Fernandez, Sonia; Krueger, Romano; Nolan, David

    2014-01-01

    Objective: We investigated plasma and flow cytometric biomarkers of monocyte status that have been associated with prognostic utility in HIV infection and other chronic inflammatory diseases, comparing 81 HIV+ individuals with a range of treatment outcomes to a group of 21 healthy control blood donors. Our aim was to develop and optimise monocyte assays that combine biological relevance, clinical utility, and ease of adoption into routine HIV laboratory practice. Design: Cross-sectional evaluation of concurrent plasma and whole blood samples. Methods: A flow cytometry protocol was developed comprising single-tube CD45, CD14, CD16, CD64, CD163, CD143 analysis with appropriately matched isotype controls. Plasma levels of soluble CD14 (sCD14), soluble CD163 (sCD163) and CXCL10 were measured by ELISA. Results: HIV status was associated with significantly increased expression of CD64, CD143 and CD163 on CD16+ monocytes, irrespective of the virological response to HIV therapy. Plasma levels of sCD14, sCD163 and CXCL10 were also significantly elevated in association with viremic HIV infection. Plasma sCD163 and CXCL10 levels were restored to healthy control levels by effective antiretroviral therapy, while sCD14 levels remained elevated despite virological suppression (p<0.001). Conclusions: Flow cytometric and plasma biomarkers of monocyte activation indicate an ongoing systemic inflammatory response to HIV infection, characterised by persistent alterations of CD16+ monocyte expression profiles and elevated sCD14 levels, that are not corrected by antiretroviral therapy and are likely to be prognostically significant. In contrast, sCD163 and CXCL10 levels declined on antiretroviral therapy, suggesting multiple activation pathways revealed by these biomarkers. 
Incorporation of these assays into routine clinical care is feasible and warrants further consideration, particularly in light of emerging therapeutic strategies that specifically target innate immune activation in HIV infection. PMID:25544986

  11. Elevated plasma soluble CD14 and skewed CD16+ monocyte distribution persist despite normalisation of soluble CD163 and CXCL10 by effective HIV therapy: a changing paradigm for routine HIV laboratory monitoring?

    Directory of Open Access Journals (Sweden)

    Alison Castley

    Full Text Available OBJECTIVE: We investigated plasma and flow cytometric biomarkers of monocyte status that have been associated with prognostic utility in HIV infection and other chronic inflammatory diseases, comparing 81 HIV+ individuals with a range of treatment outcomes to a group of 21 healthy control blood donors. Our aim is to develop and optimise monocyte assays that combine biological relevance, clinical utility, and ease of adoption into routine HIV laboratory practice. DESIGN: Cross-sectional evaluation of concurrent plasma and whole blood samples. METHODS: A flow cytometry protocol was developed comprising single-tube CD45, CD14, CD16, CD64, CD163, CD143 analysis with appropriately matched isotype controls. Plasma levels of soluble CD14 (sCD14), soluble CD163 (sCD163) and CXCL10 were measured by ELISA. RESULTS: HIV status was associated with significantly increased expression of CD64, CD143 and CD163 on CD16+ monocytes, irrespective of the virological response to HIV therapy. Plasma levels of sCD14, sCD163 and CXCL10 were also significantly elevated in association with viremic HIV infection. Plasma sCD163 and CXCL10 levels were restored to healthy control levels by effective antiretroviral therapy while sCD14 levels remained elevated despite virological suppression (p<0.001). CONCLUSIONS: Flow cytometric and plasma biomarkers of monocyte activation indicate an ongoing systemic inflammatory response to HIV infection, characterised by persistent alterations of CD16+ monocyte expression profiles and elevated sCD14 levels, that are not corrected by antiretroviral therapy and are likely to be prognostically significant. In contrast, sCD163 and CXCL10 levels declined on antiretroviral therapy, suggesting multiple activation pathways revealed by these biomarkers. 
Incorporation of these assays into routine clinical care is feasible and warrants further consideration, particularly in light of emerging therapeutic strategies that specifically target innate immune activation in HIV infection.

  12. Optimisation of hardness and tensile strength of friction stir welded ...

    African Journals Online (AJOL)

    DR OKE

    adopted to develop mathematical model between the response and process parameters. .... Table 3 Normalised values and Deviational Sequence ... If the expectancy is the smaller the better, then the original sequence should be normalised ...

  13. Algebraic Bethe ansatz for the quantum group invariant open XXZ chain at roots of unity

    Directory of Open Access Journals (Sweden)

    Azat M. Gainutdinov

    2016-08-01

    Full Text Available For generic values of q, all the eigenvectors of the transfer matrix of the Uq sl(2)-invariant open spin-1/2 XXZ chain with finite length N can be constructed using the algebraic Bethe ansatz (ABA) formalism of Sklyanin. However, when q is a root of unity (q = e^{iπ/p} with integer p ≥ 2), the Bethe equations acquire continuous solutions, and the transfer matrix develops Jordan cells. Hence, there appear eigenvectors of two new types: eigenvectors corresponding to continuous solutions (exact complete p-strings), and generalized eigenvectors. We propose general ABA constructions for these two new types of eigenvectors. We present many explicit examples, and we construct complete sets of (generalized) eigenvectors for various values of p and N.

  14. Normalizing urban inequality : Cinematic imaginaries of difference in postcolonial Amsterdam = Normalisation de l’inégalité urbaine: imaginaires cinématiques de la différence dans l’Amsterdam postcolonial = La normalización de la desigualdad urbana: imaginarios cinematográficos de diferencia en la Ámsterdam poscolonial

    NARCIS (Netherlands)

    van Gent, W.; Jaffe, R.

    2017-01-01

    Combining insights from critical urban studies with geographies of race and racism, this article examines the role of spatial imaginaries in normalizing urban inequalities, showing how such imaginaries make the associations between places and populations appear natural. We extend analyses of the

  15. Riesz basis for strongly continuous groups.

    NARCIS (Netherlands)

    Zwart, Heiko J.

    Given a Hilbert space and the generator of a strongly continuous group on this Hilbert space: if the eigenvalues of the generator have a uniform gap, and if the span of the corresponding eigenvectors is dense, then these eigenvectors form a Riesz basis (or unconditional basis) of the Hilbert space.

  16. Deflation of Eigenvalues for GMRES in Lattice QCD

    International Nuclear Information System (INIS)

    Morgan, Ronald B.; Wilcox, Walter

    2002-01-01

    Versions of GMRES with deflation of eigenvalues are applied to lattice QCD problems. Approximate eigenvectors corresponding to the smallest eigenvalues are generated at the same time that linear equations are solved. The eigenvectors improve convergence for the linear equations, and they help solve other right-hand sides.

  17. Perfect observables for the hierarchical non-linear O(N)-invariant σ-model

    International Nuclear Information System (INIS)

    Wieczerkowski, C.; Xylander, Y.

    1995-05-01

    We compute the moving eigenvalues and eigenvectors of the linear renormalization group transformation for observables along the renormalized trajectory of the hierarchical non-linear O(N)-invariant σ-model by means of perturbation theory in the running coupling constant. Moving eigenvectors are defined as solutions to a Callan-Symanzik type equation. (orig.)

  18. An object recognition method based on fuzzy theory and BP networks

    Science.gov (United States)

    Wu, Chuan; Zhu, Ming; Yang, Dong

    2006-01-01

    It is difficult to choose eigenvectors when a neural network recognizes objects. If the eigenvectors are not chosen appropriately, eigenvectors of different objects may be similar, or eigenvectors of the same object may differ under scaling, shifting and rotation. In order to solve this problem, the image is edge-detected, the membership function is reconstructed, and a new threshold segmentation method based on fuzzy theory is proposed to obtain the binary image. The moment invariants of the binary image are extracted and normalized. Since a moment invariant is sometimes too small to be used effectively in calculation, the logarithm of each moment invariant is taken as the input eigenvector of the BP network. The experimental results demonstrate that the proposed approach can recognize objects effectively, correctly and quickly.
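The log-scaling step described above (taking logarithms of moment invariants that are otherwise too small to use as network inputs) can be sketched as follows. This is a hedged illustration using the first two Hu invariants computed from scale-normalised central moments, not the paper's exact feature pipeline; `hu_features`, the numerical floor, and the test images are illustrative:

```python
import numpy as np

def hu_features(img: np.ndarray) -> np.ndarray:
    """First two Hu moment invariants of a binary image, log-scaled so that
    very small invariants remain numerically usable as network inputs."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                        # zeroth raw moment (pixel count)
    dx, dy = xs - xs.mean(), ys - ys.mean()

    def eta(p, q):
        """Scale-normalised central moment eta_pq = mu_pq / m00^(1+(p+q)/2)."""
        return np.sum(dx ** p * dy ** q) / m00 ** (1 + (p + q) / 2)

    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    # signed log10, with a small floor to avoid log of zero
    return np.array([-np.sign(h) * np.log10(abs(h) + 1e-30) for h in (hu1, hu2)])

# Two axis-aligned squares at different scales: the invariants nearly agree.
square = np.zeros((64, 64)); square[10:30, 10:30] = 1
big = np.zeros((128, 128)); big[20:60, 20:60] = 1
print(hu_features(square), hu_features(big))
```

The near-equality of the two feature vectors illustrates the scale invariance that makes these quantities useful as recognition inputs.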

  19. Resonances, scattering theory and rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Parravicini, G.; Gorini, V.; Sudarshan, E.C.G.

    1979-01-01

    The problem of decaying states and resonances is examined within the framework of scattering theory in a rigged Hilbert space formalism. The stationary free, in, and out eigenvectors of formal scattering theory, which have a rigorous setting in rigged Hilbert space, are considered to be analytic functions of the energy eigenvalue. The value of these analytic functions at any point of regularity, real or complex, is an eigenvector with eigenvalue equal to the position of the point. The poles of the eigenvector families give rise to other eigenvectors of the Hamiltonian; the singularities of the out eigenvector family are the same as those of the continued S matrix, so that resonances are seen as eigenvectors of the Hamiltonian with eigenvalue equal to their location in the complex energy plane. Cauchy's theorem then provides for expansions in terms of complete sets of eigenvectors with complex eigenvalues of the Hamiltonian. Applying such expansions to the survival amplitude of a decaying state, one finds that resonances give discrete contributions with purely exponential time behavior; the background is of course present, but explicitly separated. The resolvent of the Hamiltonian, restricted to the nuclear space appearing in the rigged Hilbert space, can be continued across the absolutely continuous spectrum; the singularities of the continuation are the same as those of the out eigenvectors. The free, in and out eigenvectors with complex eigenvalues and those corresponding to resonances can be approximated by physical vectors in the Hilbert space, as plane waves can. The need for some further physical information in addition to the specification of the total Hamiltonian is apparent in the proposed framework. The formalism is applied to the Lee-Friedrichs model. 48 references

  20. (measured as NDVI) over mine tailings at Mhangura Copper Mine

    African Journals Online (AJOL)

    chari

    Remote sensing techniques are increasingly being employed in monitoring environmental ... normalised difference vegetation index (NDVI), remote sensing, tailings ..... rehabilitation monitoring by adding landscape function characteristics.
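The normalised difference vegetation index mentioned in this record is the standard band ratio NDVI = (NIR − Red) / (NIR + Red). A minimal sketch (the band arrays and reflectance values below are illustrative):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised difference vegetation index: (NIR - Red) / (NIR + Red).
    Values lie in [-1, 1]; dense healthy vegetation is typically > 0.3."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # guard against division by zero where both bands are dark
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Illustrative 2x2 reflectance patches: vegetated vs. bare/tailings pixels.
nir = np.array([[0.6, 0.1], [0.5, 0.05]])
red = np.array([[0.1, 0.1], [0.1, 0.05]])
print(ndvi(nir, red))
```

In rehabilitation monitoring, the per-pixel NDVI would be tracked over time; the thresholds and patch values here are only illustrative.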

  1. Low complexity non-iterative coordinated beamforming in 2-user broadcast channels

    KAUST Repository

    Park, Kihong

    2010-10-01

    We propose a new non-iterative coordinated beamforming scheme to obtain full multiplexing gain in 2-user MIMO systems. In order to find the beamforming and combining matrices, we solve a generalized eigenvector problem and describe how to find generalized eigenvectors according to the Gaussian broadcast channels. Selected simulation results show that the proposed method yields the same sum-rate performance as the iterative coordinated beamforming method, while maintaining lower complexity by non-iterative computation of the beamforming and combining matrices. We also show that the proposed method can easily exploit selective gain by choosing the best combination of generalized eigenvectors. © 2006 IEEE.
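The core computation described here, finding generalized eigenvectors for a pair of channel matrices, can be sketched with a standard generalized eigensolver. The channel construction below is a hypothetical stand-in, not the paper's exact broadcast-channel formulation:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical 2-user MIMO setting: 4 TX antennas, 4 RX antennas per user.
rng = np.random.default_rng(0)
H1 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H2 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Generalized eigenproblem A v = lambda B v built from the channel Gram
# matrices; candidate beamforming directions are generalized eigenvectors.
A = H1.conj().T @ H1
B = H2.conj().T @ H2 + 1e-9 * np.eye(4)   # tiny ridge guards invertibility
w, V = eig(A, B)

# Each column of V satisfies the generalized eigenvector relation.
residuals = [np.linalg.norm(A @ v - lam * (B @ v)) for lam, v in zip(w, V.T)]
print(max(residuals))
```

Choosing among the resulting eigenvectors (e.g. by the associated generalized eigenvalue) is what gives the selection gain mentioned in the abstract.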

  2. Low complexity non-iterative coordinated beamforming in 2-user broadcast channels

    KAUST Repository

    Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim

    2010-01-01

    We propose a new non-iterative coordinated beamforming scheme to obtain full multiplexing gain in 2-user MIMO systems. In order to find the beamforming and combining matrices, we solve a generalized eigenvector problem and describe how to find generalized eigenvectors according to the Gaussian broadcast channels. Selected simulation results show that the proposed method yields the same sum-rate performance as the iterative coordinated beamforming method, while maintaining lower complexity by non-iterative computation of the beamforming and combining matrices. We also show that the proposed method can easily exploit selective gain by choosing the best combination of generalized eigenvectors. © 2006 IEEE.

  3. (WRFDA) for WRF non-hydrostatic mesoscale model

    Indian Academy of Sciences (India)

    Sujata Pattanayak

    2018-05-22

    May 22, 2018 ... Keywords. WRF-NMM; WRFDA; single observation test; eigenvalues; eigenvector; correlation; tropical .... The per- turbation variables here are defined as deviations ..... Synop, Sound, Metar, Pilot, Buoy, Ships, Airep,. Geoamv ...

  4. Matrices and transformations

    CERN Document Server

    Pettofrezzo, Anthony J

    1978-01-01

    Elementary, concrete approach: fundamentals of matrix algebra, linear transformation of the plane, application of properties of eigenvalues and eigenvectors to study of conics. Includes proofs of most theorems. Answers to odd-numbered exercises.

  5. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery

    Science.gov (United States)

    Ochilov, S.; Alam, M. S.; Bal, A.

    2006-05-01

    The Fukunaga-Koontz Transform (FKT) based technique offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In the FKT, feature selection is performed by transforming into a new space where the feature classes have complementary eigenvectors. A dimensionality reduction technique based on this complementary eigenvector analysis can be described for two classes, desired class and background clutter, such that each basis function best represents one class while carrying the least amount of information from the other class. By selecting the few eigenvectors that are most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden via the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
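The complementary-eigenvector property that this technique relies on can be checked numerically: after whitening the summed class covariances, the two transformed covariances share eigenvectors and their eigenvalues pair up as (μ, 1 − μ). A minimal sketch with synthetic two-class data (shapes and scales are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.standard_normal((500, 6)) @ np.diag([3, 2, 1, 1, 1, 1])  # "target" class
X2 = rng.standard_normal((500, 6)) @ np.diag([1, 1, 1, 1, 2, 3])  # "clutter" class

S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)

# Whiten the summed covariance: P maps S1 + S2 to the identity.
lam, Phi = np.linalg.eigh(S1 + S2)
P = Phi @ np.diag(lam ** -0.5)

K1 = P.T @ S1 @ P
K2 = P.T @ S2 @ P
# Key FKT property: K1 + K2 = I, so K1 and K2 share eigenvectors and their
# eigenvalues pair up as (mu, 1 - mu): a basis vector capturing much of the
# target class's energy carries little of the clutter's, and vice versa.
mu, U = np.linalg.eigh(K1)
print(mu)  # eigenvalues in (0, 1); keep those near 1 for the target class
```

Keeping only the eigenvectors with μ near 1 is the dimensionality reduction step the abstract describes.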

  6. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
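The conical structure described above can be illustrated by simulation: under a single-spike covariance model, the angle between the leading sample eigenvector and its population counterpart widens as the ratio of dimension to (sample size × spike) grows. A hedged sketch (all parameters are illustrative):

```python
import numpy as np

def sample_pop_angle(d: int, n: int, spike: float, rng) -> float:
    """Angle (degrees) between the leading sample and population eigenvectors
    under a one-spike model: population covariance I + spike * e1 e1^T."""
    X = rng.standard_normal((n, d))
    X[:, 0] *= np.sqrt(1.0 + spike)       # inject the spike along e1
    S = X.T @ X / n                       # sample covariance (mean zero)
    _, vecs = np.linalg.eigh(S)
    v = vecs[:, -1]                       # leading sample eigenvector
    return float(np.degrees(np.arccos(min(abs(v[0]), 1.0))))  # angle to e1

rng = np.random.default_rng(2)
# Fixed sample size n: the angle widens as d / (n * spike) grows.
for d in (10, 100, 1000):
    print(d, round(sample_pop_angle(d, n=50, spike=20.0, rng=rng), 1))
```

This is a single-spike toy version of the multiple-component models analysed in the paper.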

  7. Linear algebra

    CERN Document Server

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  8. Perron–Frobenius theorem for nonnegative multilinear forms and extensions

    OpenAIRE

    Friedland, S.; Gaubert, S.; Han, L.

    2013-01-01

    We prove an analog of Perron-Frobenius theorem for multilinear forms with nonnegative coefficients, and more generally, for polynomial maps with nonnegative coefficients. We determine the geometric convergence rate of the power algorithm to the unique normalized eigenvector.
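In the matrix case, the power algorithm whose convergence rate this paper studies is classical power iteration converging geometrically to the normalized Perron eigenvector. A minimal sketch for a positive matrix (the sum-to-one normalisation and stopping rule are illustrative choices):

```python
import numpy as np

def perron_power(A: np.ndarray, tol: float = 1e-12, max_iter: int = 10_000):
    """Power algorithm: normalized Perron eigenvector of a nonnegative matrix.
    Converges geometrically when the Perron root is simple (e.g. A positive)."""
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(max_iter):
        y = A @ x
        y /= y.sum()                      # normalise entries to sum to 1
        if np.linalg.norm(y - x, 1) < tol:
            break
        x = y
    rho = (A @ y)[0] / y[0]               # estimate of the Perron root
    return rho, y

A = np.array([[2.0, 1.0], [1.0, 3.0]])
rho, v = perron_power(A)
print(rho, v)   # Perron root (5 + sqrt(5))/2 and its sum-normalised eigenvector
```

The geometric convergence rate is governed by the ratio of the second eigenvalue to the Perron root, the quantity the paper determines in the multilinear setting.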

  9. Computational analysis of chain flexibility and fluctuations in Rhizomucor miehei lipase

    DEFF Research Database (Denmark)

    Peters, Günther H.J.; Bywater, R. P.

    1999-01-01

    We have performed molecular dynamics simulation of Rhizomucor miehei lipase (Rml) with explicit water molecules present. The simulation was carried out in periodic boundary conditions and conducted for 1.2 ns in order to determine the concerted protein dynamics and to examine how well the essential...... motions are preserved along the trajectory. Protein motions are extracted by means of the essential dynamics analysis method for different lengths of the trajectory. Motions described by eigenvector 1 converge after approximately 200 ps and only small changes are observed with increasing simulation time....... Protein dynamics along eigenvectors with larger indices, however, change with simulation time and generally, with increasing eigenvector index, longer simulation times are required for observing similar protein motions (along a particular eigenvector). Several regions in the protein show relatively large...

  10. Acceleration techniques for the discrete ordinate method

    International Nuclear Information System (INIS)

    Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego; Trautmann, Thomas

    2013-01-01

    In this paper we analyze several acceleration techniques for the discrete ordinate method with matrix exponential and the small-angle modification of the radiative transfer equation. These techniques include the left eigenvectors matrix approach for computing the inverse of the right eigenvectors matrix, the telescoping technique, and the method of false discrete ordinate. The numerical simulations have shown that, on average, the relative speedups of the left eigenvector matrix approach and the telescoping technique are about 15% and 30%, respectively. -- Highlights: ► We present the left eigenvector matrix approach. ► We analyze the method of false discrete ordinate. ► The telescoping technique is applied to the matrix operator method. ► The considered techniques accelerate the computations by 20% on average.
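The left eigenvector matrix approach mentioned here rests on a standard identity: for a diagonalizable matrix, suitably scaled left eigenvectors form the inverse of the right eigenvector matrix, so no explicit inversion is needed. A sketch of the identity (the random test matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))

# Right eigen-decomposition: A V = V diag(w).
w, V = np.linalg.eig(A)

# Left eigenvectors of A are right eigenvectors of A^T (same eigenvalues).
wl, Ul = np.linalg.eig(A.T)
order = [int(np.argmin(np.abs(wl - lam))) for lam in w]  # align orderings
U = Ul[:, order].T

# Biorthogonality: U @ V is diagonal; rescaling its rows makes U @ V = I,
# so U is exactly inv(V), obtained from left eigenvectors instead of an
# explicit matrix inversion.
U = U / np.diag(U @ V)[:, None]
print(np.linalg.norm(U @ V - np.eye(5)))        # ~ machine precision
print(np.linalg.norm(U @ A @ V - np.diag(w)))   # diagonalisation check
```

This assumes simple (non-degenerate) eigenvalues, which is the generic case; the eigenvalue-matching step is only needed because the two solver calls may order the spectrum differently.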

  11. Spectral segmentation of polygonized images with normalized cuts

    Energy Technology Data Exchange (ETDEWEB)

    Matsekh, Anna [Los Alamos National Laboratory; Skurikhin, Alexei [Los Alamos National Laboratory; Rosten, Edward [UNIV OF CAMBRIDGE

    2009-01-01

    We analyze numerical behavior of the eigenvectors corresponding to the lowest eigenvalues of the generalized graph Laplacians arising in the Normalized Cuts formulations of the image segmentation problem on coarse polygonal grids.
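The generalized graph-Laplacian eigenproblem underlying Normalized Cuts is (D − W) y = λ D y, with the eigenvector of the second-smallest eigenvalue giving the two-way partition. A minimal sketch on a toy graph (the weights are illustrative, not a polygonal image grid):

```python
import numpy as np
from scipy.linalg import eigh

# Tiny weighted graph: two clusters {0,1,2} and {3,4,5} joined by a weak bridge.
W = np.zeros((6, 6))
for i, j, wgt in [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0),
                  (3, 4, 1.0), (3, 5, 1.0), (4, 5, 1.0), (2, 3, 0.05)]:
    W[i, j] = W[j, i] = wgt

D = np.diag(W.sum(axis=1))
# Normalized Cuts relaxation: generalized eigenproblem (D - W) y = lambda D y.
vals, vecs = eigh(D - W, D)
fiedler = vecs[:, 1]            # eigenvector of the second-smallest eigenvalue
labels = fiedler > 0            # sign of the entries gives the two-way cut
print(labels)
```

The smallest eigenvalue is 0 (constant eigenvector); it is the next eigenvector whose numerical behaviour matters for segmentation quality, which is what the record above analyses.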

  12. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    Energy Technology Data Exchange (ETDEWEB)

    Dhou, S; Williams, C [Brigham and Women’s Hospital / Harvard Medical School, Boston, MA (United States); Ionascu, D [William Beaumont Hospital, Royal Oak, MI (United States); Lewis, J [University of California at Los Angeles, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. 
This project was supported
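The comparison criteria described in this abstract (component-wise RMS difference, dot product, and EMN between eigenvectors of two PCA motion models) can be sketched as follows; the synthetic DVFs below stand in for the outputs of two DIR algorithms and are purely illustrative:

```python
import numpy as np

def motion_model(dvfs: np.ndarray, k: int = 3) -> np.ndarray:
    """PCA motion model: rows of `dvfs` are flattened displacement vector
    fields; returns the first k eigenvectors (principal components)."""
    centered = dvfs - dvfs.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:k]

def emn(v: np.ndarray, ref: np.ndarray) -> float:
    """Euclidean Model Norm: dot products of eigenvector `v` with the first
    three reference eigenvectors, summed in quadrature."""
    return float(np.sqrt(np.sum((ref[:3] @ v) ** 2)))

# Hypothetical data: 10 breathing phases, 300-dimensional DVFs built from
# three underlying motion modes; the two models differ only by small noise,
# standing in for DVFs produced by two different DIR algorithms.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 2.0 * np.pi, 10)
modes = rng.standard_normal((3, 300))
amps = np.stack([5.0 * np.sin(t), 3.0 * np.cos(t), 1.5 * np.sin(2.0 * t)], axis=1)
dvfs_a = amps @ modes + 0.01 * rng.standard_normal((10, 300))
dvfs_b = amps @ modes + 0.01 * rng.standard_normal((10, 300))

Ma, Mb = motion_model(dvfs_a), motion_model(dvfs_b)
for i in range(3):
    b = Mb[i] * np.sign(Ma[i] @ Mb[i])          # resolve PCA sign ambiguity
    rms = np.sqrt(np.mean((Ma[i] - b) ** 2))    # component-wise difference
    dot = Ma[i] @ b                             # angular agreement
    print(i, round(rms, 4), round(dot, 4), round(emn(Mb[i], Ma), 4))
```

Real DVFs would come from registering each 4DCT phase to a reference image; the sign alignment is needed because PCA eigenvectors are defined only up to sign.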

  13. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    International Nuclear Information System (INIS)

    Dhou, S; Williams, C; Ionascu, D; Lewis, J

    2016-01-01

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. 
This project was supported

  14. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  15. Hints on the Broad Line Region Structure of Quasars at High and Low Luminosities

    Directory of Open Access Journals (Sweden)

    Marziani Paola

    2011-09-01

    Full Text Available Quasars show a considerable spectroscopic diversity. However, the variety of quasar spectra at low redshifts is non-random: a principal component analysis applied to large samples customarily identifies two main eigenvectors. In this contribution we show that the range of quasar optical spectral properties observed at low-z and associated with the first eigenvector is preserved up to z ≈ 2 in a sample of high luminosity quasars. We also describe two major luminosity effects.

  16. Quantum damped oscillator I: Dissipation and resonances

    International Nuclear Information System (INIS)

    Chruscinski, Dariusz; Jurkowski, Jacek

    2006-01-01

    Quantization of a damped harmonic oscillator leads to the so-called Bateman's dual system. The corresponding Bateman's Hamiltonian, being a self-adjoint operator, displays a discrete family of complex eigenvalues. We show that they correspond to poles of the energy eigenvectors and of the corresponding resolvent operator when continued to the complex energy plane. Therefore, the corresponding generalized eigenvectors may be interpreted as resonant states which are responsible for the irreversible quantum dynamics of a damped harmonic oscillator

  17. The Perron-Frobenius Theorem for Markov Semigroups

    OpenAIRE

    Hijab, Omar

    2014-01-01

    Let $P^V_t$, $t\ge0$, be the Schrödinger semigroup associated to a potential $V$ and a Markov semigroup $P_t$, $t\ge0$, on $C(X)$. Existence is established of a left eigenvector and a right eigenvector corresponding to the spectral radius $e^{\lambda_0 t}$ of $P^V_t$, simultaneously for all $t\ge0$. This is derived with no compactness assumption on the semigroup operators.
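In finite dimensions the statement can be checked directly: a positive matrix has a spectral radius that is an eigenvalue with strictly positive left and right eigenvectors. A minimal numpy sketch (the matrix is an arbitrary illustrative example, chosen row-stochastic so the spectral radius is exactly 1):

```python
import numpy as np

# A small positive, row-stochastic matrix: Perron-Frobenius guarantees the
# spectral radius (here 1) is an eigenvalue with positive left/right eigenvectors.
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

def perron_pair(A):
    w, v = np.linalg.eig(A)
    i = np.argmax(w.real)            # spectral radius of a positive matrix
    right = np.abs(v[:, i].real)
    wl, vl = np.linalg.eig(A.T)      # left eigenvectors = right eigenvectors of A^T
    j = np.argmax(wl.real)
    left = np.abs(vl[:, j].real)
    return w[i].real, right / right.sum(), left / left.sum()

lam, h, mu = perron_pair(A)
print(round(lam, 6))                 # -> 1.0  (row sums are 1)
```

Both `h` and `mu` come out strictly positive, the finite-dimensional analogue of the semigroup result.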

  18. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Daskin, Ammar

    2016-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase...

  19. A Quantum Implementation Model for Artificial Neural Networks

    OpenAIRE

    Ammar Daskin

    2018-01-01

    The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the pha...
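The claim that such iterations preserve eigenvectors while flattening eigenvalues can be illustrated numerically. The sketch below is a stand-in, not the Widrow–Hoff rule itself: it uses the Newton–Schulz-style map W → 2W − W² (an assumption for illustration), which shares the stated property and converges to the projector onto the principal components.

```python
import numpy as np

# Stand-in for the learning formula (assumption): the map W -> 2W - W^2 leaves
# the eigenvectors of a symmetric weight matrix unchanged while driving every
# non-zero eigenvalue in (0, 2) to 1, so the limit is the projector onto the
# span of the principal components.

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
W = U @ np.diag([0.9, 0.5, 0.2, 0.0]) @ U.T   # rank-3 symmetric weight matrix

P = W.copy()
for _ in range(20):
    P = 2 * P - P @ P                          # eigenvalues flatten, eigenvectors fixed

# The limit is the orthogonal projector onto the non-zero eigenspace
expected = U[:, :3] @ U[:, :3].T
print(np.allclose(P, expected, atol=1e-6))     # -> True
```

The iteration count matters: near-zero eigenvalues introduced by rounding also grow under this map, so in practice one stops once the non-zero eigenvalues have converged.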

  20. Multibaseline Observations of the Occultation of Crab Nebula by the ...

    Indian Academy of Sciences (India)

    tribpo

    Observations of the radio source Crab Nebula were made at the time of transit during. June 1986 and 1987. The fringe amplitude V(S) for a baseline S was calibrated using the corresponding baseline fringe amplitude of radio source 3C123 or 3C134 and normalised to the preoccultation value V(O). Normalised fringe ...

  1. Fresh frozen plasma versus prothrombin complex concentrate in patients with intracranial haemorrhage related to vitamin K antagonists (INCH)

    DEFF Research Database (Denmark)

    Steiner, Thorsten; Poli, Sven; Griebe, Martin

    2016-01-01

    BACKGROUND: Haematoma expansion is a major cause of mortality in intracranial haemorrhage related to vitamin K antagonists (VKA-ICH). Normalisation of the international normalised ratio (INR) is recommended, but optimum haemostatic management is controversial. We assessed the safety and efficacy ...

  2. Volatility Determination in an Ambit Process Setting

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Graversen, Svend-Erik

    The probability limit behaviour of normalised quadratic variation is studied for a simple tempo-spatial ambit process, with particular regard to the question of volatility memorylessness.

  3. Treatment of dry age-related macular degeneration with dobesilate

    OpenAIRE

    Cuevas, P; Outeiriño, L A; Angulo, J; Giménez-Gallego, G

    2012-01-01

    The authors present anatomical and functional evidence of improvement in dry age-related macular degeneration after intravitreal treatment with dobesilate. Main outcome measures were normalisation of retinal structure and function, assessed by optical coherence tomography, fundus-monitored microperimetry, electrophysiology and visual acuity. The effect might be related to the normalisation of the outer retinal architecture.

  4. Download this PDF file

    African Journals Online (AJOL)

    Owner

    mRNA levels were expressed as relative copy number normalised against GAPDH mRNA. This normalisation against the housekeeping gene is valid only if both PCRs (HO-1 gene and housekeeping gene) proceed with the same efficiency. All data are expressed as the mean value and its standard deviation. Kolmogorov-Smirnov to ...

  5. J/$\\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\\sqrt{s_{\\rm NN}} = 5.02$ TeV

    OpenAIRE

    Adamová, D.; Aggarwal, Madan Mohan; Alam, Sk Noor; Biswas, Rathijit; Zardoshti, Nima; Zarochentsev, Andrey; Zavada, Petr; Zavyalov, Nikolay; Zbroszczyk, Hanna Paulina; Zhalov, Mikhail; Zhang, Haitao; Zhang, Xiaoming; Zhang, Yonghong; Chunhui, Zhang; Biswas, Saikat

    2018-01-01

    We report measurements of the inclusive J/ψ yield and average transverse momentum as a function of charged-particle pseudorapidity density dNch/dη in p-Pb collisions at √sNN = 5.02 TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single diffractive events. An increase of the normalised J/ψ yield with normalised dNch/dη, measured at mid-rapidity, is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative y...

  6. Multiscale finite element methods for high-contrast problems using local spectral basis functions

    KAUST Repository

    Efendiev, Yalchin

    2011-02-01

    In this paper we study multiscale finite element methods (MsFEMs) using spectral multiscale basis functions that are designed for high-contrast problems. Multiscale basis functions are constructed using eigenvectors of a carefully selected local spectral problem. This local spectral problem strongly depends on the choice of initial partition of unity functions. The resulting space enriches the initial multiscale space using eigenvectors of the local spectral problem. The eigenvectors corresponding to small, asymptotically vanishing, eigenvalues detect important features of the solutions that are not captured by the initial multiscale basis functions. Multiscale basis functions are constructed such that they span the eigenfunctions that correspond to these small, asymptotically vanishing, eigenvalues. We present a convergence study showing that the convergence rate (in energy norm) is proportional to (H/Λ*)^{1/2}, where Λ* is proportional to the smallest eigenvalue whose corresponding eigenvector is not included in the coarse space. Thus, we would like to reach a larger eigenvalue with a smaller coarse space. This is accomplished with a careful choice of initial multiscale basis functions and the setup of the eigenvalue problems. Numerical results are presented to back up our theoretical results and to show the higher accuracy of MsFEMs with spectral multiscale basis functions. We also present a hierarchical construction of the eigenvectors that provides CPU savings. © 2010.

  7. Polycytemie bij een patiënte met een uterusmyoom

    NARCIS (Netherlands)

    De Boer, Jolien P.; Velders, Gerjo; Aliredjo, Riena; Scheenjes, Eduard; Flinsenberg, Thijs W.H.

    2017-01-01

    Myomatous erythrocytosis syndrome (MES) is characterised by the combination of polycythaemia, uterus myomatosus and normalisation of the erythrocyte count after hysterectomy. Case description: A 58-year-old postmenopausal woman was referred to the gynaecologist with symptoms of vaginal blood loss,

  8. Ossobennosti mezhetnitsheskoi sotsialno-polititsheskoi situatsii v Estonii i osnovnõje puti jejo normalizatsii / Vladimir Parol

    Index Scriptorium Estoniae

    Parol, Vladimir

    2000-01-01

    Summary: Inter-ethnic social-political situation in Estonia, its peculiarities and ways of normalisation

  9. Technological Forum

    CERN Multimedia

    Thievent; Zürrer; Hekimi; Cortesy; Reymond; Lecomte

    1988-01-01

    Part 1: Mr Thievent of the Swiss standardisation association and Mr Alleyn, head of technical training at CERN, speak, followed by a discussion (questions inaudible, whistling...). Part 2: Talk by Mr Zürrer, chairman of the European standardisation committee, followed by a discussion. Part 3: Working groups (round table) with three moderators: Mr Hekimi, secretary general of the "European Computer Manufacturing Association", Mr Corthesy, head of the Lausanne standardisation office, and Mr Reymond, head of the EBC Secheron standardisation office in Geneva, followed by a discussion.

  10. Levelling-out and register variation in the translations of ...

    African Journals Online (AJOL)

    Kate H

    Explicitation, simplification, normalisation and levelling-out, the four features of translation .... limited amount of attention levelling-out has received, there is consequently an ..... The subcorpus of medical translations is divided into two divisions: ...

  11. Myocardial hypertrophy in the recipient with twin-to-twin transfusion syndrome

    DEFF Research Database (Denmark)

    Jeppesen, D.L.; Jorgensen, F.S.; Pryds, O.A.

    2008-01-01

    pressure measurements revealed persistent systemic hypertension. Biventricular hypertrophy was demonstrated by echocardiography. Blood pressure normalised after treatment with Nifedipine and the cardiac hypertrophy subsided over the following weeks. A potential contributing mechanism is intrauterine...

  12. Coalgebraising subsequential transducers

    NARCIS (Netherlands)

    H.H. Hansen (Helle); J. Adamek; C.A. Kupke (Clemens)

    2008-01-01

    Subsequential transducers generalise both classic deterministic automata and Mealy/Moore type state machines by combining (input) language recognition with transduction. In this paper we show that normalisation and taking differentials of subsequential transducers and their underlying

  13. Coalgebraising Subsequential Transducers

    NARCIS (Netherlands)

    Hansen, H.H.

    2008-01-01

    Subsequential transducers generalise both classic deterministic automata and Mealy/Moore type state machines by combining (input) language recognition with transduction. In this paper we show that normalisation and taking differentials of subsequential transducers and their underlying structures can

  14. Homogenization of the critically spectral equation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Allaire, G. [CEA Saclay, 91 - Gif-sur-Yvette (France). Dept. de Mecanique et de Technologie; Paris-6 Univ., 75 (France). Lab. d'Analyse Numerique]; Bal, G. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches]

    1998-07-01

    We address the homogenization of an eigenvalue problem for the neutron transport equation in a periodic heterogeneous domain, modeling the criticality study of nuclear reactor cores. We prove that the neutron flux, corresponding to the first and unique positive eigenvector, can be factorized as the product of two terms, up to a remainder which goes strongly to zero with the period. One term is the first eigenvector of the transport equation in the periodicity cell. The other term is the first eigenvector of a diffusion equation in the homogenized domain. Furthermore, the corresponding eigenvalue gives a second order corrector for the eigenvalue of the heterogeneous transport problem. This result justifies and improves the engineering procedure used in practice for nuclear reactor core computations. (author)

  15. Molecular Mechanics and Quantum Chemistry Based Study of Nickel-N-Allyl Urea and N-Allyl Thiourea Complexes

    Directory of Open Access Journals (Sweden)

    P. D. Sharma

    2009-01-01

    Full Text Available Eigenvalue, eigenvector and overlap matrices of the nickel halide complexes of N-allyl urea and N-allyl thiourea have been evaluated. Our results indicate that the ligand field parameters (Dq, B′ and β) evaluated earlier by electronic spectra are very close to the values evaluated with the help of eigenvalues and eigenvectors. Eigenvector analysis and population analysis show that the 4s, 4p, 3d(x²−y²) and 3d(yz) orbitals of nickel are involved in bonding, but the coefficient values differ in different complexes. Out of 4px, 4py and 4pz, the involvement of either 4pz or 4py is noticeable. The theoretically evaluated positions of the infrared bands indicate that N-allyl urea is coordinated to nickel through its oxygen and N-allyl thiourea through its sulphur, which is in conformity with the experimental results.

  16. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    Science.gov (United States)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
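The algorithm described above reduces to three steps: form the Laplacian of the matrix's sparsity graph, compute the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and sort vertices by its components. A small self-contained sketch (dense numpy in place of a sparse eigensolver, and a scrambled path graph as the test matrix):

```python
import numpy as np

def spectral_order(A):
    """Envelope-reducing permutation: sort vertices by the Fiedler vector."""
    adj = (A != 0).astype(float)
    np.fill_diagonal(adj, 0.0)
    L = np.diag(adj.sum(axis=1)) - adj      # graph Laplacian of the sparsity pattern
    w, v = np.linalg.eigh(L)                # eigh returns ascending eigenvalues
    return np.argsort(v[:, 1])              # v[:, 1] is the Fiedler vector

# A path graph numbered out of order: spectral ordering recovers a
# bandwidth-1 numbering (the path order or its reverse).
n = 6
path = np.zeros((n, n))
for i in range(n - 1):
    path[i, i + 1] = path[i + 1, i] = 1.0
scramble = np.array([2, 5, 0, 3, 1, 4])
A = path[np.ix_(scramble, scramble)]

order = spectral_order(A)
B = A[np.ix_(order, order)]
bandwidth = max(abs(i - j) for i in range(n) for j in range(n) if B[i, j] != 0)
print(bandwidth)                            # -> 1
```

For a path graph the Fiedler vector is monotone along the path, so sorting by it restores the natural numbering (up to reversal).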

  17. Algebraic structure of general electromagnetic fields and energy flow

    International Nuclear Information System (INIS)

    Hacyan, Shahen

    2011-01-01

    Highlights: → Algebraic structure of general electromagnetic fields in stationary spacetime. → Eigenvalues and eigenvectors of the electromagnetic field tensor. → Energy-momentum in terms of eigenvectors and Killing vector. → Explicit form of reference frame with vanishing Poynting vector. → Application of formalism to Bessel beams. - Abstract: The algebraic structures of a general electromagnetic field and its energy-momentum tensor in a stationary space-time are analyzed. The explicit form of the reference frame in which the energy of the field appears at rest is obtained in terms of the eigenvectors of the electromagnetic tensor and the existing Killing vector. The case of a stationary electromagnetic field is also studied and a comparison is made with the standard short-wave approximation. The results can be applied to the general case of structured light beams, in flat or curved spaces. Bessel beams are worked out as an example.

  18. Collective Correlations of Brodmann Areas fMRI Study with RMT-Denoising

    Science.gov (United States)

    Burda, Z.; Kornelsen, J.; Nowak, M. A.; Porebski, B.; Sboto-Frankenstein, U.; Tomanek, B.; Tyburczyk, J.

    We study collective behavior of Brodmann regions of human cerebral cortex using functional Magnetic Resonance Imaging (fMRI) and Random Matrix Theory (RMT). The raw fMRI data is mapped onto the cortex regions corresponding to the Brodmann areas with the aid of the Talairach coordinates. Principal Component Analysis (PCA) of the Pearson correlation matrix for 41 different Brodmann regions is carried out to determine their collective activity in the idle state and in the active state stimulated by tapping. The collective brain activity is identified through the statistical analysis of the eigenvectors of the largest eigenvalues of the Pearson correlation matrix. The leading eigenvectors have a large participation ratio. This indicates that several Brodmann regions collectively give rise to the brain activity associated with these eigenvectors. We apply random matrix theory to interpret the underlying multivariate data.
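The participation ratio used above has a standard definition, PR(v) = 1 / Σᵢ vᵢ⁴ for a normalised eigenvector v: a collective mode spread evenly over N regions has PR close to N, while a mode localised on one region has PR close to 1. A minimal sketch (illustrative vectors, not fMRI data):

```python
import numpy as np

def participation_ratio(v):
    """PR = 1 / sum(v_i^4) for a unit-norm vector; measures delocalisation."""
    v = v / np.linalg.norm(v)
    return 1.0 / np.sum(v ** 4)

N = 41                                      # number of Brodmann regions analysed
uniform = np.ones(N)                        # fully collective mode
localised = np.zeros(N)
localised[0] = 1.0                          # single-region mode
print(int(round(participation_ratio(uniform))),
      int(round(participation_ratio(localised))))   # -> 41 1
```

Leading eigenvectors of the Pearson correlation matrix with PR far above 1 are the signature of collective activity across many regions.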

  19. Homogenization of the critically spectral equation in neutron transport

    International Nuclear Information System (INIS)

    Allaire, G.; Paris-6 Univ., 75; Bal, G.

    1998-01-01

    We address the homogenization of an eigenvalue problem for the neutron transport equation in a periodic heterogeneous domain, modeling the criticality study of nuclear reactor cores. We prove that the neutron flux, corresponding to the first and unique positive eigenvector, can be factorized as the product of two terms, up to a remainder which goes strongly to zero with the period. One term is the first eigenvector of the transport equation in the periodicity cell. The other term is the first eigenvector of a diffusion equation in the homogenized domain. Furthermore, the corresponding eigenvalue gives a second order corrector for the eigenvalue of the heterogeneous transport problem. This result justifies and improves the engineering procedure used in practice for nuclear reactor core computations. (author)

  20. Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure.

    Science.gov (United States)

    Abdelnour, Farras; Dayan, Michael; Devinsky, Orrin; Thesen, Thomas; Raj, Ashish

    2018-05-15

    How structural connectivity (SC) gives rise to functional connectivity (FC) is not fully understood. Here we mathematically derive a simple relationship between SC measured from diffusion tensor imaging, and FC from resting state fMRI. We establish that SC and FC are related via (structural) Laplacian spectra, whereby FC and SC share eigenvectors and their eigenvalues are exponentially related. This gives, for the first time, a simple and analytical relationship between the graph spectra of structural and functional networks. Laplacian eigenvectors are shown to be good predictors of functional eigenvectors and networks based on independent component analysis of functional time series. A small number of Laplacian eigenmodes are shown to be sufficient to reconstruct FC matrices, serving as basis functions. This approach is fast, and requires no time-consuming simulations. It was tested on two empirical SC/FC datasets, and was found to significantly outperform generative model simulations of coupled neural masses. Copyright © 2018. Published by Elsevier Inc.
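The spectral mapping described above can be sketched in a few lines: keep the eigenvectors of the structural Laplacian and transform its eigenvalues through a decaying exponential. The decay rate and the random "structural" matrix below are assumptions for illustration only.

```python
import numpy as np

def predict_fc(structural, beta=1.0):
    """Predict FC from SC: shared eigenvectors, exponentially mapped eigenvalues.
    beta is a free decay parameter (an assumption here, not a fitted value)."""
    deg = structural.sum(axis=1)
    L = np.diag(deg) - structural                  # structural graph Laplacian
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(np.exp(-beta * lam)) @ U.T  # FC ~ U exp(-beta*Lambda) U^T

rng = np.random.default_rng(2)
W = rng.random((5, 5))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)                           # toy symmetric SC matrix
fc = predict_fc(W)

# Because the predicted FC is a function of L, it shares L's eigenvectors,
# which is equivalent to commuting with L.
L = np.diag(W.sum(axis=1)) - W
print(np.allclose(fc @ L, L @ fc))                 # -> True
```

The commutation check is exactly the "shared eigenvectors" property the abstract establishes between structural and functional networks.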

  1. Blood glucose control in healthy subject and patients receiving intravenous glucose infusion or total parenteral nutrition using glucagon-like peptide 1

    DEFF Research Database (Denmark)

    Nauck, Michael A; Walberg, Jörg; Vethacke, Arndt

    2004-01-01

    It was the aim of the study to examine whether the insulinotropic gut hormone GLP-1 is able to control or even normalise glycaemia in healthy subjects receiving intravenous glucose infusions and in severely ill patients hyperglycaemic during total parenteral nutrition.

  2. Matrix product solution to multi-species ASEP with open boundaries

    Science.gov (United States)

    Finn, C.; Ragoucy, E.; Vanicat, M.

    2018-04-01

    We study a class of multi-species ASEP with open boundaries. The boundaries are chosen in such a way that all species of particles interact non-trivially with the boundaries, and are present in the stationary state. We give the exact expression of the stationary state in a matrix product form, and compute its normalisation. Densities and currents for the different species are then computed in terms of this normalisation.

  3. A robust multilevel simultaneous eigenvalue solver

    Science.gov (United States)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrodinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
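The Rayleigh Ritz projection that the algorithm generalises can be stated compactly: orthonormalise a subspace basis, solve the small projected eigenproblem, and lift the results back. A minimal sketch (a diagonal test matrix whose lowest eigenvectors span the chosen subspace, so the Ritz values are exact):

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Ritz values/vectors of symmetric A over the subspace spanned by V's columns."""
    Q, _ = np.linalg.qr(V)                  # orthonormalise the subspace basis
    small = Q.T @ A @ Q                     # projected (q x q) eigenproblem
    theta, S = np.linalg.eigh(small)
    return theta, Q @ S                     # Ritz values and lifted Ritz vectors

A = np.diag([1.0, 2.0, 3.0, 4.0])
V = np.eye(4)[:, :2]                        # exactly spans the two lowest eigenvectors
theta, X = rayleigh_ritz(A, V)
print(theta)                                # -> [1. 2.]
```

When the subspace only approximates the eigenvectors, the Ritz values are approximations; the ML algorithm's contribution is keeping such subspaces separated across clusters and levels.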

  4. A note on the eigensystem of the covariance matrix of dichotomous Guttman items

    Directory of Open Access Journals (Sweden)

    Clintin P Davis-Stober

    2015-12-01

    Full Text Available We consider the sample covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which have been first reported by Guttman (1950).

  5. A Note on the Eigensystem of the Covariance Matrix of Dichotomous Guttman Items.

    Science.gov (United States)

    Davis-Stober, Clintin P; Doignon, Jean-Paul; Suck, Reinhard

    2015-01-01

    We consider the covariance matrix for dichotomous Guttman items under a set of uniformity conditions, and obtain closed-form expressions for the eigenvalues and eigenvectors of the matrix. In particular, we describe the eigenvalues and eigenvectors of the matrix in terms of trigonometric functions of the number of items. Our results parallel those of Zwick (1987) for the correlation matrix under the same uniformity conditions. We provide an explanation for certain properties of principal components under Guttman scalability which have been first reported by Guttman (1950).
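A trigonometric eigenstructure of this kind can be verified numerically under one concrete uniformity assumption (the paper's exact conditions may differ; this is an illustration): with endorsement probabilities p_i = i/(n+1) for a uniform latent trait, the covariance C_ij = min(p_i, p_j) − p_i p_j is, up to scaling, the inverse of the discrete Laplacian, so its eigenvalues are 1/(4(n+1) sin²(kπ/(2(n+1)))) with sine eigenvectors.

```python
import numpy as np

# Guttman items under a uniform latent trait (assumption): item i is endorsed
# when the trait exceeds threshold i/(n+1), giving covariance
#   C_ij = min(p_i, p_j) - p_i * p_j.
# This matrix equals K^{-1} / (n+1) for the tridiagonal Laplacian K, whose
# spectrum is 4 sin^2(k pi / (2(n+1))), hence the trigonometric eigenvalues.

n = 7
p = np.arange(1, n + 1) / (n + 1)
C = np.minimum.outer(p, p) - np.outer(p, p)

eig = np.sort(np.linalg.eigvalsh(C))
k = np.arange(1, n + 1)
trig = np.sort(1.0 / (4 * (n + 1) * np.sin(k * np.pi / (2 * (n + 1))) ** 2))
print(np.allclose(eig, trig))              # -> True
```

The corresponding eigenvectors are discrete sines, matching the "trigonometric functions of the number of items" description.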

  6. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao

    2015-01-01

    Full Text Available A quantum computation for the Rabi oscillation based on quantum dots in the subdynamic system is presented. The working states of the original Rabi oscillation are transformed to the eigenvectors of the subdynamic system. The dissipation and decoherence of the system then show up only as phase errors in the eigenvalues, since the eigenvectors are fixed. This makes it easier to control both dissipation and decoherence by correcting only the relevant phase errors. This method can be extended to general quantum computation systems.

  7. Unstable quantum states and rigged Hilbert spaces

    International Nuclear Information System (INIS)

    Gorini, V.; Parravicini, G.

    1978-10-01

    Rigged Hilbert space techniques are applied to the quantum mechanical treatment of unstable states in nonrelativistic scattering theory. A method is discussed which is based on representations of decay amplitudes in terms of expansions over complete sets of generalized eigenvectors of the interacting Hamiltonian, corresponding to complex eigenvalues. These expansions contain both a discrete and a continuum contribution. The former corresponds to eigenvalues located at the second sheet poles of the S matrix, and yields the exponential terms in the survival amplitude. The latter arises from generalized eigenvectors associated to complex eigenvalues on background contours in the complex plane, and gives the corrections to the exponential law. 27 references

  8. Compact versus noncompact quantum dynamics of time-dependent su(1,1)-valued Hamiltonians

    International Nuclear Information System (INIS)

    Penna, V.

    1996-01-01

    We consider the Schroedinger problem for time-dependent (TD) Hamiltonians represented by a linear combination of the compact generator and the hyperbolic generator of su(1,1). Several types of transitions, characterized by different time initial conditions on the generator coefficients, are analyzed by resorting to the harmonic oscillator model with a frequency vanishing for t→+∞. We provide examples that point out how the TD states of the transitions can be constructed either by the compact eigenvector basis or by the noncompact eigenvector basis depending on the initial conditions characterizing the frequency time behavior. Copyright © 1996 Academic Press, Inc.

  9. On the convex closed set-valued operators in Banach spaces and their applications in control problems

    International Nuclear Information System (INIS)

    Vu Ngoc Phat; Jong Yeoul Park

    1995-10-01

    The paper studies a class of set-valued operators with emphasis on properties of their adjoints and the existence of eigenvalues and eigenvectors of infinite-dimensional convex closed set-valued operators. Sufficient conditions for the existence of eigenvalues and eigenvectors of convex closed set-valued operators are derived. These conditions specify possible features of control problems. The results are applied to some constrained control problems of infinite-dimensional systems described by discrete-time inclusions whose right-hand sides are convex closed set-valued functions. (author). 8 refs

  10. Noise Reduction in the Time Domain using Joint Diagonalization

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Benesty, Jacob; Jensen, Jesper Rindom

    2014-01-01

    , an estimate of the desired signal is found by subtraction of the noise estimate from the observed signal. The filter can be designed to obtain a desired trade-off between noise reduction and signal distortion, depending on the number of eigenvectors included in the filter design. This is explored through simulations using a speech signal corrupted by car noise, and the results confirm that the output signal-to-noise ratio and speech distortion index both increase when more eigenvectors are included in the filter design.

  11. A Spectral Analysis of Discrete-Time Quantum Walks Related to the Birth and Death Chains

    Science.gov (United States)

    Ho, Choon-Lin; Ide, Yusuke; Konno, Norio; Segawa, Etsuo; Takumi, Kentaro

    2018-04-01

    In this paper, we consider a spectral analysis of discrete-time quantum walks on the path. For isospectral coin cases, we show that the time-averaged distribution and stationary distributions of the quantum walks are described by the pair of eigenvalues of the coins as well as the eigenvalues and eigenvectors of the corresponding random walks, which are usually referred to as birth and death chains. As an example of the results, we derive the time-averaged distribution of the so-called Szegedy walk, which is related to the Ehrenfest model. It is represented by the Krawtchouk polynomials, which are the eigenvectors of the model, and includes the arcsine law.

  12. Low-lying eigenmodes of the Wilson-Dirac operator and correlations with topological objects

    International Nuclear Information System (INIS)

    Kusterer, Daniel-Jens; Hedditch, John; Kamleh, Waseem; Leinweber, D.B.; Williams, Anthony G.

    2002-01-01

    The probability density of low-lying eigenvectors of the hermitian Wilson-Dirac operator H(κ) = γ_5 D_W(κ) is examined. Comparisons in position and size between eigenvectors, topological charge and action density are made. We do this for standard Monte-Carlo generated SU(3) background fields and for single instanton background fields. Both hot and cooled SU(3) background fields are considered. An instanton model is fitted to eigenmodes and topological charge density and the sizes and positions of these are compared

  13. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information

  14. A Perron–Frobenius theory for block matrices associated to a multiplex network

    International Nuclear Information System (INIS)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-01-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of some nonnegative block matrix associated to a multiplex network and the irreducibility of the matrices corresponding to each layer, as well as the irreducibility of the adjacency matrix of the projection network. In addition the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally we present the precise relations that allow us to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers

  15. On the discrete Frobenius-Perron operator of the Bernoulli map

    International Nuclear Information System (INIS)

    Bai Zaiqiao

    2006-01-01

    We study the spectra of a finite-dimensional Frobenius-Perron operator (matrix) of the Bernoulli map derived from phase space discretization. The eigenvalues and (right and left) eigenvectors are analytically calculated, which are closely related to periodic orbits on the partition points. In the degenerate case, Jordan decomposition of the matrix is explicitly constructed. Except for the isolated eigenvalue 1, there is no definite limit with respect to eigenvalues when n → ∞. The behaviour of the eigenvectors is discussed in the limit of large n

  16. A Perron-Frobenius theory for block matrices associated to a multiplex network

    Science.gov (United States)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-03-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions follow from the relationships between the irreducibility of the nonnegative block matrix associated to a multiplex network, the irreducibility of the matrices corresponding to each layer, and the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is addressed. Finally, we present the precise relations that allow us to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers.

  17. Instanton dominance of topological charge fluctuations in QCD?

    International Nuclear Information System (INIS)

    Hip, I.; Lippert, Th.; Schilling, K.; Schroers, W.; Neff, H.

    2002-01-01

    We consider the local chirality of near-zero eigenvectors from Wilson-Dirac and clover-improved Wilson-Dirac lattice operators, as proposed recently by Horvath et al. We study finer lattices and correct for the loss of orthogonality due to the non-normality of the Wilson-Dirac matrix. As a result we do see a clear double-peak structure on lattices with resolutions higher than 0.1 fm. We find that the lattice artifacts can be considerably reduced by exploiting the biorthogonal system of left and right eigenvectors. We conclude that the dominance of instantons in topological charge fluctuations is not ruled out by local chirality measurements.

  18. Influence of N-butylscopolamine on SUV in FDG PET of the bowel

    International Nuclear Information System (INIS)

    Sanghera, B.; Emmott, J.; Chambers, J.; Wong, W.L.; Wellsted, D.

    2009-01-01

    Peristalsis can lead to confusing fluorodeoxyglucose (FDG) positron emission tomography (PET) bowel uptake artefacts and to the recording of inaccurate mean standardised uptake value (SUV) measurements in PET-CT scans. Accordingly, we investigate the influence of different SUV normalisations on FDG PET uptake of the bowel and assess which one(s) have the least dependence on body size factors in patients with and without the introduction of the anti-peristalsis agent N-butylscopolamine (Buscopan). This study consisted of 92 prospective oncology patients, each having a whole-body ¹⁸F-FDG PET scan. Correlations were investigated between height, weight, glucose, body mass index (bmi), lean body mass (lbm) and body surface area (bsa) and the maximum and mean SUV recorded for bowel normalised to weight (SUV_w), lbm (SUV_lbm), bsa (SUV_bsa) and their blood glucose corrected versions (SUV_wg, SUV_lbmg, SUV_bsag). Standardised uptake value normalisations were significantly different between the control and Buscopan groups, with less variability within individual SUV normalisations after the administration of Buscopan. Mean SUV normalisations accounted for 80% of correlations in the control group and 100% in the Buscopan group. Further, >86% of all correlations across both groups were dominated by mean SUV normalisations, of which about 69% were accounted for by SUV_bsa and SUV_bsag. We recommend avoiding mean SUV_bsa and individual glucose normalisations, especially mean SUV_bsag, as these dominated the albeit relatively weak correlations with body size factors in the control and Buscopan groups. Mean and maximum SUV_w and SUV_lbm were shown to be independent of any body size parameters investigated in both groups and are therefore considered suitable for monitoring FDG PET uptake in the normal bowel for our patient cohort. (author)

  19. Cleaning the correlation matrix with a denoising autoencoder

    OpenAIRE

    Hayou, Soufiane

    2017-01-01

    In this paper, we use an adjusted autoencoder to estimate the true eigenvalues of the population correlation matrix from the sample correlation matrix when the number of samples is small. We show that the model outperforms the Rotational Invariant Estimator (Bouchaud), which is the optimal estimator in the sample eigenvector basis when the dimension goes to infinity.
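    The small-sample problem the abstract describes is easy to reproduce numerically. The sketch below (synthetic data and dimensions chosen for illustration, not taken from the paper) shows how the eigenvalues of a sample correlation matrix spread away from the true population spectrum, which here is identically 1:

```python
import numpy as np

# Synthetic illustration (assumed setup, not the paper's model): the true
# correlation matrix is the identity, so every population eigenvalue is 1.
rng = np.random.default_rng(42)
p, n = 50, 100                          # dimension, number of samples
X = rng.normal(size=(n, p))
C = np.corrcoef(X, rowvar=False)        # sample correlation matrix
ev = np.linalg.eigvalsh(C)

# With n only twice p, sample eigenvalues scatter far from 1 even though
# the trace (= p) is preserved exactly.
print(ev.min() < 0.5, ev.max() > 1.5, np.isclose(ev.sum(), p))
```

    Any cleaning scheme, autoencoder-based or otherwise, amounts to shrinking this scattered sample spectrum back toward the population one.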

  20. Estimation of genetic parameters for test day records of dairy traits in the first three lactations

    Directory of Open Access Journals (Sweden)

    Ducrocq Vincent

    2005-05-01

    Application of test-day models for the genetic evaluation of dairy populations requires the solution of large mixed model equations. The size of the (co)variance matrices required with such models can be reduced through the use of their first eigenvectors. Here, the first two eigenvectors of the (co)variance matrices estimated for dairy traits in first lactation were used as covariables to jointly estimate genetic parameters of the first three lactations. These eigenvectors appear to be similar across traits and have a biological interpretation, one being related to the level of production and the other to persistency. Furthermore, they explain more than 95% of the total genetic variation. Variances and heritabilities obtained with this model were consistent with previous studies. High correlations were found among production levels in different lactations. Persistency measures were less correlated. Genetic correlations between the second and third lactations were close to one, indicating that these can be considered the same trait. Genetic correlations within lactation were high except between extreme parts of the lactation. This study shows that the use of eigenvectors can reduce the rank of the (co)variance matrices in the test-day model and provide consistent genetic parameters.
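    The rank-reduction idea in this abstract can be sketched in a few lines: keep the leading eigenvectors of a (co)variance matrix and check how much variation they capture. The matrix below is random, standing in for an estimated genetic (co)variance matrix; the numbers are purely illustrative.

```python
import numpy as np

# Illustrative 5x5 (co)variance matrix (random, not estimated from data).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
G = A @ A.T                             # symmetric positive semi-definite

vals, vecs = np.linalg.eigh(G)          # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending

# Rank-2 approximation built from the first two eigenvectors.
G2 = vecs[:, :2] @ np.diag(vals[:2]) @ vecs[:, :2].T
explained = vals[:2].sum() / vals.sum()
print(np.linalg.matrix_rank(G2), 0.0 < explained <= 1.0)
```

    In the study the first two eigenvectors play the role of `vecs[:, :2]`, used as covariables instead of the full matrix.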

  1. Non-self-adjoint hamiltonians defined by Riesz bases

    Energy Technology Data Exchange (ETDEWEB)

    Bagarello, F., E-mail: fabio.bagarello@unipa.it [Dipartimento di Energia, Ingegneria dell'Informazione e Modelli Matematici, Facoltà di Ingegneria, Università di Palermo, I-90128 Palermo, Italy and INFN, Università di Torino, Torino (Italy); Inoue, A., E-mail: a-inoue@fukuoka-u.ac.jp [Department of Applied Mathematics, Fukuoka University, Fukuoka 814-0180 (Japan); Trapani, C., E-mail: camillo.trapani@unipa.it [Dipartimento di Matematica e Informatica, Università di Palermo, I-90123 Palermo (Italy)

    2014-03-15

    We discuss some features of non-self-adjoint Hamiltonians with real discrete simple spectrum under the assumption that the eigenvectors form a Riesz basis of Hilbert space. Among other things, we give conditions under which these Hamiltonians can be factorized in terms of generalized lowering and raising operators.

  2. On certain properties of some generalized special functions

    International Nuclear Information System (INIS)

    Pathan, M.A.; Khan, Subuhi

    2002-06-01

    In this paper, we derive a result concerning an eigenvector for the product of two operators defined on a Lie algebra of endomorphisms of a vector space. The results given by Radulescu, Mandal and the authors follow as special cases of this result. Further, using these results, we deduce certain properties of generalized Hermite polynomials and Hermite-Tricomi functions. (author)

  3. Network-Based Detection and Classification of Seismovolcanic Tremors: Example From the Klyuchevskoy Volcanic Group in Kamchatka

    Science.gov (United States)

    Soubestre, Jean; Shapiro, Nikolai M.; Seydoux, Léonard; de Rosny, Julien; Droznin, Dmitry V.; Droznina, Svetlana Ya.; Senyukov, Sergey L.; Gordeev, Evgeniy I.

    2018-01-01

    We develop a network-based method for detecting and classifying seismovolcanic tremors. The proposed approach exploits the coherence of tremor signals across the network, estimated from the array covariance matrix. The method is applied to four and a half years of continuous seismic data recorded by 19 permanent seismic stations in the vicinity of the Klyuchevskoy volcanic group in Kamchatka (Russia), where five volcanoes were erupting during the considered time period. We compute and analyze daily covariance matrices together with their eigenvalues and eigenvectors. As a first step, the most coherent signals, corresponding to dominating tremor sources, are detected based on the width of the covariance matrix eigenvalue distribution. Thus, volcanic tremors of the two volcanoes known as most active during the considered period, Klyuchevskoy and Tolbachik, are efficiently detected. As a next step, we consider the daily array covariance matrix's first eigenvector. Our main hypothesis is that these eigenvectors represent the principal components of the daily seismic wavefield and, for days with tremor activity, characterize the dominant tremor sources. These daily first eigenvectors, which can be used as network-based fingerprints of tremor sources, are then grouped into clusters using the correlation coefficient as a measure of vector similarity. As a result, we identify seven clusters associated with different periods of activity of four volcanoes: Tolbachik, Klyuchevskoy, Shiveluch, and Kizimen. The developed method does not require a priori knowledge, is fully automatic, and the database of network-based tremor fingerprints can be continuously enriched with newly available data.
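    A minimal sketch of the fingerprint-and-cluster step, on synthetic data (the array shapes, station gains, and 0.9 similarity threshold are assumptions for illustration, not values from the study):

```python
import numpy as np

# `daily` stands in for (n_days, n_stations, n_samples) seismic records:
# one dominant tremor source imprinted on all stations with fixed gains.
rng = np.random.default_rng(1)
n_days, n_stations, n_samples = 6, 19, 2000
gains = rng.uniform(0.5, 2.0, size=(n_stations, 1))   # fixed source imprint
source = rng.normal(size=(n_days, 1, n_samples))
daily = gains * source + 0.05 * rng.normal(size=(n_days, n_stations, n_samples))

fingerprints = []
for day in daily:
    C = np.cov(day)                 # station-by-station covariance matrix
    w, v = np.linalg.eigh(C)
    fingerprints.append(v[:, -1])   # first (dominant) eigenvector

# Days driven by the same source have strongly correlated fingerprints.
f = np.array(fingerprints)
sim = np.abs(np.corrcoef(f))        # |corr|: eigenvectors defined up to sign
print((sim > 0.9).all())
```

    In the real workflow the similarity matrix `sim` would feed a clustering step that separates days dominated by different volcanoes.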

  4. More evidence of localization in the low-lying Dirac spectrum

    CERN Document Server

    Bernard, C; Gottlieb, Steven; Levkova, L.; Heller, U.M.; Hetrick, J.E.; Jahn, O.; Maresca, F.; Renner, Dru Bryant; Toussaint, D.; Sugar, R.; Forcrand, Ph. de; Gottlieb, Steven

    2006-01-01

    We have extended our computation of the inverse participation ratio of low-lying (asqtad) Dirac eigenvectors in quenched SU(3). The scaling dimension of the confining manifold is clearer and very near 3. We have also computed the 2-point correlator which further characterizes the localization.

  5. The Faddeev equation and essential spectrum of a Hamiltonian in Fock space

    International Nuclear Information System (INIS)

    Muminov, M.I.; Rasulov, T.H.

    2008-05-01

    A model operator H associated to a quantum system with a non-conserved number of particles is studied. The Faddeev-type system of equations for the eigenvectors of H is constructed. The essential spectrum of H is described by the spectrum of the channel operator. (author)

  6. An Application of the Vandermonde Determinant

    Science.gov (United States)

    Xu, Junqin; Zhao, Likuan

    2006-01-01

    Eigenvalue is an important concept in linear algebra. It is well known that eigenvectors corresponding to different eigenvalues of a square matrix are linearly independent. In most existing textbooks, this result is proven using mathematical induction. In this note, a new proof using the Vandermonde determinant is given. It is shown that this…
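    The statement itself is easy to check numerically: collect the eigenvectors of a matrix with distinct eigenvalues into a matrix and verify that it is nonsingular (a toy example, not the note's Vandermonde argument):

```python
import numpy as np

# Toy matrix with distinct eigenvalues 2, 3, 5 (upper triangular, so the
# eigenvalues can be read off the diagonal).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
w, V = np.linalg.eig(A)

# Linear independence of the eigenvectors <=> V is nonsingular.
print(abs(np.linalg.det(V)) > 1e-6)
```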

  7. Spin wave spectrum and zero spin fluctuation of antiferromagnetic solid 3He

    International Nuclear Information System (INIS)

    Roger, M.; Delrieu, J.M.

    1981-08-01

    The spin wave spectrum and eigenvectors of the uudd antiferromagnetic phase of solid ³He are calculated; an optical mode is predicted around 150-180 Mc and a zero-point spin deviation of 0.74 is obtained, in agreement with the antiferromagnetic resonance frequency measured by Osheroff.

  8. Optimized Binomial Quantum States of Complex Oscillators with Real Spectrum

    International Nuclear Information System (INIS)

    Zelaya, K D; Rosas-Ortiz, O

    2016-01-01

    Classical and nonclassical states of quantum complex oscillators with real spectrum are presented. Such states are bi-orthonormal superpositions of n+1 energy eigenvectors of the system with binomial-like coefficients. For large values of n these optimized binomial states behave as photon-added coherent states when the imaginary part of the potential is cancelled. (paper)

  9. Multi-Grid Lanczos

    Science.gov (United States)

    Clark, M. A.; Jung, Chulwoo; Lehner, Christoph

    2018-03-01

    We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD's 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  10. Multi-Grid Lanczos

    Directory of Open Access Journals (Sweden)

    Clark M. A.

    2018-01-01

    We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD’s 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  11. Algebraic Bethe ansatz for 19-vertex models with reflection conditions

    International Nuclear Information System (INIS)

    Utiel, Wagner

    2003-01-01

    In this work we solve the 19-vertex models using the algebraic Bethe ansatz for diagonal reflection matrices (Sklyanin K-matrices). The eigenvectors, eigenvalues and Bethe equations are given in general form. Quantum spin chains of spin one derived from the 19-vertex models are also discussed.

  12. Eigenvalue for Densely Defined Perturbations of Multivalued Maximal Monotone Operators in Reflexive Banach Spaces

    Directory of Open Access Journals (Sweden)

    Boubakari Ibrahimou

    2013-01-01

    maximal monotone with and . Using the topological degree theory developed by Kartsatos and Quarcoo, we study the eigenvalue problem where the operator is single-valued of class . The existence of continuous branches of eigenvectors of infinite length can then be extended to the case where the operator is multivalued, and this case is investigated.

  13. Radioactivity computation of steady-state and pulsed fusion reactors operation

    International Nuclear Information System (INIS)

    Attaya, H.

    1994-06-01

    Different mathematical methods are used to calculate the nuclear transmutation under steady-state and pulsed neutron irradiation. These methods are the Schur decomposition, the eigenvector decomposition, and the Padé approximation of the matrix exponential function. In the case of the linear decay chain approximation, a simple algorithm is used to evaluate the transition matrices.
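    For a linear decay chain the eigenvector-decomposition route is short enough to sketch. The decay constants below are arbitrary illustrative values; the result is checked against the analytic Bateman solution for the intermediate nuclide.

```python
import numpy as np

# Linear decay chain A -> B -> C with illustrative decay constants.
l1, l2 = 0.5, 0.2
A = np.array([[-l1,  0.0, 0.0],
              [ l1, -l2,  0.0],
              [ 0.0,  l2, 0.0]])

t = 3.0
vals, V = np.linalg.eig(A)                  # A = V diag(vals) V^{-1}
expAt = V @ np.diag(np.exp(vals * t)) @ np.linalg.inv(V)

N0 = np.array([1.0, 0.0, 0.0])              # start with pure nuclide A
N = expAt @ N0

# Check nuclide B against the analytic Bateman solution.
NB = l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
print(abs(N[1] - NB) < 1e-10, abs(N.sum() - 1.0) < 1e-10)
```

    The Schur and Padé routes mentioned in the abstract compute the same exp(At) without requiring a full eigenvector basis, which matters when the decay matrix is defective or ill-conditioned.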

  14. Sensor scheme design for active structural acoustic control

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    Efficient sensing schemes for the active reduction of sound radiation from plates are presented based on error signals derived from spatially weighted plate velocity or near-field pressure. The schemes result in near-optimal reductions as compared to weighting procedures derived from eigenvector or

  15. Extended Park's transformation for 2×3-phase synchronous machine and converter phasor model with representation of AC harmonics

    DEFF Research Database (Denmark)

    Knudsen, Hans

    1995-01-01

    A model of the 2×3-phase synchronous machine is presented using a new transformation based on the eigenvectors of the stator inductance matrix. The transformation fully decouples the stator inductance matrix, and this leads to an equivalent diagram of the machine with no mutual couplings...

  16. Bone histology, phylogeny, and palaeognathous birds (Aves, Palaeognathae)

    DEFF Research Database (Denmark)

    Legendre, Lucas; Bourdon, Estelle; Scofield, Paul

    2014-01-01

    a comprehensive study in which we quantify the phylogenetic signal on 62 osteohistological features in an exhaustive sample of palaeognathous birds. We used four different estimators to measure phylogenetic signal – Pagel’s λ, Abouheif’s Cmean, Blomberg’s K, and Diniz-Filho’s phylogenetic eigenvector regressions...

  17. Using many pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors...

  18. A computational approach for fluid queues driven by truncated birth-death processes.

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    2000-01-01

    In this paper, we analyze fluid queues driven by truncated birth-death processes with general birth and death rates. We compute the equilibrium distribution of the content of the fluid buffer by providing efficient numerical procedures to compute the eigenvalues and the eigenvectors of the

  19. On the rational approximation of the bidimensional potentials

    International Nuclear Information System (INIS)

    Niculescu, V.I.R; Catana, D.

    1997-01-01

    In the present letter we introduce a symmetrical bidimensional potential with a Woods-Saxon tail. The potential approximation permits replacing the evaluation of a double integration in the matrix elements by a product of two integrals. This implies a complexity reduction in the evaluation of the Hamiltonian eigenvalues and eigenvectors. The harmonic bidimensional basis also simplifies significantly the evaluation of electric multipole operators. (authors)

  20. Semiclassical geometry of integrable systems

    Science.gov (United States)

    Reshetikhin, Nicolai

    2018-04-01

    The main result of this paper is a formula for the scalar product of semiclassical eigenvectors of two integrable systems on the same symplectic manifold. An important application of this formula is the Ponzano–Regge type asymptotics of Racah–Wigner coefficients. Dedicated to the memory of P P Kulish.

  1. Universal growth modes of high-elevation conifers

    Czech Academy of Sciences Publication Activity Database

    Datsenko, N. M.; Sonechkin, D. M.; Büntgen, Ulf; Yang, B.

    2016-01-01

    Roč. 38, JUN (2016), s. 38-50 ISSN 1125-7865 Institutional support: RVO:67179843 Keywords : tree-ring chronologies * summer temperature-variations * northeastern tibetan plateau * climate signal * fennoscandian summers * annual precipitation * density * variability * qinghai * Growth modes * Ring width and maximum latewood density * Eigenvector analysis Subject RIV: EF - Botanics Impact factor: 2.259, year: 2016

  2. Maslov indices and monodromy

    International Nuclear Information System (INIS)

    Dullin, H R; Robbins, J M; Waalkens, H; Creagh, S C; Tanner, G

    2005-01-01

    We prove that for a Hamiltonian system on a cotangent bundle that is Liouville-integrable and has monodromy the vector of Maslov indices is an eigenvector of the monodromy matrix with eigenvalue 1. As a corollary, the resulting restrictions on the monodromy matrix are derived. (letter to the editor)

  3. Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets

    Science.gov (United States)

    2017-07-01

    computational execution together form a comprehensive, widely-applicable paradigm for statistical graph inference. Approved for Public Release; Distribution...always involve challenging empirical modeling and implementation issues. Our project has propelled the mathematical development, statistical design...D. J., and Sussman, D. L., “A limit theorem for scaled eigenvectors of random dot product graphs,” Sankhya A. Mathematical Statistics and

  4. Exact Finite Differences. The Derivative on Non Uniformly Spaced Partitions

    Directory of Open Access Journals (Sweden)

    Armando Martínez-Pérez

    2017-10-01

    We define a finite-differences derivative operation, on a non-uniformly spaced partition, which has the exponential function as an exact eigenvector. We discuss some properties of this operator and we propose a definition for the components of a finite-differences momentum operator. This allows us to perform exact discrete calculations.

  5. Deformed GOE for systems with a few degrees of freedom in the chaotic regime

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1990-01-01

    New distribution laws for the energy level spacings and the eigenvector amplitudes, appropriate for systems with a few degrees of freedom in the chaotic regime, are derived by conveniently deforming the GOE. The cases of 2×2 and 3×3 matrices are fully worked out. Suggestions concerning the general case of matrices with large dimensions are made. (author)

  6. Deformed GOE for systems with a few degrees of freedom in the chaotic regime

    International Nuclear Information System (INIS)

    Hussein, M.S.; Pato, M.P.

    1990-03-01

    New distribution laws for the energy level spacings and the eigenvector amplitudes, appropriate for systems with a few degrees of freedom in the chaotic regime, are derived by conveniently deforming the GOE. The cases of 2×2 and 3×3 matrices are fully worked out. Suggestions concerning the general case of matrices with large dimensions are made. (author) [pt

  7. Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    NARCIS (Netherlands)

    Mirzavand, R.; Maten, ter E.J.W.; Beelen, T.G.J.; Schilders, W.H.A.; Abdipour, A.

    2011-01-01

    In this paper, we present a new gauge technique for the Newton Raphson method to solve the periodic steady state (PSS) analysis of free-running oscillators in the time domain. To find the frequency a new equation is added to the system of equations. Our equation combines a generalized eigenvector

  8. Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    NARCIS (Netherlands)

    Mirzavand, R.; Maten, ter E.J.W.; Beelen, T.G.J.; Schilders, W.H.A.; Abdipour, A.; Michielsen, B.; Poirier, J.R.

    2012-01-01

    In this paper, we present a new gauge technique for the Newton Raphson method to solve the periodic steady state (PSS) analysis of free-running oscillators in the time domain. To find the frequency a new equation is added to the system of equations. Our equation combines a generalized eigenvector

  9. Mode repulsion of ultrasonic guided waves in rails

    CSIR Research Space (South Africa)

    Loveday, Philip W

    2018-03-01

    The modes can therefore be numbered in the same way that Lamb waves in plates are numbered, making it easier to communicate results. The derivative of the eigenvectors with respect to wavenumber contains the same repulsion term and shows how the mode shapes...

  10. Certain properties of some special functions of two variables and two indices

    International Nuclear Information System (INIS)

    Khan, Subuhi

    2002-07-01

    In this paper, we derive a result concerning an eigenvector and eigenvalue for a quadratic combination of four operators defined on a Lie algebra of endomorphisms of a vector space. Further, using this result, we deduce certain properties of some special functions of two variables and two indices. (author)

  11. Minute splitting of magnetic excitations in CsFeCl{sub 3} due to dipolar interaction observed by polarised neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Dorner, B [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France); Baehr, M [HMI, Berlin (Germany); Petitgrand, D [Laboratoire Leon Brillouin (LLB) - Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)

    1997-04-01

    Using inelastic neutron scattering with polarisation analysis it was possible, for the first time, to observe simultaneously the two magnetic modes split due to dipolar interaction. This would not have been possible with energy resolution only. An analysis of eigenvectors was also performed. (author). 4 refs.

  12. A classification system for one Killing vector solutions of Einstein's equations

    International Nuclear Information System (INIS)

    Hoenselaers, C.

    1978-01-01

    A double classification system for one Killing vector solutions in terms of the eigenvectors and eigenvalues of the Ricci and Bach tensor of the associated three manifold is proposed. The calculations of the Bach tensor are carried out for special cases. (author)

  13. Singularly Perturbed Equations in the Critical Case.

    Science.gov (United States)

    1980-02-01

    asymptotic properties of the differential equation (1) in the noncritical case (all Re λi(t) < 0). We will consider the critical case (k > 0); the...the inequality (3), that is, Re λi(t,a) < 0 (58). The matrix ca(t,a), consisting of the eigenvectors corresponding to w_0, now has the form I (P -(t

  14. Neutral evolution of mutational robustness

    NARCIS (Netherlands)

    Nimwegen, Erik van; Crutchfield, James P.; Huynen, Martijn

    1999-01-01

    We introduce and analyze a general model of a population evolving over a network of selectively neutral genotypes. We show that the population's limit distribution on the neutral network is solely determined by the network topology and given by the principal eigenvector of the network
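    The claim can be illustrated with power iteration on a toy neutral network (a hypothetical 5-genotype graph, not one from the paper): the iterated population distribution converges to the principal eigenvector of the adjacency matrix.

```python
import numpy as np

# Hypothetical adjacency matrix of a small connected neutral network.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

p = np.ones(5) / 5.0
for _ in range(200):                 # power iteration converges to the
    p = A @ p                        # principal (Perron) eigenvector
    p /= p.sum()

w, V = np.linalg.eigh(A)
principal = np.abs(V[:, -1])
principal /= principal.sum()
print(np.allclose(p, principal, atol=1e-8))
```

    More connected genotypes carry more weight in the limit distribution, which is the mechanism behind the evolution of mutational robustness described in the abstract.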

  15. Random matrix theory and acoustic resonances in plates with an approximate symmetry

    DEFF Research Database (Denmark)

    Andersen, Anders Peter; Ellegaard, C.; Jackson, A.D.

    2001-01-01

    We discuss a random matrix model of systems with an approximate symmetry and present the spectral fluctuation statistics and eigenvector characteristics for the model. An acoustic resonator like, e.g., an aluminum plate may have an approximate symmetry. We have measured the frequency spectrum and...

  16. Biological Applications in the Mathematics Curriculum

    Science.gov (United States)

    Marland, Eric; Palmer, Katrina M.; Salinas, Rene A.

    2008-01-01

    In this article we provide two detailed examples of how we incorporate biological examples into two mathematics courses: Linear Algebra and Ordinary Differential Equations. We use Leslie matrix models to demonstrate the biological properties of eigenvalues and eigenvectors. For Ordinary Differential Equations, we show how using a logistic growth…
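    A small sketch of the Leslie-matrix demonstration (hypothetical fecundity and survival rates): the dominant eigenvalue gives the asymptotic growth rate, and its eigenvector the stable age distribution.

```python
import numpy as np

# Toy Leslie matrix: fecundities in the first row, survival probabilities
# on the subdiagonal. Values are illustrative, not from a real population.
L = np.array([[0.0, 1.5, 1.0],
              [0.6, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

w, V = np.linalg.eig(L)
k = np.argmax(w.real)
growth = w[k].real                   # dominant eigenvalue: long-run growth rate
stable = np.abs(V[:, k].real)
stable /= stable.sum()               # stable age distribution

# Any positive starting population converges toward this structure.
n = np.array([10.0, 0.0, 0.0])
for _ in range(100):
    n = L @ n
    n /= n.sum()
print(np.allclose(n, stable, atol=1e-6), growth > 1.0)
```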

  17. A Brief Historical Introduction to Determinants with Applications

    Science.gov (United States)

    Debnath, L.

    2013-01-01

    This article deals with a short historical introduction to determinants with applications to the theory of equations, geometry, multiple integrals, differential equations and linear algebra. Included are some properties of determinants with proofs, eigenvalues, eigenvectors and characteristic equations with examples of applications to simple…

  18. An algorithm to compute the square root of 3x3 positive definite matrix

    International Nuclear Information System (INIS)

    Franca, L.P.

    1988-06-01

    An efficient closed form to compute the square root of a 3 x 3 positive definite matrix is presented. The derivation employs the Cayley-Hamilton theorem, avoiding the calculation of eigenvectors. We show that evaluation of one eigenvalue of the square root matrix is needed and cannot be circumvented. The algorithm is robust and efficient. (author) [pt
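    The paper's closed form deliberately avoids eigenvectors; as a reference point, here is the standard eigendecomposition route to the principal square root that such a closed form replaces (random SPD matrix for illustration):

```python
import numpy as np

# Build a random 3x3 symmetric positive definite matrix.
rng = np.random.default_rng(3)
B = rng.normal(size=(3, 3))
M = B @ B.T + 3 * np.eye(3)

# Principal square root via eigendecomposition: M = V diag(w) V^T.
w, V = np.linalg.eigh(M)
S = V @ np.diag(np.sqrt(w)) @ V.T
print(np.allclose(S @ S, M))
```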

  19. Lineární algebra ukrytá v internetovém vyhledávači Google [Linear algebra hidden in the Google internet search engine]

    Czech Academy of Sciences Publication Activity Database

    Brandts, J.; Křížek, Michal

    2007-01-01

    Roč. 52, č. 3 (2007), s. 195-204 ISSN 0032-2423 R&D Projects: GA MŠk 1P05ME749 Institutional research plan: CEZ:AV0Z10190503 Keywords : data structures * teleportation matrix * eigenvalues and eigenvectors Subject RIV: BA - General Mathematics
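    The linear algebra referred to in the title is the PageRank computation: the ranking vector is the principal eigenvector of a column-stochastic "teleportation" (Google) matrix. A toy 4-page sketch with the classical damping factor 0.85 (the link graph is invented for illustration):

```python
import numpy as np

# Hypothetical 4-page web graph: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85

H = np.zeros((n, n))
for page, outs in links.items():
    for q in outs:
        H[q, page] = 1.0 / len(outs)    # column-stochastic link matrix

G = d * H + (1 - d) / n * np.ones((n, n))   # Google (teleportation) matrix

r = np.ones(n) / n
for _ in range(100):                        # power iteration
    r = G @ r
print(np.round(r, 3), abs(r.sum() - 1.0) < 1e-9)
```

    Because G is column-stochastic and strictly positive, the Perron-Frobenius theorem guarantees a unique positive ranking vector, and power iteration converges to it.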

  20. Power Grid Modelling From Wind Turbine Perspective Using Principal Componenet Analysis

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2015-01-01

    In this study, we derive an eigenvector-based multivariate model of a power grid from the wind farm's standpoint using dynamic principal component analysis (DPCA). The main advantages of our model over previously developed models are being more realistic and having low complexity. We show that th...

  1. Perfect state transfer in unitary Cayley graphs over local rings

    Directory of Open Access Journals (Sweden)

    Yotsanan Meemark

    2014-12-01

    In this work, using eigenvalues and eigenvectors of unitary Cayley graphs over finite local rings and elementary linear algebra, we characterize the local rings that allow perfect state transfer (PST) to occur in their unitary Cayley graphs. Moreover, we present some further developments when $R$ is a product of local rings.

  2. A computational approach for a fluid queue driven by a truncated birth-death process

    NARCIS (Netherlands)

    Lenin, R.B.; Parthasarathy, P.R.

    1999-01-01

    In this paper, we consider a fluid queue driven by a truncated birth-death process with general birth and death rates. We find the equilibrium distribution of the content of the fluid buffer by computing the eigenvalues and eigenvectors of an associated real tridiagonal matrix. We provide efficient

  3. The non-linear Perron-Frobenius theorem : Perturbations and aggregation

    NARCIS (Netherlands)

    Dietzenbacher, E

    The dominant eigenvalue and the corresponding eigenvector (or Perron vector) of a non-linear eigensystem are considered. We discuss the effects upon these of perturbations and of aggregation of the underlying mapping. The results are applied to study the sensitivity of the outputs in a non-linear

  4. Complex Wedge-Shaped Matrices: A Generalization of Jacobi Matrices

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, Iveta; Plešinger, M.

    2015-01-01

    Roč. 487, 15 December (2015), s. 203-219 ISSN 0024-3795 R&D Projects: GA ČR GA13-06684S Keywords : eigenvalues * eigenvector * wedge-shaped matrices * generalized Jacobi matrices * band (or block) Krylov subspace methods Subject RIV: BA - General Mathematics Impact factor: 0.965, year: 2015

  5. Jacobi-Davidson methods for generalized MHD-eigenvalue problems

    NARCIS (Netherlands)

    J.G.L. Booten; D.R. Fokkema; G.L.G. Sleijpen; H.A. van der Vorst (Henk)

    1995-01-01

    A Jacobi-Davidson algorithm for computing selected eigenvalues and associated eigenvectors of the generalized eigenvalue problem $Ax = \lambda Bx$ is presented. In this paper the emphasis is put on the case where one of the matrices, say the B-matrix, is Hermitian positive definite. The

  6. Self-averaging correlation functions in the mean field theory of spin glasses

    International Nuclear Information System (INIS)

    Mezard, M.; Parisi, G.

    1984-01-01

    In the infinite range spin glass model, we consider the staggered spin σ_λ associated with a given eigenvector of the interaction matrix. We show that the thermal average of σ_λ² is a self-averaging quantity and we compute it

  7. Fiber crossing in human brain depicted with diffusion tensor MR imaging

    DEFF Research Database (Denmark)

    Wiegell, M.R.; Larsson, H.B.; Wedeen, V.J.

    2000-01-01

    Human white matter fiber crossings were investigated with use of the full eigenstructure of the magnetic resonance diffusion tensor. Intravoxel fiber dispersions were characterized by the plane spanned by the major and medium eigenvectors and depicted with three-dimensional graphics. This method...

  8. Simultaneous maximization of spatial and temporal autocorrelation in spatio-temporal data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2002-01-01

    . This is done by solving the generalized eigenproblem represented by the Rayleigh coefficient where is the dispersion of and is the dispersion of the difference between and spatially shifted. Hence, the new variates are obtained from the conjugate eigenvectors and the autocorrelations obtained are , i.e., high...

  9. The Modern Origin of Matrices and Their Applications

    Science.gov (United States)

    Debnath, L.

    2014-01-01

    This paper deals with the modern development of matrices, linear transformations, quadratic forms and their applications to geometry and mechanics, eigenvalues, eigenvectors and characteristic equations with applications. Included are the representations of real and complex numbers, and quaternions by matrices, and isomorphism in order to show…

  10. Direct structural parameter identification by modal test results

    Science.gov (United States)

    Chen, J.-C.; Kuo, C.-P.; Garba, J. A.

    1983-01-01

    A direct identification procedure is proposed to obtain the mass and stiffness matrices based on the test-measured eigenvalues and eigenvectors. The method is based on the theory of matrix perturbation, in which the correct mass and stiffness matrices are expanded in terms of analytical values plus a modification matrix. The simplicity of the procedure enables real-time operation during the structural testing.

  11. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  12. Feature extraction for classification in the data mining process

    NARCIS (Netherlands)

    Pechenizkiy, M.; Puuronen, S.; Tsymbal, A.

    2003-01-01

    Dimensionality reduction is a very important step in the data mining process. In this paper, we consider feature extraction for classification tasks as a technique to overcome problems occurring because of "the curse of dimensionality". Three different eigenvector-based feature extraction approaches

  13. Using the Jacobi-Davidson method to obtain the dominant Lambda modes of a nuclear power reactor

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)]. E-mail: gverdu@iqn.upv.es; Ginestar, D. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Miro, R. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Vidal, V. [Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)

    2005-07-15

    The Jacobi-Davidson method is a modification of the Davidson method, which has been shown to be very effective for computing the dominant eigenvalues and their corresponding eigenvectors of a large and sparse matrix. This method has been used to compute the dominant Lambda modes of two configurations of the Cofrentes nuclear power reactor, proving to be a quite effective method, especially for perturbed configurations.

  14. Seismic network based detection, classification and location of volcanic tremors

    Science.gov (United States)

    Nikolai, S.; Soubestre, J.; Seydoux, L.; de Rosny, J.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.

    2017-12-01

    Volcanic tremors constitute an important attribute of volcanic unrest in many volcanoes, and their detection and characterization are a challenging issue of volcano monitoring. The main goal of the present work is to develop a network-based method to automatically classify volcanic tremors, to locate their sources and to estimate the associated wave speed. The method is applied to four and a half years of seismic data continuously recorded by 19 permanent seismic stations in the vicinity of the Klyuchevskoy volcanic group (KVG) in Kamchatka (Russia), where five volcanoes were erupting during the considered time period. The method is based on the analysis of eigenvalues and eigenvectors of the daily array covariance matrix. As a first step, following Seydoux et al. (2016), the most coherent signals corresponding to dominating tremor sources are detected based on the width of the covariance matrix eigenvalue distribution. With this approach, the volcanic tremors of the two volcanoes known as most active during the considered period, Klyuchevskoy and Tolbachik, are efficiently detected. As a next step, we consider the array covariance matrix's first eigenvectors computed every day. The main hypothesis of our analysis is that these eigenvectors represent the principal component of the daily seismic wavefield and, for days with tremor activity, characterize the dominant tremor sources. These first eigenvectors can therefore be used as network-based fingerprints of tremor sources. A clustering process is developed to analyze this collection of first eigenvectors, using the correlation coefficient as a measure of their similarity. We then locate tremor sources based on cross-correlation amplitudes. We characterize seven tremor sources associated with different periods of activity of four volcanoes: Tolbachik, Klyuchevskoy, Shiveluch, and Kizimen. The developed method does not require a priori knowledge, is fully automatic, and the database of network-based tremor fingerprints
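    The fingerprint idea can be sketched on synthetic data (the station count matches the abstract, but the records, gains and noise levels below are entirely made up): the first eigenvector of each day's station covariance matrix is extracted, and two days are compared by the correlation coefficient of their eigenvectors.

```python
import numpy as np

# Toy sketch (assumed data, not the Klyuchevskoy records): use the first
# eigenvector of each day's array covariance matrix as a network-based
# fingerprint, and compare fingerprints by correlation coefficient.
rng = np.random.default_rng(2)
n_stations, n_samples = 19, 2000

def first_eigenvector(records):
    """Principal eigenvector of the station-by-station covariance matrix."""
    C = np.cov(records)              # n_stations x n_stations
    w, V = np.linalg.eigh(C)         # ascending eigenvalues
    return V[:, -1]                  # eigenvector of the largest eigenvalue

gains = rng.uniform(0.5, 2.0, n_stations)    # station-dependent amplitudes
def day_of_tremor():
    """One day of records: a common source seen through fixed station gains."""
    source = rng.standard_normal(n_samples)
    noise = 0.1 * rng.standard_normal((n_stations, n_samples))
    return np.outer(gains, source) + noise

v1 = first_eigenvector(day_of_tremor())
v2 = first_eigenvector(day_of_tremor())
# sign of an eigenvector is arbitrary, so compare absolute correlation
similarity = abs(np.corrcoef(v1, v2)[0, 1])
```

    Days driven by the same source pattern yield highly correlated first eigenvectors, which is what makes them usable as cluster fingerprints.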

  15. Random matrix approach to cross correlations in financial data

    Science.gov (United States)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis" - a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-,λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound displays systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
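    The null-hypothesis test described here can be sketched numerically. For N mutually uncorrelated series of length L, the Marchenko-Pastur result gives the RMT bounds λ± = (1 ± √(N/L))² for the correlation-matrix eigenvalues; the sizes below are made up and much smaller than the stock databases in the paper:

```python
import numpy as np

# RMT "null hypothesis": eigenvalues of the correlation matrix of
# uncorrelated series should fall within [lambda_minus, lambda_plus].
rng = np.random.default_rng(3)
N, L = 100, 2000                       # illustrative "stocks" x "returns"
Q = L / N
lam_minus = (1 - np.sqrt(1 / Q)) ** 2
lam_plus = (1 + np.sqrt(1 / Q)) ** 2

returns = rng.standard_normal((N, L))  # mutually uncorrelated time series
C = np.corrcoef(returns)               # empirical cross-correlation matrix
eigvals = np.linalg.eigvalsh(C)

# fraction of eigenvalues inside the RMT band (small margin for finite-N edges)
inside = np.mean((eigvals > 0.95 * lam_minus) & (eigvals < 1.05 * lam_plus))
```

    With real returns, the eigenvalues that escape this band (e.g. the market mode) are exactly the ones whose eigenvectors carry sector and market-wide structure.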

  16. A comparison of working in small-scale and large-scale nursing homes: A systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Vermeerbergen, Lander; Van Hootegem, Geert; Benders, Jos

    2017-02-01

    Ongoing shortages of care workers, together with an ageing population, make it of utmost importance to increase the quality of working life in nursing homes. Since the 1970s, normalised and small-scale nursing homes have been increasingly introduced to provide care in a family and homelike environment, potentially providing a richer work life for care workers as well as improved living conditions for residents. 'Normalised' refers to the opportunities given to residents to live in a manner as close as possible to the everyday life of persons not needing care. The study purpose is to provide a synthesis and overview of empirical research comparing the quality of working life - together with related work and health outcomes - of professional care workers in normalised small-scale nursing homes as compared to conventional large-scale ones. A systematic review of qualitative and quantitative studies. A systematic literature search (April 2015) was performed using the electronic databases Pubmed, Embase, PsycInfo, CINAHL and Web of Science. References and citations were tracked to identify additional, relevant studies. We identified 825 studies in the selected databases. After checking the inclusion and exclusion criteria, nine studies were selected for review. Two additional studies were selected after reference and citation tracking. Three studies were excluded after requesting more information on the research setting. The findings from the individual studies suggest that levels of job control and job demands (all but "time pressure") are higher in normalised small-scale homes than in conventional large-scale nursing homes. Additionally, some studies suggested that social support and work motivation are higher, while risks of burnout and mental strain are lower, in normalised small-scale nursing homes. Other studies found no differences or even opposing findings. The studies reviewed showed that these inconclusive findings can be attributed to care workers in some

  17. Effective collateral circulation may indicate improved perfusion territory restoration after carotid endarterectomy.

    Science.gov (United States)

    Lin, Tianye; Lai, Zhichao; Lv, Yuelei; Qu, Jianxun; Zuo, Zhentao; You, Hui; Wu, Bing; Hou, Bo; Liu, Changwei; Feng, Feng

    2018-02-01

    To investigate the relationship between the level of collateral circulation and perfusion territory normalisation after carotid endarterectomy (CEA). This study enrolled 22 patients with severe carotid stenosis that underwent CEA and 54 volunteers without significant carotid stenosis. All patients were scanned with ASL and t-ASL within 1 month before and 1 week after CEA. Collateral circulation was assessed on preoperative ASL images based on the presence of ATA. The postoperative flow territories were considered as back to normal if they conformed to the perfusion territory map in a healthy population. Neuropsychological tests were performed on patients before and within 7 days after surgery. ATA-based collateral score assessed on preoperative ASL was significantly higher in the flow territory normalisation group (n=11, 50 %) after CEA (P mean differences+2SD among control (MMSE=1.35, MOCA=1.02)]. This study demonstrated that effective collateral flow in carotid stenosis patients was associated with normalisation of t-ASL perfusion territory after CEA. The perfusion territory normalisation group tends to have more cognitive improvement after CEA. • Evaluation of collaterals before CEA is helpful for avoiding ischaemia during clamping. • There was good agreement on ATA-based ASL collateral grading. • Perfusion territories in carotid stenosis patients are altered. • Patients have better collateral circulation with perfusion territory back to normal. • MMSE and MOCA test scores improved more in the territory normalisation group.

  18. Analysis of experimental data: The average shape of extreme wave forces on monopile foundations and the NewForce model

    DEFF Research Database (Denmark)

    Schløer, Signe; Bredmose, Henrik; Ghadirian, Amin

    2017-01-01

    Experiments with a stiff pile subjected to extreme wave forces typical of offshore wind farm storm conditions are considered. The exceedance probability curves of the nondimensional force peaks and crest heights are analysed. The average force time histories normalised with their peak values are compared across the sea states. It is found that the force shapes show a clear similarity when grouped after the values of the normalised peak force, F/(ρghR²), normalised depth h/(gT_p²), and presented in a normalised time scale t/T_a. For the largest force events, slamming can be seen as a distinct 'hat'... For more nonlinear wave shapes, higher order terms have to be considered in order for the NewForce model to be able to predict the expected shapes.

  19. Normal levels of total body sodium and chlorine by neutron activation analysis

    International Nuclear Information System (INIS)

    Kennedy, N.S.J.; Eastell, R.; Smith, M.A.; Tothill, P.

    1983-01-01

    In vivo neutron activation analysis was used to measure total body sodium and chlorine in 18 male and 18 female normal adults. Corrections for body size were developed. Normalisation factors were derived which enable the prediction of the normal levels of sodium and chlorine in a subject. The coefficient of variation of normalised sodium was 5.9% in men and 6.9% in women, and of normalised chlorine 9.3% in men and 5.5% in women. In the range examined (40-70 years) no significant age dependence was observed for either element. Total body sodium was correlated with total body chlorine and total body calcium. Sodium excess, defined as the amount of body sodium in excess of that associated with chlorine, also correlated well with total body calcium. In females there was a mean annual loss of sodium excess of 1.2% after the menopause, similar to the loss of calcium. (author)

  20. A reference frame for blood volume in children and adolescents

    Directory of Open Access Journals (Sweden)

    Donckerwolcke Raymond

    2006-02-01

    Full Text Available Abstract Background Our primary purpose was to determine the normal range and variability of blood volume (BV) in healthy children, in order to provide reference values during childhood and adolescence. Our secondary aim was to correlate these vascular volumes to body size parameters and pubertal stages, in order to determine the best normalisation parameter. Methods Plasma volume (PV) and red cell volume (RCV) were measured and the F-cell ratio was calculated in 77 children with idiopathic nephrotic syndrome in drug-free remission (mean age, 9.8 ± 4.6 y). BV was calculated as the sum of PV and RCV. Due to the dependence of these values on age, size and sex, all data were normalised for body size parameters. Results BV normalised for lean body mass (LBM) did not differ significantly by sex (p …). Conclusion LBM was the anthropometric index most closely correlated to vascular fluid volumes, independent of age, gender and pubertal stage.

  1. Asymptotic Poisson distribution for the number of system failures of a monotone system

    International Nuclear Information System (INIS)

    Aven, Terje; Haukis, Harald

    1997-01-01

    It is well known that for highly available monotone systems, the time to the first system failure is approximately exponentially distributed. Various normalising factors can be used as the parameter of the exponential distribution to ensure the asymptotic exponentiality. More generally, it can be shown that the number of system failures is asymptotically Poisson distributed. In this paper we study the performance of some of the normalising factors by using Monte Carlo simulation. The results show that the exponential/Poisson distribution gives in general very good approximations for highly available components. The asymptotic failure rate of the system gives the best results when the process is in steady state, whereas other normalising factors seem preferable when the process is not in steady state. From a computational point of view the asymptotic system failure rate is most attractive.

  2. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity...

  3. Digital Particle Image Velocimetry: Partial Image Error (PIE)

    International Nuclear Information System (INIS)

    Anandarajah, K; Hargrave, G K; Halliwell, N A

    2006-01-01

    This paper quantifies the errors due to partial imaging of seeding particles which occur at the edges of interrogation regions in Digital Particle Image Velocimetry (DPIV). Hitherto, in the scientific literature the effect of these partial images has been assumed to be negligible. The results show that the error is significant even at a commonly used interrogation region size of 32 x 32 pixels. If correlation of interrogation region sizes of 16 x 16 pixels and smaller is attempted, the error which occurs can preclude meaningful results being obtained. In order to reduce the error, normalisation of the correlation peak values is necessary. The paper introduces Normalisation by Signal Strength (NSS) as the preferred means of normalisation for optimum accuracy. In addition, it is shown that NSS increases the dynamic range of DPIV.
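    The benefit of normalising correlation peaks can be sketched in a few lines. The example below does not reproduce the paper's NSS weighting; it shows the generic energy normalisation that makes a correlation peak independent of image intensity (window contents are made up):

```python
import numpy as np

# Why normalise a correlation peak: the raw value scales with intensity,
# the energy-normalised value does not (it is bounded by 1).
rng = np.random.default_rng(4)
window = rng.random((16, 16))       # an interrogation region
bright = 5.0 * window               # same pattern at higher intensity

def raw_peak(a, b):
    """Unnormalised correlation value at zero displacement."""
    return float(np.sum(a * b))

def normalised_peak(a, b):
    """Correlation divided by the geometric mean of the signal energies."""
    return raw_peak(a, b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
```

    For identical patterns, `raw_peak` grows with brightness while `normalised_peak` stays exactly 1, so normalised values from different interrogation regions can be compared on a common scale.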

  4. The effects of induction hardening on wear properties of AISI 4140 steel in dry sliding conditions

    International Nuclear Information System (INIS)

    Totik, Y.; Sadeler, R.; Altun, H.; Gavgali, M.

    2002-01-01

    Wear behaviour of induction hardened AISI 4140 steel was evaluated under dry sliding conditions. Specimens were induction hardened at 1000 Hz for 6, 10, 14, 18, 27 s, respectively, in the inductor which was a three-turn coil with a coupling distance of 2.8 mm. Normalised and induction hardened specimens were fully characterised before and after the wear testing using hardness, profilometer, scanning electron microscopy and X-ray diffraction. The wear tests using a pin-on-disc machine showed that the induction hardening treatments improved the wear behaviour of AISI 4140 steel specimens compared to normalised AISI 4140 steel as a result of residual stresses and hardened surfaces. The wear coefficients in normalised specimens are greater than that in the induction hardened samples. The lowest coefficient of the friction was obtained in specimens induction-hardened at 875 deg. C for 27 s

  5. The effects of induction hardening on wear properties of AISI 4140 steel in dry sliding conditions

    Energy Technology Data Exchange (ETDEWEB)

    Totik, Y.; Sadeler, R.; Altun, H.; Gavgali, M

    2002-02-15

    Wear behaviour of induction hardened AISI 4140 steel was evaluated under dry sliding conditions. Specimens were induction hardened at 1000 Hz for 6, 10, 14, 18, 27 s, respectively, in the inductor which was a three-turn coil with a coupling distance of 2.8 mm. Normalised and induction hardened specimens were fully characterised before and after the wear testing using hardness, profilometer, scanning electron microscopy and X-ray diffraction. The wear tests using a pin-on-disc machine showed that the induction hardening treatments improved the wear behaviour of AISI 4140 steel specimens compared to normalised AISI 4140 steel as a result of residual stresses and hardened surfaces. The wear coefficients in normalised specimens are greater than that in the induction hardened samples. The lowest coefficient of the friction was obtained in specimens induction-hardened at 875 deg. C for 27 s.

  6. Estimation and calibration of observation impact signals using the Lanczos method in NOAA/NCEP data assimilation system

    Directory of Open Access Journals (Sweden)

    M. Wei

    2012-09-01

    Full Text Available Despite the tremendous progress that has been made in data assimilation (DA methodology, observing systems that reduce observation errors, and model improvements that reduce background errors, the analyses produced by the best available DA systems are still different from the truth. Analysis error and error covariance are important since they describe the accuracy of the analyses, and are directly related to the future forecast errors, i.e., the forecast quality. In addition, analysis error covariance is critically important in building an efficient ensemble forecast system (EFS.

    Estimating analysis error covariance in an ensemble-based Kalman filter DA is straightforward, but it is challenging in variational DA systems, which have been in operation at most NWP (Numerical Weather Prediction centers. In this study, we use the Lanczos method in the NCEP (the National Centers for Environmental Prediction Gridpoint Statistical Interpolation (GSI DA system to look into other important aspects and properties of this method that were not exploited before. We apply this method to estimate the observation impact signals (OIS, which are directly related to the analysis error variances. It is found that the smallest eigenvalue of the transformed Hessian matrix converges to one as the number of minimization iterations increases. When more observations are assimilated, the convergence becomes slower and more eigenvectors are needed to retrieve the observation impacts. It is also found that the OIS over data-rich regions can be represented by the eigenvectors with dominant eigenvalues.

    Since only a limited number of eigenvectors can be computed due to computational expense, the OIS is severely underestimated, and the analysis error variance is consequently overestimated. It is found that the mean OIS values for temperature and wind components at typical model levels are increased by about 1.5 times when the number of eigenvectors is doubled
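    The truncation effect described above can be reproduced with a toy symmetric matrix standing in for the transformed Hessian (the sizes and spectrum below are illustrative; scipy's `eigsh` wraps an implicitly restarted Lanczos iteration):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Toy sketch: a Lanczos-type solver returns only the leading eigenpairs,
# so any quantity expanded in eigenvectors is truncated at k terms.
rng = np.random.default_rng(5)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))       # random orthogonal basis
w = np.concatenate([[50.0, 20.0, 10.0], np.ones(n - 3)])
H = (U * w) @ U.T                   # symmetric: 3 dominant eigenvalues + bulk at 1

vals, vecs = eigsh(H, k=3, which='LM')   # Lanczos: 3 largest eigenpairs
captured = vals.sum()                    # part of the spectrum retrieved with k=3
```

    Doubling `k` captures more of the spectrum, which mirrors the paper's observation that estimated observation impacts grow as more eigenvectors are computed.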

  7. Gait characteristics under different walking conditions: Association with the presence of cognitive impairment in community-dwelling older people.

    Directory of Open Access Journals (Sweden)

    Anne-Marie De Cock

    Full Text Available Gait characteristics measured at usual pace may allow profiling in patients with cognitive problems. The influence of age, gender, leg length, modified speed or dual tasking is unclear. Cross-sectional analysis was performed on a data registry containing demographic, physical and spatial-temporal gait parameters recorded in five walking conditions with a GAITRite® electronic carpet in community-dwelling older persons with memory complaints. Four cognitive stages were studied: cognitively healthy individuals, patients with mild cognitive impairment, patients with mild dementia and patients with advanced dementia. The association between spatial-temporal gait characteristics and cognitive stages was the most prominent: in the entire study population, using gait speed, steps per meter (a translation of mean step length), swing time variability, normalised gait speed (corrected for leg length) and normalised steps per meter at all five walking conditions; in the 50-to-70 years old participants, applying step width at fast pace and steps per meter at usual pace; in the 70-to-80 years old persons, using gait speed and normalised gait speed at usual pace, fast pace, animal walk and counting walk, or steps per meter and normalised steps per meter at all five walking conditions; in over-80 years old participants, using gait speed, normalised gait speed, steps per meter and normalised steps per meter at fast pace and animal dual-task walking. Multivariable logistic regression analysis adjusted for gender predicted, in two compiled models, the presence of dementia or cognitive impairment with acceptable accuracy in persons with memory complaints. Gait parameters in multiple walking conditions adjusted for age, gender and leg length showed a significant association with cognitive impairment. This study suggested that multifactorial gait analysis could be more informative than gait analysis with only one test or one variable. Using this type of gait analysis in clinical practice

  8. Homogenisation of a Wigner-Seitz cell in two group diffusion theory

    International Nuclear Information System (INIS)

    Allen, F.R.

    1968-02-01

    Two group diffusion theory is used to develop a theory for the homogenisation of a Wigner-Seitz cell, neglecting azimuthal flux components of higher order than dipoles. An iterative method of solution is suggested for linkage with reactor calculations. The limiting theory for no cell leakage leads to cell edge flux normalisation of cell parameters, the current design method for SGHW reactor design calculations. Numerical solutions are presented for a cell-plus-environment model with monopoles only. The results demonstrate the exact theory in comparison with the approximate recipes of normalisation to cell edge, moderator average, or cell average flux levels. (author)

  9. University of Glasgow at WebCLEF 2005

    DEFF Research Database (Denmark)

    Macdonald, C.; Plachouras, V.; He, B.

    2006-01-01

    We participated in the WebCLEF 2005 monolingual task. In this task, a search system aims to retrieve relevant documents from a multilingual corpus of Web documents from Web sites of European governments. Both the documents and the queries are written in a wide range of European languages... Our approach considers three document fields, namely content, title, and anchor text of incoming hyperlinks. We use a technique called per-field normalisation, which extends the Divergence From Randomness (DFR) framework, to normalise the term frequencies and to combine them across the three fields. We also employ the length of the URL path of Web...

  10. Technical normalization in the geoinformatics branch

    Directory of Open Access Journals (Sweden)

    Bronislava Horáková

    2006-09-01

    Full Text Available A basic principle of technical normalisation is to support market development by creating unified technical rules for all concerned subjects. The information and communication technology (ICT) industry is characterised by certain specific features in contrast to traditional industry. These features bring new demands to the normalisation domain, mainly the flexibility needed to reflect the rapidly developing ICT market in an elastic way. The goal of the paper is to provide a comprehensive overview of the current process of technical normalisation in the geoinformatics branch.

  11. COPDIRC - calculation of particle deposition in reactor coolants

    International Nuclear Information System (INIS)

    Reeks, M.W.

    1982-06-01

    A description is given of a computer code COPDIRC intended for the calculation of the deposition of particulate onto smooth, perfectly sticky surfaces in a gas cooled reactor coolant. The deposition is assumed to be limited by transport in the boundary layer adjacent to the depositing surface. This implies that the deposition velocity, normalised with respect to the local friction velocity, is an almost universal function of the normalised particle relaxation time. Deposition is assumed similar to deposition in an equivalent smooth, perfectly absorbing pipe. The deposition is calculated using two models. (author)

  12. Testing of Laterally Loaded Rigid Piles with Applied Overburden Pressure

    DEFF Research Database (Denmark)

    Sørensen, Søren Peder Hyldal; Ibsen, Lars Bo; Foglia, Aligi

    2015-01-01

    Small-scale tests have been conducted to investigate the quasi-static behaviour of laterally loaded, non-slender piles installed in cohesionless soil. For that purpose, a new and innovative test setup has been developed. The tests have been conducted in a pressure tank such that it was possible to apply an overburden pressure to the soil. As a result of that, the traditional uncertainties related to low effective stresses for small-scale tests have been avoided. A normalisation criterion for laterally loaded piles has been proposed based on dimensional analysis. The test results using the novel testing method have been compared with the use of the normalisation criterion.

  13. Impact of particle density and initial volume on mathematical compression models

    DEFF Research Database (Denmark)

    Sonnergaard, Jørn

    2000-01-01

    In the calculation of the coefficients of compression models for powders, either the initial volume or the particle density is introduced as a normalising factor. The influence of these normalising factors is, however, widely different on coefficients derived from the Kawakita, Walker and Heckel equations. The problems are illustrated by investigations on compaction profiles of 17 materials with different molecular structures and particle densities. It is shown that the particle density of materials with covalent bonds in the Heckel model acts as a key parameter with a dominating influence...

  14. E-IMPACT - A ROBUST HAZARD-BASED ENVIRONMENTAL IMPACT ASSESSMENT APPROACH FOR PROCESS INDUSTRIES

    Directory of Open Access Journals (Sweden)

    KHANDOKER A. HOSSAIN

    2008-04-01

    Full Text Available This paper proposes a hazard-based environmental impact assessment approach (E-Impact) for evaluating the environmental impact during process design and retrofit stages. E-Impact replaces the normalisation step of the conventional impact assessment phase. This approach compares the impact scores for different options and assigns a relative score to each option. This eliminates the complexity of the normalisation step in the evaluation phase. The applicability of the E-Impact has been illustrated through a case study of solvent selection in an acrylic acid manufacturing plant. E-Impact is used in conjunction with the Aspen-HYSYS process simulator to develop mass and heat balance data.

  15. β-glucuronidase use as a single internal control gene may confound analysis in FMR1 mRNA toxicity studies.

    Science.gov (United States)

    Kraan, Claudine M; Cornish, Kim M; Bui, Quang M; Li, Xin; Slater, Howard R; Godler, David E

    2018-01-01

    Relationships between Fragile X Mental Retardation 1 (FMR1) mRNA levels in blood and intragenic FMR1 CGG triplet expansions support the pathogenic role of RNA gain-of-function toxicity in premutation (PM: 55-199 CGGs) related disorders. Real-time PCR (RT-PCR) studies reporting these findings normalised FMR1 mRNA level to a single internal control gene called β-glucuronidase (GUS). This study evaluated FMR1 mRNA-CGG correlations in 33 PM and 33 age- and IQ-matched control females using three normalisation strategies in peripheral blood mononuclear cells (PBMCs): (i) GUS as a single internal control; (ii) the mean of GUS, Eukaryotic Translation Initiation Factor 4A2 (EIF4A2) and succinate dehydrogenase complex flavoprotein subunit A (SDHA); and (iii) the mean of EIF4A2 and SDHA (with no contribution from GUS). GUS mRNA levels normalised to the mean of EIF4A2 and SDHA mRNA levels and the EIF4A2/SDHA ratio were also evaluated. FMR1 mRNA level normalised to the mean of EIF4A2 and SDHA mRNA levels, with no contribution from GUS, showed the most significant correlation with CGG size and the greatest difference between PM and control groups (p = 10^-11). Only 15% of FMR1 mRNA PM results exceeded the maximum control value when normalised to GUS, compared with over 42% when normalised to the mean of EIF4A2 and SDHA mRNA levels. Neither GUS mRNA level normalised to the mean RNA levels of EIF4A2 and SDHA, nor to the EIF4A2/SDHA ratio, was correlated with CGG size. However, greater variability in GUS mRNA levels was observed for both PM and control females across the full range of CGG repeats as compared to the EIF4A2/SDHA ratio. In conclusion, normalisation with multiple control genes, excluding GUS, can improve assessment of the biological significance of FMR1 mRNA-CGG size relationships.
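    The three normalisation strategies compared in this study reduce to simple arithmetic on relative expression levels. The sketch below uses entirely made-up expression values (not the study's data) to show how the choice of reference changes the normalised result:

```python
# Made-up relative expression levels for a single illustrative sample.
expr = {"FMR1": 8.0, "GUS": 2.0, "EIF4A2": 4.0, "SDHA": 4.0}

def mean(values):
    return sum(values) / len(values)

# (i) single internal control gene
norm_single = expr["FMR1"] / expr["GUS"]
# (ii) mean of three control genes, GUS included
norm_three = expr["FMR1"] / mean([expr["GUS"], expr["EIF4A2"], expr["SDHA"]])
# (iii) mean of two control genes, no contribution from GUS
norm_two = expr["FMR1"] / mean([expr["EIF4A2"], expr["SDHA"]])
```

    With these numbers the three strategies give 4.0, 2.4 and 2.0 respectively; a variable reference gene such as GUS shifts the normalised level in every sample it is part of, which is the study's argument for excluding it.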

  16. Developments in national and international regulation in the field of ''corrosion protection of buried pipes''; Entwicklung im Bereich nationaler und internationaler Regelsetzung im Fachgebiet ''Korrosionsschutz erdverlegter Rohrleitungen''

    Energy Technology Data Exchange (ETDEWEB)

    Schoeneich, H.G. [E.ON Ruhrgas AG, Essen (Germany). Kompetenz-Center Korrosionsschutz

    2007-06-15

    This article summarizes the most important national and international rules for cathodic anti-corrosion protection of buried installations. The codes examined are those published by DIN (German Standardization Institute), the DVGW (German Association of Gas and Water Engineers) and AfK (Corrosion Protection Work Group). DIN publishes the results achieved by ISO (International Standardisation Organisation), CEN (Comité Européen de Normalisation) and CENELEC (Comité Européen de Normalisation Électrotechnique). The guidelines published by CEOCOR (European Committee for the Study of Corrosion and Protection of Pipes) are also briefly examined. Details of technical significance of a number of selected standards and revision projects are also stated and discussed. (orig.)

  17. Transitional Justice

    DEFF Research Database (Denmark)

    Gissel, Line Engbo

    This presentation builds on an earlier published article, 'Contemporary Transitional Justice: Normalising a Politics of Exception'. It argues that the field of transitional justice has undergone a shift in conceptualisation and hence practice. Transitional justice is presently understood to be the provision of ordinary criminal justice in contexts of exceptional political transition.

  18. Bibliometric indicators of young authors in astrophysics

    DEFF Research Database (Denmark)

    Havemann, Frank; Larsen, Birger

    2015-01-01

    We test 16 bibliometric indicators with respect to their validity at the level of the individual researcher by estimating their power to predict later successful researchers. We compare the indicators of a sample of astrophysics researchers who later co-authored highly cited papers before their first landmark paper with the distributions of these indicators over a random control group of young authors in astronomy and astrophysics. We find that field and citation-window normalisation substantially improves the predicting power of citation indicators. The sum of citation numbers normalised...

  19. Interaktion mellem warfarin og oral miconazol-gel

    DEFF Research Database (Denmark)

    Ogard, C G; Vestergaard, Henrik

    2000-01-01

    We report a case of a 76-year-old woman who had been taking warfarin for seven years because of relapsing deep venous thrombosis. Her daily maintenance dose was 5 mg. Monthly measurements of the international normalised ratio (INR) were stable between 2 and 3. She developed oral candidiasis and miconazole gel was prescribed. One week later she developed bleeding gums. Eight days later she was admitted to the hospital with haematuria. INR was > 10. Warfarin and the miconazole gel were withdrawn. She was treated with phytonadione. INR normalised after four days and she continued warfarin treatment. Caution should be exercised whenever the combination of warfarin and miconazole gel is prescribed.

  20. Mobility in Learning: The Feasibility of Encouraging Language Learning on Smartphones

    Directory of Open Access Journals (Sweden)

    Keith Barrs

    2011-09-01

    Full Text Available With normalised technology in language learning contexts there is an unprecedented opportunity to re-define the nature of learning. Traditional ideas of classroom-based learning are giving way to modern ideas of ‘24/7 anywhere, anytime’ learning which is accessed and managed in part or in whole by the learners themselves, primarily on mobile devices. This is a "work in progress" article detailing the initial stages of a study investigating the normalisation of smartphones in a language classroom in Japan.

  1. Complex correlation approach for high frequency financial data

    Science.gov (United States)

    Wilinski, Mateusz; Ikeda, Yuichi; Aoyama, Hideaki

    2018-02-01

    We propose a novel approach that allows the calculation of a Hilbert transform based complex correlation for unevenly spaced data. This method is especially suitable for high frequency trading data, which are of a particular interest in finance. Its most important feature is the ability to take into account lead-lag relations on different scales, without knowing them in advance. We also present results obtained with this approach while working on Tokyo Stock Exchange intraday quotations. We show that individual sectors and subsectors tend to form important market components which may follow each other with small but significant delays. These components may be recognized by analysing the eigenvectors of the complex correlation matrix for Nikkei 225 stocks. Interestingly, sectorial components are also found in eigenvectors corresponding to the bulk eigenvalues, traditionally treated as noise.
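
    For evenly spaced series, the Hilbert-transform-based complex correlation underlying this approach can be sketched in a few lines of numpy. The functions below are illustrative stand-ins (the paper's actual estimator additionally handles unevenly spaced data); the key point is that the phase of the complex coefficient encodes the lead-lag relation.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (numpy-only equivalent of
    scipy.signal.hilbert): zero the negative frequencies and double
    the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def complex_correlation(x, y):
    """Complex correlation of two evenly spaced series; the phase of
    the result reflects the lead-lag relation between them."""
    zx = analytic_signal(x - x.mean())
    zy = analytic_signal(y - y.mean())
    num = np.vdot(zx, zy)  # conjugates the first argument
    return num / (np.linalg.norm(zx) * np.linalg.norm(zy))

# Two synthetic tones; y lags x by a phase of 0.5 rad
t = np.linspace(0, 20 * np.pi, 2000, endpoint=False)
x = np.sin(t)
y = np.sin(t - 0.5)

rho = complex_correlation(x, y)
```

    For this synthetic pair the modulus of `rho` is close to 1 (strong co-movement) and `np.angle(rho)` recovers the imposed 0.5 rad lag, without the lag being specified in advance.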

  2. Analysis of Heavy-Tailed Time Series

    DEFF Research Database (Denmark)

    Xie, Xiaolei

    This thesis is about analysis of heavy-tailed time series. We discuss tail properties of real-world equity return series and investigate the possibility that a single tail index is shared by all return series of actively traded equities in a market. Conditions for this hypothesis to be true are identified. We study the eigenvalues and eigenvectors of sample covariance and sample auto-covariance matrices of multivariate heavy-tailed time series, particularly time series of very high dimension. Asymptotic approximations of the eigenvalues and eigenvectors of such matrices are found and expressed in terms of the parameters of the dependence structure, among others. Furthermore, we study an importance sampling method for estimating rare-event probabilities of multivariate heavy-tailed time series generated by matrix recursion. We show that the proposed algorithm is efficient in the sense...

  3. Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm

    Science.gov (United States)

    Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.

    2016-05-01

    In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.
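
    The centralized power method that the decentralized scheme approximates can be sketched as follows. This is a generic numpy sketch, not the authors' algorithm: in the decentralized variant each matrix-vector product is computed via the averaging consensus protocol, which is the source of the inconsistency discussed above unless the consensus iterations are unlimited.

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Power iteration for the dominant eigenvector of a symmetric
    matrix. In the decentralized variant each product A @ v would be
    replaced by an averaging-consensus step across the network, exact
    only in the limit of infinitely many consensus iterations."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)  # renormalise to unit length
    return v

# Toy 2x2 stand-in for a sample covariance matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v = power_method(A)
lam = v @ A @ v  # Rayleigh quotient: estimate of the top eigenvalue
```

    The Rayleigh quotient converges to the largest eigenvalue, here (7 + √5)/2 for the toy matrix above.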

  4. On the structure of acceleration in turbulence

    DEFF Research Database (Denmark)

    Liberzon, A.; Lüthi, B.; Holzner, M.

    2012-01-01

    Acceleration and spatial velocity gradients are obtained simultaneously in an isotropic turbulent flow via three-dimensional particle tracking velocimetry. We observe two distinct populations of intense acceleration events: one in flow regions of strong strain and another in regions of strong vorticity. Geometrical alignments with respect to the vorticity vector and to the strain eigenvectors, as well as the curvature of Lagrangian trajectories and of streamlines, are studied in detail for the total acceleration and for its convective part. We discriminate the alignment features of total and convective acceleration statistics that are genuine features of turbulent nature from those of kinematic nature. We find pronounced alignment of acceleration with vorticity. The total and especially the convective acceleration are predominantly aligned at 45° with the most stretching and compressing eigenvectors of the rate-of-strain tensor...

  5. Pore Fluid Effects on Shear Modulus for Sandstones with Soft Anisotropy

    International Nuclear Information System (INIS)

    Berryman, J G

    2004-01-01

    A general analysis of poroelasticity for vertical transverse isotropy (VTI) shows that four eigenvectors are pure shear modes with no coupling to the pore-fluid mechanics. The remaining two eigenvectors are linear combinations of pure compression and uniaxial shear, both of which are coupled to the fluid mechanics. After reducing the problem to a 2x2 system, the analysis shows in a relatively elementary fashion how a poroelastic system with isotropic solid elastic frame, but with anisotropy introduced through the poroelastic coefficients, interacts with the mechanics of the pore fluid and produces shear dependence on fluid properties in the overall mechanical system. The analysis shows, for example, that this effect is always present (though sometimes small in magnitude) in the systems studied, and can be quite large (up to a definite maximum increase of 20 per cent) in some rocks, including Spirit River sandstone and Schuler-Cotton Valley sandstone.

  6. A Quantum Implementation Model for Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ammar Daskin

    2018-02-01

    Full Text Available The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over the conventional algorithms for the eigenvalue-related problems. Combining the quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.
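
    The observation that such learning rules flatten the eigenvalues while leaving the eigenvectors of the weight matrix fixed can be illustrated with a toy spectral update. The `flatten_step` map below is a hypothetical stand-in for the iterative learning formulas (not the paper's quantum model); it only requires a symmetric positive definite matrix.

```python
import numpy as np

# A symmetric positive definite "weight matrix" with distinct eigenvalues
W = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
vals, vecs = np.linalg.eigh(W)

def flatten_step(A):
    """One spectral update: rescale and compress the eigenvalue spread
    while reusing the eigenvectors unchanged (illustrative stand-in for
    the iterative learning formulas; assumes positive eigenvalues)."""
    lam, V = np.linalg.eigh(A)
    lam = lam / lam.max()   # rescale so the top eigenvalue is 1
    lam = lam ** 0.5        # compress ("flatten") the spread toward 1
    return V @ np.diag(lam) @ V.T

A = W.copy()
for _ in range(3):
    A = flatten_step(A)

new_vals, new_vecs = np.linalg.eigh(A)
```

    After a few steps the eigenvalue spread shrinks toward 1, while the eigenvectors, and hence the principal components, are preserved.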

  7. Deflation for inversion with multiple right-hand sides in QCD

    International Nuclear Information System (INIS)

    Stathopoulos, A; Abdel-Rehim, A M; Orginos, K

    2009-01-01

    Most calculations in lattice Quantum Chromodynamics (QCD) involve the solution of a series of linear systems of equations with exceedingly large matrices and a large number of right-hand sides. Iterative methods for these problems can be sped up significantly if we deflate approximations of appropriate invariant spaces from the initial guesses. Recently we have developed eigCG, a modification of the Conjugate Gradient (CG) method, which while solving a linear system can reuse a window of the CG vectors to compute eigenvectors almost as accurately as the Lanczos method. The number of approximate eigenvectors can increase as more systems are solved. In this paper we review some of the characteristics of eigCG and show how it helps remove the critical slowdown in QCD calculations. Moreover, we study scaling with lattice volume and an extension of the technique to nonsymmetric problems.
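
    The payoff of deflation can be illustrated with a generic sketch (not eigCG itself): seed CG with a Galerkin projection onto a few eigenvectors of the smallest eigenvalues, and the iteration count for the next right-hand side drops. Here the deflation vectors are taken to be exact eigenvectors of a small synthetic SPD matrix; eigCG would instead accumulate approximations of them as a by-product of earlier solves.

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=500):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(1)
n = 200
# Synthetic SPD matrix with three tiny eigenvalues that slow CG down
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-3, 2e-3, 5e-3], np.linspace(1.0, 2.0, n - 3)])
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)

# Deflation: start from the Galerkin solution in the span of the slow
# eigenvectors, so the initial residual has no component along them
V = Q[:, :3]
x0 = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

xp, iters_plain = cg(A, b, np.zeros(n))
xd, iters_deflated = cg(A, b, x0)
```

    The deflated start removes the ill-conditioned directions from the residual, so CG effectively sees the well-conditioned remainder of the spectrum and converges in far fewer iterations.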

  8. Symmetries of the second-difference matrix and the finite Fourier transform

    International Nuclear Information System (INIS)

    Aguilar, A.; Wolf, K.B.

    1979-01-01

    The finite Fourier transformation is well known to diagonalize the second-difference matrix and has been thus applied extensively to describe finite crystal lattices and electric networks. In setting out to find all transformations having this property, we obtain a multiparameter class of them. While permutations and unitary scaling of the eigenvectors constitute the trivial freedom of choice common to all diagonalization processes, the second-difference matrix has a larger symmetry group among whose elements we find the dihedral manifest symmetry transformations of the lattice. The latter are nevertheless sufficient for the unique specification of eigenvectors in various symmetry-adapted bases for the constrained lattice. The free symmetry parameters are shown to lead to a complete set of conserved quantities for the physical lattice motion. (author)

  9. An expert system in medical diagnosis

    International Nuclear Information System (INIS)

    Raboanary, R.; Raoelina Andriambololona; Soffer, J.; Raboanary, J.

    2001-01-01

    Health problems are still crucial in some countries, so much so that they become a major handicap in economic and social development. In order to address this, we have conceived an expert system, called MITSABO (meaning TO HEAL), to help physicians diagnose tropical diseases. By extending the database and the knowledge base, the application of the software can clearly be extended to more general areas. In our expert system, we used the concept of 'self organization' of a neural network based on the determination of the eigenvalues and the eigenvectors associated with the correlation matrix XX^T. The projection of the data on the two first eigenvectors gives a classification of the diseases which is used to obtain a first approximation to the diagnosis of the patient. This diagnosis is improved by using an expert system which is built from the knowledge base.
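
    The 'self organization' step described above amounts to projecting the data onto the two leading eigenvectors of XX^T. A minimal numpy sketch follows, with random stand-in data in place of the MITSABO knowledge base:

```python
import numpy as np

# Hypothetical feature matrix X: rows = symptom features, columns = cases
# (random stand-in data, not the medical knowledge base)
rng = np.random.default_rng(3)
X = rng.standard_normal((6, 40))
X -= X.mean(axis=1, keepdims=True)   # centre each feature

# Correlation-type matrix XX^T and its eigendecomposition
C = X @ X.T
vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order

# Project the cases onto the two leading eigenvectors: a 2-D map in
# which similar cases cluster, giving a first approximation to a
# classification of the diseases
top2 = vecs[:, -2:]
projection = top2.T @ X              # shape (2, n_cases)
```

    Because the two leading eigenvectors capture the directions of largest variance, nearby points in the 2-D projection correspond to cases with similar symptom profiles.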

  10. Coupling coefficients for tensor product representations of quantum SU(2)

    International Nuclear Information System (INIS)

    Groenevelt, Wolter

    2014-01-01

    We study tensor products of infinite dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.

  11. Coupling coefficients for tensor product representations of quantum SU(2)

    Science.gov (United States)

    Groenevelt, Wolter

    2014-10-01

    We study tensor products of infinite dimensional irreducible *-representations (not corepresentations) of the SU(2) quantum group. We obtain (generalized) eigenvectors of certain self-adjoint elements using spectral analysis of Jacobi operators associated to well-known q-hypergeometric orthogonal polynomials. We also compute coupling coefficients between different eigenvectors corresponding to the same eigenvalue. Since the continuous spectrum has multiplicity two, the corresponding coupling coefficients can be considered as 2 × 2-matrix-valued orthogonal functions. We compute explicitly the matrix elements of these functions. The coupling coefficients can be considered as q-analogs of Bessel functions. As a result we obtain several q-integral identities involving q-hypergeometric orthogonal polynomials and q-Bessel-type functions.

  12. Calculation of degenerated Eigenmodes with modified power method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Peng; Lee, Hyun Suk; Lee, Deok Jung [School of Mechanical and Nuclear Engineering, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2017-02-15

    The modified power method has been studied by many researchers to calculate the higher eigenmodes and accelerate the convergence of the fundamental mode. Its application to multidimensional problems may be unstable due to degenerated or near-degenerated eigenmodes. Complex eigenmode solutions are occasionally encountered in such cases, and the shapes of the corresponding eigenvectors may change during the simulation. These issues must be addressed for the successful implementation of the modified power method. Complex components are examined and an approximation method to eliminate the usage of complex numbers is provided. A technique to fix the eigenvector shapes is also provided. The performance of the methods for dealing with the aforementioned problems is demonstrated with two-dimensional and three-dimensional one-group homogeneous diffusion problems.

  13. Conduction mechanism studies on electron transfer of disordered system

    Institute of Scientific and Technical Information of China (English)

    徐慧; 宋祎璞; 李新梅

    2002-01-01

    Using the negative eigenvalue theory and the infinite order perturbation theory, a new method was developed to solve for the eigenvectors of disordered systems. The result shows that the eigenvectors change from extended states to localized states as the number of lattice sites and the degree of disorder of the system increase. When an electric field is applied, electrons transfer from one localized state to another, and this transfer induces the conductivity. The authors derive a formula for the electron conductivity and find that electrons hop between localized states whose energies are close to each other but whose positions differ greatly. At low temperature the disordered system exhibits a negative differential dependence of resistivity on temperature.

  14. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix pertaining to water quality variables (p = 22) of a small data set comprising 55 samples (stations) from which water samples were collected. Because in ecology and environmental sciences the data sets are invariably small owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis using Ward's method, which does not require any stringent distributional assumptions.
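
    The bootstrap design can be sketched as follows. The data here are synthetic with a built-in two-factor structure, not the study's water-quality matrix, and the error summary is a simple mean absolute deviation of bootstrap eigenvalues from the full-sample eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "environmental" data: 55 stations x 5 variables with a
# built-in two-factor structure (illustrative, not the study's data)
n_stations, n_vars = 55, 5
latent = rng.standard_normal((n_stations, 2))
mix = rng.standard_normal((2, n_vars))
X = latent @ mix + 0.3 * rng.standard_normal((n_stations, n_vars))

def pca_eigvals(data):
    """Eigenvalues of the correlation matrix, sorted largest first."""
    R = np.corrcoef(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def bootstrap_eigvals(data, sample_size, n_boot=100):
    """PCA eigenvalues from n_boot bootstrap samples of a given size."""
    out = np.empty((n_boot, data.shape[1]))
    for i in range(n_boot):
        idx = rng.integers(0, data.shape[0], size=sample_size)
        out[i] = pca_eigvals(data[idx])
    return out

full = pca_eigvals(X)
# Mean absolute deviation of bootstrap eigenvalues from the full-sample
# eigenvalues, for two bootstrap sample sizes
err20 = np.abs(bootstrap_eigvals(X, 20) - full).mean()
err50 = np.abs(bootstrap_eigvals(X, 50) - full).mean()
```

    Larger bootstrap samples yield eigenvalues that deviate less from the full-sample eigenstructure, which is the effect the study quantifies.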

  15. Introduction to the mathematics of inversion in remote sensing and indirect measurements

    CERN Document Server

    Twomey, S

    2013-01-01

    Developments in Geomathematics, 3: Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements focuses on the application of the mathematics of inversion in remote sensing and indirect measurements, including vectors and matrices, eigenvalues and eigenvectors, and integral equations. The publication first examines simple problems involving inversion, theory of large linear systems, and physical and geometric aspects of vectors and matrices. Discussions focus on geometrical view of matrix operations, eigenvalues and eigenvectors, matrix products, inverse of a matrix, transposition and rules for product inversion, and algebraic elimination. The manuscript then tackles the algebraic and geometric aspects of functions and function space and linear inversion methods, as well as the algebraic and geometric nature of constrained linear inversion, least squares solution, approximation by sums of functions, and integral equations. The text examines information content of indirect sensing m...

  16. High values of disorder-generated multifractals and logarithmically correlated processes

    International Nuclear Information System (INIS)

    Fyodorov, Yan V.; Giraud, Olivier

    2015-01-01

    In the introductory section of the article we give a brief account of recent insights into statistics of high and extreme values of disorder-generated multifractals following a recent work by the first author with P. Le Doussal and A. Rosso (FLR) employing a close relation between multifractality and logarithmically correlated random fields. We then substantiate some aspects of the FLR approach analytically for multifractal eigenvectors in the Ruijsenaars–Schneider ensemble (RSE) of random matrices introduced by E. Bogomolny and the second author by providing an ab initio calculation that reveals hidden logarithmic correlations at the background of the disorder-generated multifractality. In the rest of the article we investigate numerically a few representative models of that class, including the study of the highest component of multifractal eigenvectors in the Ruijsenaars–Schneider ensemble

  17. Centrality measures in temporal networks with time series analysis

    Science.gov (United States)

    Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun

    2017-05-01

    The study of identifying important nodes in networks has a wide application in different fields. However, the current researches are mostly based on static or aggregated networks. Recently, the increasing attention to networks with time-varying structure promotes the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to depict the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the problem of calculating eigenvector centrality is reduced to computing the eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network, the results of which show that our method is more efficient at discovering the important nodes than the common aggregating method.
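
    The eigenvector centrality that the supra-evolution approach generalises is, for a static network, just the principal eigenvector of the adjacency matrix, computable by power iteration. A minimal sketch on a toy graph (not the paper's temporal method):

```python
import numpy as np

def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality via power iteration: the centrality
    vector is the principal eigenvector of the adjacency matrix."""
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        x = adj @ x
        x /= np.linalg.norm(x)  # keep the iterate at unit length
    return x

# Small undirected example: node 0 is connected to all other nodes,
# nodes 1 and 2 are also connected to each other, node 3 is a leaf
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

c = eigenvector_centrality(A)
```

    The hub (node 0) receives the highest centrality, the symmetric pair (nodes 1 and 2) tie, and the leaf (node 3) scores lowest.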

  18. Finite-lattice form factors in free-fermion models

    International Nuclear Information System (INIS)

    Iorgov, N; Lisovyy, O

    2011-01-01

    We consider the general Z₂-symmetric free-fermion model on the finite periodic lattice, which includes as special cases the Ising model on the square and triangular lattices and the Zₙ-symmetric BBS τ⁽²⁾-model with n = 2. Translating Kaufman's fermionic approach to diagonalization of Ising-like transfer matrices into the language of Grassmann integrals, we determine the transfer matrix eigenvectors and observe that they coincide with the eigenvectors of a square lattice Ising transfer matrix. This allows us to find exact finite-lattice form factors of spin operators for the statistical model and the associated finite-length quantum chains, of which the most general is equivalent to the XY chain in a transverse field

  19. Calculations of transient fields in the Felix experiments at Argonne using null field integrated techniques

    International Nuclear Information System (INIS)

    Han, H.C.; Davey, K.R.; Turner, L.

    1985-08-01

    The transient eddy current problem is characteristically computationally intensive. The motivation for this research was to realize an efficient, accurate, solution technique involving small matrices via an eigenvalue approach. Such a technique is indeed realized and tested using the null field integral technique. Using smart (i.e., efficient, global) basis functions to represent unknowns in terms of a minimum number of unknowns, homogeneous eigenvectors and eigenvalues are first determined. The general excitatory response is then represented in terms of these eigenvalues/eigenvectors. Excellent results are obtained for the Argonne Felix cylinder experiments using a 4 x 4 matrix. Extension to the 3-D problem (short cylinder) is set up in terms of an 8 x 8 matrix

  20. Cross-correlation matrix analysis of Chinese and American bank stocks in subprime crisis

    International Nuclear Information System (INIS)

    Zhu Shi-Zhao; Li Xin-Li; Zhang Wen-Qing; Wang Bing-Hong; Nie Sen; Yu Gao-Feng; Han Xiao-Pu

    2015-01-01

    In order to study the universality of the interactions among different markets, we analyze the cross-correlation matrix of the price of the Chinese and American bank stocks. We then find that the stock prices of the emerging market are more correlated than those of the developed market. Considering that the values of the components for the eigenvector may be positive or negative, we analyze the differences between the two markets in combination with the endogenous and exogenous events which influence the financial markets. We find that the sparse pattern of components of eigenvectors beyond the threshold value shows no change in American bank stocks before and after the subprime crisis. However, it changes from sparse to dense for Chinese bank stocks. By using the threshold value to exclude the external factors, we simulate the interactions in financial markets. (paper)

  1. Tailoring three-point functions and integrability IV. Θ-morphism

    Energy Technology Data Exchange (ETDEWEB)

    Gromov, Nikolay [Department of Mathematics WC2R 2LS, King’s College London,London (United Kingdom); St. Petersburg INP,St. Petersburg (Russian Federation); Vieira, Pedro [Perimeter Institute for Theoretical Physics,Waterloo, Ontario N2L 2Y5 (Canada)

    2014-04-09

    We compute structure constants in N=4 SYM at one loop using Integrability. This requires having full control over the two loop eigenvectors of the dilatation operator for operators of arbitrary size. To achieve this, we develop an algebraic description called the Θ-morphism. In this approach we introduce impurities at each spin chain site, act with particular differential operators on the standard algebraic Bethe ansatz vectors and generate in this way higher loop eigenvectors. The final results for the structure constants take a surprisingly simple form, recently reported by us in the short note http://arxiv.org/abs/1202.4103. These are based on the tree level and one loop patterns together and also on some higher loop experiments involving simple operators.

  2. Predicting the impact of urban flooding using open data.

    Science.gov (United States)

    Tkachenko, Nataliya; Procter, Rob; Jarvis, Stephen

    2016-05-01

    This paper aims to explore whether there is a relationship between search patterns for flood risk information on the Web and how badly localities have been affected by flood events. We hypothesize that localities where people stay more actively informed about potential flooding experience less negative impact than localities where people make less effort to be informed. Being informed, of course, does not hold the waters back; however, it may stimulate (or serve as an indicator of) such resilient behaviours as timely use of sandbags, relocation of possessions from basements to upper floors and/or temporary evacuation from flooded homes to alternative accommodation. We make use of open data to test this relationship empirically. Our results demonstrate that although aggregated Web search reflects average rainfall patterns, its eigenvectors predominantly consist of locations with similar flood impacts during 2014-2015. These results are also consistent with statistically significant correlations of Web search eigenvectors with flood warning and incident reporting datasets.

  3. On the relationship between Gaussian stochastic blockmodels and label propagation algorithms

    International Nuclear Information System (INIS)

    Zhang, Junhao; Hu, Junfeng; Chen, Tongfei

    2015-01-01

    The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit weight of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in the principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks. (paper)

  4. Nature of complex time eigenvalues of the one speed transport equation in a homogeneous sphere

    International Nuclear Information System (INIS)

    Dahl, E.B.; Sahni, D.C.

    1990-01-01

    The complex time eigenvalues of the transport equation have been studied for one speed neutrons, scattered isotropically in a homogeneous sphere with vacuum boundary conditions. It is shown that the complex decay constants vary continuously with the radius of the sphere. Our earlier conjecture (Dahl and Sahni (1983-84)) regarding disjoint arcs is thus shown to be true. We also indicate that complex decay constants exist even for large assemblies, though with rapid oscillations in the corresponding eigenvectors. These modes cannot be predicted by the diffusion equation as this behaviour of the eigenvectors contradicts the assumption of 'slowly varying flux' needed to derive the diffusion approximation from the transport equation. For an infinite system, the existence of complex modes is related to the solution of a homogeneous equation. (author)

  5. A Slater parameter optimisation interface for the CIV3 atomic structure code and its possible use with the R-matrix close coupling collision code

    International Nuclear Information System (INIS)

    Fawcett, B.C.; Hibbert, A.

    1989-11-01

    Details are here provided of amendments to the atomic structure code CIV3 which allow the optional adjustment of Slater parameters and average energies of configurations so that they result in improved energy levels and eigenvectors. It is also indicated how, in principle, the resultant improved eigenvectors can be utilised by the R-matrix collision code, thus providing an optimised target for close coupling collision strength calculations. An analogous computational method was recently reported for distorted wave collision strength calculations and applied to Fe XIII. The general method is suitable for the computation of collision strengths for complex ions and in some cases can then provide a basis for collision strength calculations in ions where ab initio computations break down or result in unnecessarily large errors. (author)

  6. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly...... over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter...... combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many-pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This...

  7. Comparison of (e,2e), photoelectron and conventional spectroscopies for the Ar II ion

    International Nuclear Information System (INIS)

    McCarthy, I.E.; Uylings, P.; Poppe, R.

    1978-05-01

    States of the Ar II ion whose eigenvectors contain large components of single-hole configurations are observed in the (e,2e) and (γ,e) reactions on the Ar I atom. The cross section is regarded as being proportional to the spectroscopic factor, that is the state expectation value of the single-hole configuration in the eigenvector. State expectation values obtained from these reactions for 1/2+ states are compared with ones obtained by diagonalizing an effective Hamiltonian in a model space, with radial matrix elements determined by fitting spectra for bound states. (e,2e) and conventional spectroscopy are compatible and provide complementary information about structure. Simple analysis of the present (γ,e) data does not lead to compatible information on spectroscopic factors

  8. Optical spectra and lattice dynamics of molecular crystals

    CERN Document Server

    Zhizhin, GN

    1995-01-01

    The current volume is a single topic volume on the optical spectra and lattice dynamics of molecular crystals. The book is divided into two parts. Part I covers both the theoretical and experimental investigations of organic crystals. Part II deals with the investigation of the structure, phase transitions and reorientational motion of molecules in organic crystals. In addition, appendices are given which provide the parameters for the calculation of the lattice dynamics of molecular crystals, computational procedures for calculating the frequencies and eigenvectors of lattice modes, and the frequencies and eigenvectors of lattice modes for several organic crystals. Quite a large amount of Russian literature is cited, some of which has previously not been available to scientists in the West.

  9. Transfer matrix method for dynamics modeling and independent modal space vibration control design of linear hybrid multibody system

    Science.gov (United States)

    Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun

    2018-05-01

    In this paper, an efficient method of dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing the new body dynamics equation, augmented operator and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state space model expressed in each independent modal space is obtained easily. According to this dynamics model, a robust independent modal space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are presented; the results show that this method is computationally efficient and achieves very good control performance.

  10. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQᵀ. Matrices A, Q, and T and the approximate eigenvalue, say λ, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of λ and improve or compute the eigenvector x in O(n²) work, where n is the order of the matrix A
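    The iterative-improvement idea described in this abstract can be illustrated with a Newton-style correction of an approximate eigenpair. This is a generic sketch, not the SICEDR routine itself; the test matrix and the starting guesses are invented for illustration.

```python
import numpy as np

def refine_eigenpair(A, lam, x, iters=10):
    """Refine an approximate eigenpair (lam, x) of A via Newton corrections.

    Each step solves the bordered system
        [A - lam*I, -x; x^T, 0] [dx; dlam] = [-(A x - lam x); 0],
    an iterative-improvement scheme in the same spirit as the one the
    abstract describes (sketch only, not the published algorithm).
    """
    n = A.shape[0]
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        r = A @ x - lam * x                     # eigen-residual
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = A - lam * np.eye(n)
        M[:n, n] = -x
        M[n, :n] = x                            # keep the correction orthogonal to x
        rhs = np.concatenate([-r, [0.0]])
        sol = np.linalg.solve(M, rhs)
        x = x + sol[:n]
        lam = lam + sol[n]
        x = x / np.linalg.norm(x)
    return lam, x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam0, x0 = 4.5, np.array([1.0, 0.4])            # rough initial guesses
lam, x = refine_eigenpair(A, lam0, x0)
```

    Newton's method converges quadratically near a simple eigenvalue, so a handful of iterations recovers the eigenpair to machine-level accuracy.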

  11. An improved V-Lambda solution of the matrix Riccati equation

    Science.gov (United States)

    Bar-Itzhack, Itzhack Y.; Markley, F. Landis

    1988-01-01

    The authors present an improved algorithm for computing the V-Lambda solution of the matrix Riccati equation. The improvement, a reduction of the computational load, results from the orthogonality of the eigenvector matrix that has to be solved for. The orthogonality constraint reduces the number of independent parameters which define the matrix from n² to n(n - 1)/2. The authors show how to specify the parameters, how to solve for them, and how to form from them the needed eigenvector matrix. In the search for suitable parameters, the analogy between the present problem and the problem of attitude determination is exploited, resulting in the choice of Rodrigues parameters.

  12. Tailoring three-point functions and integrability IV. Θ-morphism

    International Nuclear Information System (INIS)

    Gromov, Nikolay; Vieira, Pedro

    2014-01-01

    We compute structure constants in N=4 SYM at one loop using Integrability. This requires having full control over the two loop eigenvectors of the dilatation operator for operators of arbitrary size. To achieve this, we develop an algebraic description called the Θ-morphism. In this approach we introduce impurities at each spin chain site, act with particular differential operators on the standard algebraic Bethe ansatz vectors and generate in this way higher loop eigenvectors. The final results for the structure constants take a surprisingly simple form, recently reported by us in the short note http://arxiv.org/abs/1202.4103. These are based on the tree level and one loop patterns together and also on some higher loop experiments involving simple operators

  13. Parameter estimation for an expanding universe

    Directory of Open Access Journals (Sweden)

    Jieci Wang

    2015-03-01

    Full Text Available We study the parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m̃ and dimensionless momentum k̃ of the Dirac particles. The optimal precision for the rate estimation peaks at some finite dimensionless mass m̃ and momentum k̃. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.

  14. Evaluation of the synchrotron closed orbit

    International Nuclear Information System (INIS)

    Bashmakov, Yu.A.; Karpov, V.A.

    1991-01-01

    The knowledge of the closed orbit position is an essential condition for the effective work of any accelerator. Therefore questions of its calculation, measurement and control have great importance. For example, during injection of particles into a synchrotron, the amplitudes of their betatron oscillations may become commensurable with the working region of the synchrotron. This draws attention to the problem of forming the optimum orbit with the use of correcting optical elements. In addition, it is often necessary to calculate such an orbit at the end of the acceleration cycle, when particles are deposited on internal targets or extracted from the synchrotron. In this paper, the computation of the closed orbit is reduced to the determination, at an arbitrarily chosen azimuth, of the eigenvector of the total transfer matrix of the synchrotron ring and to tracing the desired orbit with this vector. The eigenvector is found as a result of an iteration

  15. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
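    The upward bias of leading sample eigenvalues in the high-dimensional regime can be seen in a small simulation. The sizes, the single-spike covariance, and the bias formula quoted in the comment are illustrative assumptions from standard spiked-covariance asymptotics, not the paper's S-POET estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, spike = 200, 400, 5.0          # d > n: dimensionality exceeds sample size

# Population covariance: identity plus one spiked direction of variance `spike`.
Sigma_half = np.eye(d)
Sigma_half[0, 0] = np.sqrt(spike)

X = rng.standard_normal((n, d)) @ Sigma_half
S = X.T @ X / n                      # sample covariance (population mean is zero)
top_eval = float(np.linalg.eigvalsh(S)[-1])

# Random-matrix asymptotics predict that the top sample eigenvalue overshoots
# the population spike: roughly spike * (1 + gamma / (spike - 1)), gamma = d/n.
gamma = d / n
predicted = spike * (1 + gamma / (spike - 1))
```

    With gamma = 2 and a population spike of 5, the top empirical eigenvalue concentrates near 7.5, illustrating the bias that a correction such as the paper's shrinkage estimator is designed to remove.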

  16. Spectral properties of Google matrix of Wikipedia and other networks

    Science.gov (United States)

    Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.

    2013-05-01

    We study the properties of eigenvalues and eigenvectors of the Google matrix of the Wikipedia articles hyperlink network and other real networks. With the help of the Arnoldi method, we analyze the distribution of eigenvalues in the complex plane and show that eigenstates with significant eigenvalue modulus are located on well defined network communities. We also show that the correlator between PageRank and CheiRank vectors distinguishes different organizations of information flow on BBC and Le Monde web sites.
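    The PageRank vector discussed here is the leading eigenvector of the Google matrix. A minimal sketch on a hypothetical four-node network follows; the link structure and damping factor 0.85 are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Toy directed network: node -> list of link targets (invented for illustration).
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = 4
A = np.zeros((n, n))
for src, targets in links.items():
    for t in targets:
        A[t, src] = 1.0              # column src distributes to row t

# Column-stochastic matrix S: normalize columns; dangling columns -> uniform.
col_sums = A.sum(axis=0)
S = np.where(col_sums > 0, A / np.where(col_sums > 0, col_sums, 1), 1.0 / n)

alpha = 0.85
G = alpha * S + (1 - alpha) / n      # Google matrix (damping + teleportation)

# Power iteration converges to the PageRank vector (eigenvalue 1 eigenvector).
p = np.full(n, 1.0 / n)
for _ in range(200):
    p = G @ p
p /= p.sum()
```

    Node 2 collects links from three of the four nodes, so it ends up with the largest PageRank in this toy example.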

  17. The Topology of Symmetric Tensor Fields

    Science.gov (United States)

    Levin, Yingmei; Batra, Rajesh; Hesselink, Lambertus; Levy, Yuval

    1997-01-01

    Combinatorial topology, also known as "rubber sheet geometry", has extensive applications in geometry and analysis, many of which result from connections with the theory of differential equations. A link between topology and differential equations is vector fields. Recent developments in scientific visualization have shown that vector fields also play an important role in the analysis of second-order tensor fields. A second-order tensor field can be transformed into its eigensystem, namely, eigenvalues and their associated eigenvectors, without loss of information content. Eigenvectors behave in a similar fashion to ordinary vectors, with even simpler topological structures due to their sign indeterminacy. Incorporating information about eigenvectors and eigenvalues in a display technique known as hyperstreamlines reveals the structure of a tensor field. To simplify an often complex tensor field and to capture its important features, the tensor is decomposed into an isotropic tensor and a deviator. A tensor field and its deviator share the same set of eigenvectors, and therefore they have a similar topological structure. The deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Degenerate points are basic constituents of tensor fields. In 2-D tensor fields, there are only two types of degenerate points; while in 3-D, the degenerate points can be characterized in a Q'-R' plane. Compressible and incompressible flows share similar topological features due to the similarity of their deviators. In the case of the deformation tensor, the singularities of its deviator represent the area of the vortex core in the field. In turbulent flows, the similarities and differences of the topology of the deformation and the Reynolds stress tensors reveal that the basic eddy-viscosity assumptions have their validity in turbulence modeling under certain conditions.

  18. A pragmatic approach to including complex natural modes of vibration in aeroelastic analysis

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2015-09-01

    Full Text Available [Slide extract from the conference presentation by Louw van Zyl, International Aerospace Symposium of South Africa, 14 to 16 September 2015, Stellenbosch, South Africa.] The equations of motion are [M]{ẍ} + [C]{ẋ} + [K]{x} = {f}; the associated undamped eigenvalue problem is [M]{x}s² + [K]{x} = 0, its eigenvalues being related to the square of the angular frequencies in radians per second. The corresponding eigenvectors are real...

  19. Orbit control on SPEAR: a progress report

    International Nuclear Information System (INIS)

    Corbett, W.J.; Keeley, D.

    1994-01-01

    In this paper, we report work on initial studies of the global feedback system for SPEAR. In particular, we describe components of a comprehensive accelerator simulation program used to assess the performance of harmonic and eigenvector orbit control algorithms. This program has also been used to choose new bpm sites, and can simulate the interaction between global and local orbit feedback systems. A prototype on-line version used for SPEAR is discussed. copyright 1994 American Institute of Physics

  20. The detection of influential subsets in linear regression using an influence matrix

    OpenAIRE

    Peña, Daniel; Yohai, Víctor J.

    1991-01-01

    This paper presents a new method to identify influential subsets in linear regression problems. The procedure uses the eigenstructure of an influence matrix which is defined as the matrix of uncentered covariance of the effect on the whole data set of deleting each observation, normalized to include the univariate Cook's statistics in the diagonal. It is shown that points in an influential subset will appear with large weight in at least one of the eigenvectors linked to the largest eigenvalue...

  1. Stabilization and Riesz basis property for an overhead crane model with feedback in velocity and rotating velocity

    Directory of Open Access Journals (Sweden)

    Toure K. Augustin

    2014-06-01

    Full Text Available This paper studies a variant of an overhead crane model, with a control force in velocity and rotating velocity on the platform. We obtain, under certain conditions, the well-posedness and the strong stabilization of the closed-loop system. We then analyze the spectrum of the system. Using a method due to Shkalikov, we prove the existence of a sequence of generalized eigenvectors of the system which forms a Riesz basis for the state energy Hilbert space.

  2. An algorithm for the basis of the finite Fourier transform

    Science.gov (United States)

    Santhanam, Thalanayar S.

    1995-01-01

    The Finite Fourier Transformation matrix (F.F.T.) plays a central role in the formulation of quantum mechanics in a finite dimensional space, studied by the author over the past couple of decades. An outstanding problem which still remains open is to find a complete basis for the F.F.T. In this paper we suggest a simple algorithm to find the eigenvectors of the F.F.T.
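    A standard route to such eigenvectors (not necessarily the author's algorithm) uses the fact that the finite Fourier matrix F satisfies F⁴ = I, so its eigenvalues lie in {1, i, −1, −i} and projection operators built from powers of F map any vector into an eigenspace. The dimension n = 8 and the probe vector below are arbitrary choices for illustration.

```python
import numpy as np

n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * j * k / n) / np.sqrt(n)   # unitary finite Fourier matrix

# F^4 = I, so P_m = (1/4) * sum_k (i^m)^(-k) F^k projects onto the
# eigenspace with eigenvalue i^m.
I = np.eye(n)
powers = [I, F, F @ F, F @ F @ F]

def project(v, m):
    lam = 1j ** m
    return sum((lam ** -k) * (powers[k] @ v) for k in range(4)) / 4

v = np.random.default_rng(1).standard_normal(n)   # arbitrary probe vector
eigvecs = [project(v, m) for m in range(4)]       # one (possibly zero) per eigenvalue
```

    The four projections sum back to the original vector, and each nonzero projection is an eigenvector of F; completing these to an explicit orthogonal basis is precisely the open problem the abstract mentions.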

  3. Hybrid numerical calculation method for bend waveguides

    OpenAIRE

    Garnier , Lucas; Saavedra , C.; Castro-Beltran , Rigoberto; Lucio , José Luis; Bêche , Bruno

    2017-01-01

    National audience; The knowledge of how light will behave in a waveguide with a radius of curvature becomes more and more important because of the development of integrated photonics, which includes ring micro-resonators, phasars, and other devices with a radius of curvature. This work presents a numerical calculation method to determine the eigenvalues and eigenvectors of curved waveguides. This method is a hybrid method which uses at first a conformal transformation of the complex plane gene...

  4. Parallel Symmetric Eigenvalue Problem Solvers

    Science.gov (United States)

    2015-05-01

    [Thesis extract] ...magnitude eigenvalues of a given matrix pencil (A,B) along with their associated eigenvectors. Computing the smallest eigenvalues is more difficult

  5. The Adf Insurgency Network in the Eastern Democratic Republic of the Congo: Spillover Effects Into Tanzania

    Science.gov (United States)

    2014-06-01

    [Thesis extract] ...beginning of the end for his regime. To continue to stay in power, Mobutu had to do something. He opted for a laissez-faire style of leadership. He allowed... Barahiyan (56.786). Figure 19. Sociogram of Eigenvector Centrality. C. Cohesive Subgroup Analysis, 1. Subgroups: the Girvan-Newman...

  6. Spectral analysis connected with suspension bridge systems

    Czech Academy of Sciences Publication Activity Database

    Malík, Josef

    2016-01-01

    Roč. 81, č. 1 (2016), s. 42-75 ISSN 0272-4960 R&D Projects: GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : suspension bridge * vertical and torsional oscillation * eigenvalue * eigenvector * flutter Subject RIV: JM - Building Engineering Impact factor: 0.945, year: 2016 http://imamat.oxfordjournals.org/content/early/2015/09/16/imamat.hxv027.short?rss=1

  7. Diffusion Forecasting Model with Basis Functions from QR-Decomposition

    Science.gov (United States)

    Harlim, John; Yang, Haizhao

    2017-12-01

    The diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing these basis functions is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and could be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm are shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting arguments are provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying the Nonlinear Laplacian Spectral Analysis on the measured Outgoing Longwave Radiation.
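    The orthonormalization step can be sketched with NumPy's Householder-based QR. The "diffusion matrix" here is a stand-in random symmetric matrix, and all sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_eig, n_cols = 200, 5, 10

# A symmetric matrix stands in for the (much larger) diffusion matrix.
M = rng.standard_normal((N, N))
D = (M + M.T) / 2

# Leading eigenvectors (in practice these would come from an iterative solver).
evals, evecs = np.linalg.eigh(D)
leading = evecs[:, -n_eig:]

# Orthonormalize [leading eigenvectors | selected columns of D] via QR;
# numpy.linalg.qr uses Householder reflections, as the abstract prescribes.
selected = D[:, :n_cols]
B = np.hstack([leading, selected])
Q, R = np.linalg.qr(B)               # reduced QR: Q has n_eig + n_cols columns
basis = Q
```

    Because QR preserves the span of leading column blocks, the new basis still contains the leading eigenvector subspace while adding the extra column directions.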

  8. Exact analysis of the spectral properties of the anisotropic two-bosons Rabi model

    OpenAIRE

    Cui, Shuai; Cao, Jun-Peng; Fan, Heng; Amico, Luigi

    2015-01-01

    We introduce the anisotropic two-photon Rabi model, in which the rotating and counter-rotating terms enter with two different coupling constants. Eigenvalues and eigenvectors are studied by exact means. We employ a variation of the Braak method based on a Bogolubov rotation of the underlying $su(1,1)$ Lie algebra. Accordingly, the spectrum is provided by the analytical properties of a suitable meromorphic function. Our formalism applies to the two-mode Rabi model as well, sharing the s...

  9. A unified definition of a vortex derived from vortical flow and the resulting pressure minimum

    Energy Technology Data Exchange (ETDEWEB)

    Nakayama, K [Department of Mechanical Engineering, Aichi Institute of Technology, Toyota, Aichi 470–0392 (Japan); Sugiyama, K; Takagi, S, E-mail: nakayama@aitech.ac.jp, E-mail: kazuyasu.sugiyama@riken.jp, E-mail: takagi@mech.t.u-tokyo.ac.jp [Department of Mechanical Engineering, School of Engineering, The University of Tokyo, Hongo, Tokyo 113–8656 (Japan)

    2014-10-01

    This paper presents a novel definition of a vortex that integrates the concepts of the invariant swirling motion, the pressure minimum characteristics induced by the swirling motion and the positive Laplacian of the pressure. The current definition specifies a vortex that has a swirling motion and resulting pressure minimum feature in the swirl plane, which is simply represented by the eigenvalues and eigenvectors of the velocity gradient tensor. (paper)
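    The eigenvalue-based identification can be illustrated with the closely related swirling-strength test: a complex-conjugate eigenpair of the velocity gradient tensor signals locally swirling streamlines. The example tensors are textbook cases (solid-body rotation versus pure shear), invented for illustration and not taken from the paper.

```python
import numpy as np

def has_swirl(grad_u, tol=1e-6):
    """True if the velocity gradient tensor has a complex-conjugate eigenpair,
    i.e. locally swirling streamlines (swirling-strength criterion, a relative
    of the eigenvalue/eigenvector definition discussed above)."""
    eig = np.linalg.eigvals(grad_u)
    return bool(np.max(np.abs(eig.imag)) > tol)

# Solid-body rotation in the x-y plane: u = (-omega*y, omega*x, 0).
omega = 2.0
rotation = np.array([[0.0, -omega, 0.0],
                     [omega, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])

# Pure shear: u = (gamma*y, 0, 0); nilpotent gradient, all eigenvalues real (zero).
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
```

    The rotation tensor has eigenvalues ±iω and 0, so it is flagged as a vortex candidate, while the shear tensor is not; the paper's definition additionally ties this swirl to the induced pressure minimum in the swirl plane.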

  10. Computation of diverging sums based on a finite number of terms

    Science.gov (United States)

    Lv, Q. Z.; Norris, S.; Pelphrey, R.; Su, Q.; Grobe, R.

    2017-10-01

    We propose a numerical method that permits us to compute the sum of a diverging series from only the first N terms by generalizing the traditional Borel technique. The method is rather robust and can be used to recover the ground state energy from the diverging perturbation theory for quantum field theoretical systems that are spatially constrained. Surprisingly, even the corresponding eigenvectors can be generated despite the intrinsic non-perturbative nature of bound state problems.

  11. Commutator perturbation method in the study of vibrational-rotational spectra of diatomic molecules

    International Nuclear Information System (INIS)

    Matamala-Vasquez, A.; Karwowski, J.

    2000-01-01

    The commutator perturbation method, an algebraic version of the Van Vleck-Primas perturbation method, expressed in terms of ladder operators, has been applied to solving the eigenvalue problem of the Hamiltonian describing the vibrational-rotational motion of a diatomic molecule. The physical model used in this work is based on Dunham's approach. The method facilitates obtaining both energies and eigenvectors in an algebraic way

  12. Matrix theory selected topics and useful results

    CERN Document Server

    Mehta, Madan Lal

    1989-01-01

    Matrices and operations on matrices ; determinants ; elementary operations on matrices (continued) ; eigenvalues and eigenvectors, diagonalization of normal matrices ; functions of a matrix ; positive definiteness, various polar forms of a matrix ; special matrices ; matrices with quaternion elements ; inequalities ; generalised inverse of a matrix ; domain of values of a matrix, location and dispersion of eigenvalues ; symmetric functions ; integration over matrix variables ; permanents of doubly stochastic matrices ; infinite matrices ; Alexander matrices, knot polynomials, torsion numbers.

  13. Targeting functional motifs of a protein family

    Science.gov (United States)

    Bhadola, Pradeep; Deo, Nivedita

    2016-10-01

    The structural organization of a protein family is investigated by devising a method based on random matrix theory (RMT), which uses the physiochemical properties of the amino acids with multiple sequence alignment. A graphical method to represent protein sequences using physiochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, where noise reduction and information filtering are done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlations (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations, while the long-range correlations are the same as those of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT observations and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. These small eigenvectors, when processed, give clusters involving positions of well-defined biological and structural importance, matching experiments. The approach is crucial for the recognition of structural motifs, as shown for β-lactamase (and other families), and selectively identifies important positions as targets for deactivating (activating) the enzymatic actions.
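    An entropic estimate of eigenvector localization can be sketched as a Shannon entropy over squared components. This is a common localization measure and a plausible reading of the abstract; the exact estimator used in the paper may differ, and the two test vectors are synthetic extremes.

```python
import numpy as np

def eigvec_entropy(v):
    """Shannon entropy of an eigenvector's squared components.

    Low entropy means the weight sits on few positions (localized mode);
    the maximum, log(n), is reached by a uniformly extended mode.
    """
    p = np.abs(v) ** 2
    p = p / p.sum()                  # normalize to a probability distribution
    p = p[p > 0]                     # 0*log(0) := 0
    return float(-(p * np.log(p)).sum())

n = 64
localized = np.zeros(n)
localized[3] = 1.0                   # all weight on a single position
extended = np.full(n, 1.0 / np.sqrt(n))   # uniform weight on every position
```

    Positions carrying large weight in a low-entropy eigenvector are exactly the candidates the abstract proposes as functionally important sites.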

  14. Algorithms for orbit control on SPEAR

    International Nuclear Information System (INIS)

    Corbett, J.; Keeley, D.; Hettel, R.; Linscott, I.; Sebek, J.

    1994-06-01

    A global orbit feedback system has been installed on SPEAR to help stabilize the position of the photon beams. The orbit control algorithms depend on either harmonic reconstruction of the orbit or eigenvector decomposition. The orbit motion is corrected by dipole corrector kicks determined from the inverse corrector-to-bpm response matrix. This paper outlines features of these control algorithms as applied to SPEAR
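    The corrector-kick computation from the inverse corrector-to-bpm response matrix can be sketched with an SVD pseudo-inverse, which is the standard least-squares route. The response matrix and orbit below are random placeholders, not SPEAR data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bpm, n_corr = 12, 6

R = rng.standard_normal((n_bpm, n_corr))   # corrector-to-bpm response matrix
x = rng.standard_normal(n_bpm)             # measured orbit distortion at the bpms

# Corrector kicks from the SVD-based pseudo-inverse: least-squares correction
# that minimizes the residual orbit ||x + R @ kicks||.
kicks = -np.linalg.pinv(R) @ x
residual = x + R @ kicks
```

    With more monitors than correctors the orbit cannot be zeroed exactly; the pseudo-inverse leaves the residual orthogonal to everything the correctors can reach, which is the least-squares optimum.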

  15. 16TH Annual Review of Progress in Applied Computational Electromagnetics at the Naval Postgraduate School Monterey, CA, March 20-24, 2000, Volume II

    Science.gov (United States)

    2000-03-24

    This simplification neglects the transport of heat radiation from one object to another and accounts only for the emission of heat from a surface. [Equations garbled in extraction: they define radiosity relations through the absorption and emission coefficients of each surface with a T⁴ dependence, and express the general solution as a particular solution plus a multiple of h₀, the nonzero eigenvector corresponding to the zero eigenvalue, together with an arbitrary nonzero vector in the complementary space.]

  16. Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media

    Science.gov (United States)

    Stamnes, Knut; Tsay, S.-CHEE; Jayaweera, Kolf; Wiscombe, Warren

    1988-01-01

    The transfer of monochromatic radiation in a scattering, absorbing, and emitting plane-parallel medium with a specified bidirectional reflectivity at the lower boundary is considered. The equations and boundary conditions are summarized. The numerical implementation of the theory is discussed with attention given to the reliable and efficient computation of eigenvalues and eigenvectors. Ways of avoiding fatal overflows and ill-conditioning in the matrix inversion needed to determine the integration constants are also presented.

  17. Low-frequency electromagnetic field in a Wigner crystal

    OpenAIRE

    Stupka, Anton

    2016-01-01

    Long-wave low-frequency oscillations are described in a Wigner crystal by generalization of the reverse continuum model for the case of electronic lattice. The internal self-consistent long-wave electromagnetic field is used to describe the collective motions in the system. The eigenvectors and eigenvalues of the obtained system of equations are derived. The velocities of longitudinal and transversal sound waves are found.

  18. Quantum interference vs. quantum chaos in the nuclear shell model

    International Nuclear Information System (INIS)

    Fernández, Gerardo; Hautefeuille, M; Velázquez, V; Hernández, Edna M; Landa, E; Morales, I O; Frank, A; Fossion, R; Vargas, C E

    2015-01-01

    In this paper we study the complexity of the nuclear states in terms of a two-body quadrupole-quadrupole interaction. Energy distributions and eigenvector composition exhibit a visible interference pattern which is dependent on the intensity of the interaction. In analogy with optics, the visibility of the interference is related to the purity of the states; therefore, we show that the fluctuations associated with quantum chaos have as their origin the remaining quantum coherence, with a visibility magnitude close to 5%

  19. Generalized Perron--Frobenius Theorem for Nonsquare Matrices

    OpenAIRE

    Avin, Chen; Borokhovich, Michael; Haddad, Yoram; Kantor, Erez; Lotker, Zvi; Parter, Merav; Peleg, David

    2013-01-01

    The celebrated Perron--Frobenius (PF) theorem is stated for irreducible nonnegative square matrices, and provides a simple characterization of their eigenvectors and eigenvalues. The importance of this theorem stems from the fact that eigenvalue problems on such matrices arise in many fields of science and engineering, including dynamical systems theory, economics, statistics and optimization. However, many real-life scenarios give rise to nonsquare matrices. A natural question is whether the...
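    For the square case that the classical theorem covers, the positive Perron eigenvector can be computed by power iteration. The 2×2 matrix below is a toy irreducible nonnegative example invented for illustration.

```python
import numpy as np

def perron_vector(A, iters=500):
    """Leading (Perron) eigenpair of an irreducible nonnegative square matrix
    via power iteration; PF theory guarantees a simple largest eigenvalue
    with a strictly positive eigenvector."""
    v = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= v.sum()                 # normalize to keep entries bounded
    lam = (A @ v)[0] / v[0]          # Rayleigh-style eigenvalue estimate
    return lam, v

A = np.array([[0.0, 2.0],
              [3.0, 1.0]])           # irreducible, nonnegative
lam, v = perron_vector(A)
```

    Here the eigenvalues are 3 and −2, so the iteration converges geometrically (ratio 2/3) to the Perron pair λ = 3, v ∝ (0.4, 0.6); the nonsquare generalization the abstract asks about needs a different notion of eigenpair.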

  20. Nonlinear signaling on biological networks: The role of stochasticity and spectral clustering

    Science.gov (United States)

    Hernandez-Hernandez, Gonzalo; Myers, Jesse; Alvarez-Lacalle, Enrique; Shiferaw, Yohannes

    2017-03-01

    Signal transduction within biological cells is governed by networks of interacting proteins. Communication between these proteins is mediated by signaling molecules which bind to receptors and induce stochastic transitions between different conformational states. Signaling is typically a cooperative process which requires the occurrence of multiple binding events so that reaction rates have a nonlinear dependence on the amount of signaling molecule. It is this nonlinearity that endows biological signaling networks with robust switchlike properties which are critical to their biological function. In this study we investigate how the properties of these signaling systems depend on the network architecture. Our main result is that these nonlinear networks exhibit bistability where the network activity can switch between states that correspond to a low and high activity level. We show that this bistable regime emerges at a critical coupling strength that is determined by the spectral structure of the network. In particular, the set of nodes that correspond to large components of the leading eigenvector of the adjacency matrix determines the onset of bistability. Above this transition the eigenvectors of the adjacency matrix determine a hierarchy of clusters, defined by its spectral properties, which are activated sequentially with increasing network activity. We argue further that the onset of bistability occurs either continuously or discontinuously depending upon whether the leading eigenvector is localized or delocalized. Finally, we show that at low network coupling stochastic transitions to the active branch are also driven by the set of nodes that contribute more strongly to the leading eigenvector. However, at high coupling, transitions are insensitive to network structure since the network can be activated by stochastic transitions of a few nodes. Thus this work identifies important features of biological signaling networks that may underlie their biological
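    The role of the leading adjacency eigenvector can be sketched on a toy graph: a dense cluster weakly bridged to a sparse chain. The leading eigenvector localizes on the dense cluster, the set of nodes the abstract identifies as controlling the onset of bistability. The graph and edge weights are invented for illustration.

```python
import numpy as np

n1, n2 = 5, 20
n = n1 + n2
A = np.zeros((n, n))

A[:n1, :n1] = 1.0                        # dense cluster: complete graph K5
np.fill_diagonal(A[:n1, :n1], 0.0)
for i in range(n1, n - 1):               # sparse periphery: a simple chain
    A[i, i + 1] = A[i + 1, i] = 1.0
A[0, n1] = A[n1, 0] = 0.1                # weak bridge between the two parts

evals, evecs = np.linalg.eigh(A)
lead = np.abs(evecs[:, -1])              # leading eigenvector (largest eigenvalue)

# Fraction of the eigenvector's weight carried by the dense cluster's nodes.
cluster_weight = float((lead[:n1] ** 2).sum())
```

    The cluster's spectral radius (≈4 for K5) dominates the chain's (<2), so nearly all of the leading eigenvector's weight sits on the five cluster nodes, i.e. the eigenvector is localized.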

  1. Energy levels of germanium, Ge I through Ge XXXII

    International Nuclear Information System (INIS)

    Sugar, J.; Musgrove, A.

    1993-01-01

    Atomic energy levels of germanium have been compiled for all stages of ionization for which experimental data are available. No data have yet been published for Ge VIII through Ge XIII and Ge XXXII. Very accurate calculated values are compiled for Ge XXXI and XXXII. Experimental g-factors and leading percentages from calculated eigenvectors of levels are given. A value for the ionization energy, either experimental when available or theoretical, is included for the neutral atom and each ion.

  2. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
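
    The signal-to-noise eigenvector basis described above solves the generalised eigenproblem S v = λ N v for the signal and noise covariances. A minimal numpy sketch with synthetic covariances (stand-ins for the WMAP S and N, which are not reproduced here): whiten by N^{-1/2} and keep only modes with appreciable signal-to-noise.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 40  # stand-in for the number of unmasked map pixels

# Synthetic stand-ins for the signal covariance S and noise covariance N.
L = rng.normal(size=(npix, npix))
S = (L @ L.T) / npix
N = np.diag(rng.uniform(0.5, 2.0, npix))  # uncorrelated pixel noise

# Signal-to-noise eigenmodes solve S v = lam N v; whiten with N^{-1/2}
# (diagonal here) and diagonalise the whitened signal covariance.
Nm12 = np.diag(1.0 / np.sqrt(np.diag(N)))
lam, U = np.linalg.eigh(Nm12 @ S @ Nm12)   # ascending eigenvalues
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]

keep = lam > 0.5            # retain only modes with appreciable signal-to-noise
B = (Nm12 @ U)[:, keep]     # compression matrix: kept generalised eigenvectors
d = rng.normal(size=npix)   # a map to compress
c = B.T @ d                 # compressed data vector
```

    The kept columns of B satisfy S b = λ N b exactly, and the threshold on λ plays the role of the mode cut that reduced 6836 pixels to 3102 modes in the analysis above.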

  3. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
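
    A minimal sketch of kernel PCA with a Gaussian kernel, on synthetic two-ring data rather than the bi-temporal imagery of the study: eigendecompose the double-centred kernel matrix and project onto the leading modes. This is the standard construction; the study's own implementation details are not reproduced here.

```python
import numpy as np

def gaussian_kernel_pca(X, sigma=1.0, n_components=2):
    """Kernel PCA: eigendecompose the double-centred Gaussian kernel matrix
    and project the training points onto the leading kernel modes."""
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared distances
    K = np.exp(-D2 / (2.0 * sigma**2))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                         # centring in feature space
    lam, V = np.linalg.eigh(Kc)            # ascending eigenvalues
    lam = lam[::-1][:n_components]
    V = V[:, ::-1][:, :n_components]
    alphas = V / np.sqrt(np.maximum(lam, 1e-12))
    return Kc @ alphas                     # scores of the training points

# Two concentric rings: linearly inseparable, a classic kernel-PCA test case.
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
Z = gaussian_kernel_pca(X, sigma=1.0, n_components=2)
```

    Ordinary PCA of X would simply rotate the plane; the nonlinearity enters only through the kernel, which is what lets kernel PCA isolate the change (or, here, ring membership) that a linear projection misses.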

  4. A unified development of several techniques for the representation of random vectors and data sets

    Science.gov (United States)

    Bundick, W. T.

    1973-01-01

    Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
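
    The common property the record describes can be checked directly: the minimum-MSE orthonormal basis is the eigenvector basis of the sample covariance, and the reconstruction error from keeping k eigenvectors equals the sum of the discarded eigenvalues. A short numerical sketch (synthetic data, not from the report):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))  # correlated sample vectors
X -= X.mean(axis=0)                                      # centre the data

C = X.T @ X / X.shape[0]          # sample covariance operator
lam, U = np.linalg.eigh(C)        # ascending eigenvalues, orthonormal columns
lam, U = lam[::-1], U[:, ::-1]    # sort descending

k = 3
P = U[:, :k]                      # top-k eigenvectors (Karhunen-Loeve modes)
Xhat = X @ P @ P.T                # best rank-k representation in the MSE sense
mse = np.mean(np.sum((X - Xhat) ** 2, axis=1))
# For the eigenvector basis, this MSE equals the sum of the discarded eigenvalues.
```

    The same computation, relabelled, is the Karhunen-Loeve expansion, principal component analysis, or an empirical orthogonal function decomposition, which is precisely the unification the record points to.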

  5. Lattice Paths and the Constant Term

    International Nuclear Information System (INIS)

    Brak, R; Essam, J; Osborn, J; Owczarek, A L; Rechnitzer, A

    2006-01-01

    We firstly review the constant term method (CTM), illustrating its combinatorial connections, and show how it can be used to solve a certain class of lattice path problems. We show the connection between the CTM, the transfer matrix method (eigenvectors and eigenvalues), partial difference equations, the Bethe Ansatz and orthogonal polynomials. Secondly, we solve a lattice path problem first posed in 1971. The model stated in 1971 was solved at the time only for a special case; here we solve the full model.
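
    The transfer matrix method mentioned above can be illustrated on the simplest such problem, which is not the 1971 model itself: counting ±1-step paths confined to a strip. The count is an entry of a power of the strip's adjacency (transfer) matrix, and in a wide strip it reduces to the Catalan numbers.

```python
def paths_in_strip(h, n):
    """Count ±1-step paths of length n that start and end at height 0 and
    stay within the strip 0..h: the (0,0) entry of the n-th power of the
    strip's transfer (adjacency) matrix. Pure-integer arithmetic, so exact."""
    size = h + 1
    T = [[1 if abs(i - j) == 1 else 0 for j in range(size)] for i in range(size)]
    P = [[1 if i == j else 0 for j in range(size)] for i in range(size)]
    for _ in range(n):
        P = [[sum(P[i][k] * T[k][j] for k in range(size)) for j in range(size)]
             for i in range(size)]
    return P[0][0]

# In a strip wide enough not to constrain the path, the count is Catalan:
# paths_in_strip(10, 6) -> 5 (= C_3), paths_in_strip(10, 8) -> 14 (= C_4).
```

    Diagonalising T (its eigenvalues and eigenvectors are known in closed form for a path graph) gives the same counts as a spectral sum, which is the bridge to the CTM and Bethe Ansatz connections the abstract describes.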

  6. The 'carry-over' effects of patient self-testing: positive effects on usual care management by an anticoagulation management service.

    LENUS (Irish Health Repository)

    Ryan, Fiona

    2010-11-01

    Patient self-testing (PST) of the international normalised ratio (INR) has a positive effect on anticoagulation control. This study investigated whether the benefits of PST (other than increased frequency of testing, e.g. patient education, empowerment, compliance etc.) could be 'carried-over' into usual care management after a period of home-testing has ceased.

  7. Development of a single nucleotide polymorphism (SNP) marker for ...

    African Journals Online (AJOL)

    The nature of the single nucleotide polymorphism (SNP) marker was validated by DNA sequencing of the parental PCR products. Using high resolution melt (HRM) profiles and normalised difference plots, we successfully differentiated the homozygous dominant (wild type), homozygous recessive (LPA) and heterozygous ...

  8. Improvement of the photon flux measurement at the BGO-OD experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kohl, Katrin [Physikalisches Institut, Universitaet Bonn (Germany); Collaboration: BGO-OD-Collaboration

    2016-07-01

    The BGO-OD experiment at the ELSA accelerator facility at Bonn investigates the internal reaction mechanisms of the nucleon, using an energy tagged bremsstrahlung photon beam. Absolute normalisation of the beam flux is required for cross section determination. In this talk the measurement principle is presented, and an improved method of the photon flux monitoring of the experiment is introduced.

  9. Interrupting the Symphony: Unpacking the Importance Placed on Classical Concert Experiences

    Science.gov (United States)

    Hess, Juliet

    2018-01-01

    The Toronto Symphony Orchestra presents a series of youth concerts each year to introduce and attract younger audiences to the symphony. Music teachers often attend these concerts with students, and the importance of such experiences is frequently emphasised and normalised. This article explores the historical roots of the following relations,…

  10. Rivervale Nursing Home, Old Birr Road, Rathnaleen, Tipperary.

    LENUS (Irish Health Repository)

    Ryan, Fiona

    2010-11-01


  11. Challenging 'normalcy': possibilities and pitfalls of paralympic bodies

    African Journals Online (AJOL)

    Drawing upon a Foucauldian conceptualisation of biopower in connection with Haraway's articulation of the cyborg, we highlight how hybrid bodies inevitably fail to promote embodied difference because they constitute, in and of themselves, a product of 'normalising' technology. In the light of critiques, such as that of the ...

  12. Otolith shape as a valuable tool to evaluate the stock structure of ...

    African Journals Online (AJOL)

    Swordfish Xiphias gladius is an oceanic-pelagic species. Its population structure in the Western Indian Ocean was studied from the shape of the sagittal otoliths of 391 individuals collected from 2009 to 2014. Normalised elliptical Fourier descriptors (EFDs) were extracted automatically using TNPC software. Principal ...

  13. q-Deformed Kink solutions

    International Nuclear Information System (INIS)

    Lima, A.F. de

    2003-01-01

    The q-deformed kink of the λφ⁴-model is obtained via the normalisable ground state eigenfunction of a fluctuation operator associated with the q-deformed hyperbolic functions. The kink mass, the bosonic zero-mode and the q-deformed potential in 1+1 dimensions are found. (author)
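
    For illustration, one common convention for the q-deformed hyperbolic functions (due to Arai; the paper's exact convention may differ) and the resulting schematic kink profile in dimensionless units. Under this convention the q-deformation of tanh is simply a translation, tanh_q(x) = tanh(x − ln(q)/2), so the q-deformed kink is a shifted standard kink.

```python
import math

# Arai-style q-deformed hyperbolic functions (an assumed convention).
def sinh_q(x, q): return (math.exp(x) - q * math.exp(-x)) / 2.0
def cosh_q(x, q): return (math.exp(x) + q * math.exp(-x)) / 2.0
def tanh_q(x, q): return sinh_q(x, q) / cosh_q(x, q)

# Schematic kink profile: the undeformed lambda*phi^4 kink is phi(x) = tanh(x);
# q-deformation translates it by ln(q)/2 in this convention.
def kink(x, q=2.0):
    return tanh_q(x, q)
```

    At q = 1 the ordinary hyperbolic functions, and hence the usual kink, are recovered.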

  14. Dark Matter: The "Gravitational Pull" of Maternalist Discourses on Politicians' Decision Making for Early Childhood Policy in Australia

    Science.gov (United States)

    Bown, Kathryn; Sumsion, Jennifer; Press, Frances

    2011-01-01

    The article reports on a study investigating influences on Australian politicians' decision making for early childhood education and care (ECEC) policy. The astronomical concept of dark matter is utilised as a metaphor for considering normalising, and therefore frequently difficult to detect and disrupt, influences implicated in politicians'…

  15. Behaviour of REEs in a tropical estuary and adjacent continental ...

    Indian Academy of Sciences (India)

    total organic carbon, U/Th ratio, authigenic U, Cu/Zn, V/Cr ratios revealed the oxic environment and thus the ...tions due to depletion by sorption onto particles. ... trace elements (Cr, Ni, Co, Zn) were analysed along ... Results. The concentration of REE and trace elements ... This effect causes a split of the normalised REE.

  16. Role of the pharmacist in delivering point-of-care therapy for ...

    African Journals Online (AJOL)

    The wide variation in biological effect, narrow therapeutic range and pharmacokinetic and pharmacodynamic characteristics of warfarin require monitoring of the international normalised ratio (INR). Point-of-care results that are readily accessible for interpretation, allows the pharmacist to make dose adjustments ...

  17. Impact of eye detection error on face recognition performance

    NARCIS (Netherlands)

    Dutta, A.; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face

  18. Nutritional support of children with chronic liver disease | Nel | South ...

    African Journals Online (AJOL)

    Diets are usually enriched with medium-chain fatty acids because of their better absorption in cholestatic liver disease. High-dose fat-soluble vitamin supplements are given while care is taken to avoid toxicity. Initial doses are two to three times the RDI and then adjusted according to serum levels or international normalised ...

  19. The effect of equiaxial stretching on the osteogenic differentiation and mechanical properties of human adipose stem cells

    DEFF Research Database (Denmark)

    Virjula, Sanni; Zhao, Feihu; Leivo, Joni

    2017-01-01

    , and the proliferation and alkaline phosphatase activity, as a sign of early osteogenic differentiation, were analysed on days 0, 6 and 10. Furthermore, the mechanical properties of hASCs, in terms of apparent Young's modulus and normalised contractility, were obtained using a combination of atomic force microscopy...

  20. CD4(+) memory T cells with high CD26 surface expression are enriched for Th1 markers and correlate with clinical severity of multiple sclerosis

    DEFF Research Database (Denmark)

    Krakauer, M; Sorensen, P S; Sellebjerg, F

    2006-01-01

    ) memory T lymphocytes contained high levels of markers of Th1, activation, and effector functions and cell counts of this subset correlated with MS disease severity. This subset had lower expression of PD-1, CCR4, and L-selectin in MS than in controls. These changes were only partially normalised...